diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl index 5594b707..94319430 100644 --- a/.beads/issues.jsonl +++ b/.beads/issues.jsonl @@ -1,527 +1,527 @@ -{"id":"bd-05a8","title":"Split large cmd/bd files: doctor.go (2948 lines), sync.go (2121 lines)","description":"Code health review found several oversized files:\n\n1. doctor.go - 2948 lines, 48 functions mixed together\n - Should split into doctor/checks/*.go for individual diagnostics\n - applyFixes() and previewFixes() are nearly identical\n\n2. sync.go - 2121 lines\n - ZFC (Zero Flush Check) logic embedded inline (lines 213-247)\n - Multiple mode handlers should be extracted\n\n3. init.go - 1732 lines\n4. compact.go - 1097 lines\n5. show.go - 1069 lines\n\nRecommendation: Extract into focused sub-packages or split into logical files.","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/lima","created_at":"2025-12-16T18:17:18.169927-08:00","updated_at":"2025-12-23T22:29:35.681167-08:00"} -{"id":"bd-06px","title":"bd sync --from-main fails: unknown flag --no-git-history","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-17T14:32:02.998106-08:00","updated_at":"2025-12-17T23:13:40.531756-08:00","closed_at":"2025-12-17T17:21:48.506039-08:00"} -{"id":"bd-077e","title":"Add close_reason field to CLI schema and documentation","description":"PR #551 persists close_reason, but the CLI documentation may not mention this field as part of the issue schema.\n\n## Current State\n- close_reason is now persisted in database\n- `bd show --json` will return close_reason in JSON output\n- Documentation may not reflect this new field\n\n## What's Missing\n- CLI reference documentation for close_reason field\n- Schema documentation showing close_reason is a top-level issue field\n- Example output showing close_reason in bd show --json\n- bd close command documentation should mention close_reason parameter is optional\n\n## Suggested Action\n1. 
Update README.md or CLI reference docs to list close_reason as an issue field\n2. Add example to bd close documentation\n3. Update any type definitions or schema specs\n4. Consider adding close_reason to verbose list output (bd list --verbose)","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-14T14:25:28.448654-08:00","updated_at":"2025-12-14T14:25:28.448654-08:00","dependencies":[{"issue_id":"bd-077e","depends_on_id":"bd-z86n","type":"discovered-from","created_at":"2025-12-14T14:25:28.449968-08:00","created_by":"stevey","metadata":"{}"}]} -{"id":"bd-0a43","title":"Split monolithic sqlite.go into focused files","description":"internal/storage/sqlite/sqlite.go is 1050 lines containing initialization, 20+ CRUD methods, query building, and schema management.\n\nSplit into:\n- store.go: Store struct \u0026 initialization (150 lines)\n- bead_queries.go: Bead CRUD (300 lines)\n- work_queries.go: Work queries (200 lines) \n- stats_queries.go: Statistics (150 lines)\n- schema.go: Schema \u0026 migrations (150 lines)\n- helpers.go: Common utilities (100 lines)\n\nImpact: Impossible to understand at a glance; hard to find specific functionality; high cognitive load\n\nEffort: 6-8 hours","status":"closed","priority":0,"issue_type":"task","created_at":"2025-11-16T14:51:16.520465-08:00","updated_at":"2025-12-17T23:13:40.533947-08:00","closed_at":"2025-12-17T16:51:30.236012-08:00"} -{"id":"bd-0d5p","title":"Fix TestRunSync_Timeout failing on macOS","description":"The hooks timeout test fails because exec.CommandContext doesn't properly terminate child processes of shell scripts on macOS. 
The test creates a hook that runs 'sleep 60' with a 500ms timeout, but it waits the full 60 seconds.\n\nOptions to fix:\n- Use SysProcAttr{Setpgid: true} to create process group and kill the group\n- Skip test on darwin with build tag\n- Use a different approach for timeout testing\n\nLocation: internal/hooks/hooks_test.go:220-253","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-16T20:52:51.771217-08:00","updated_at":"2025-12-17T23:13:40.532688-08:00","closed_at":"2025-12-17T17:23:55.678799-08:00"} -{"id":"bd-0fvq","title":"bd doctor should recommend bd prime migration for existing repos","description":"bd doctor should detect old beads integration patterns and recommend migrating to bd prime approach.\n\n## Current behavior\n- bd doctor checks if Claude hooks are installed globally\n- Doesn't check project-level integration (AGENTS.md, CLAUDE.md)\n- Doesn't recommend migration for repos using old patterns\n\n## Desired behavior\nbd doctor should detect and suggest:\n\n1. **Old slash command pattern detected**\n - Check for /beads:* references in AGENTS.md, CLAUDE.md\n - Suggest: These slash commands are deprecated, use bd prime hooks instead\n \n2. **No agent documentation**\n - Check if AGENTS.md or CLAUDE.md exists\n - Suggest: Run 'bd onboard' or 'bd setup claude' to document workflow\n \n3. **Old MCP-only pattern**\n - Check for instructions to use MCP tools but no bd prime hooks\n - Suggest: Add bd prime hooks for better token efficiency\n\n4. 
**Migration path**\n - Show: 'Run bd setup claude to add SessionStart/PreCompact hooks'\n - Show: 'Update AGENTS.md to reference bd prime instead of slash commands'\n\n## Example output\n\n⚠ Warning: Old beads integration detected in CLAUDE.md\n Found: /beads:* slash command references (deprecated)\n Recommend: Migrate to bd prime hooks for better token efficiency\n Fix: Run 'bd setup claude' and update CLAUDE.md\n\nπŸ’‘ Tip: bd prime + hooks reduces token usage by 80-99% vs slash commands\n MCP mode: ~50 tokens vs ~10.5k for full MCP scan\n CLI mode: ~1-2k tokens with automatic context recovery\n\n## Benefits\n- Helps existing repos adopt new best practices\n- Clear migration path for users\n- Better token efficiency messaging","status":"in_progress","priority":2,"issue_type":"feature","assignee":"beads/mike","created_at":"2025-11-12T03:20:25.567748-08:00","updated_at":"2025-12-23T22:29:35.695285-08:00"} -{"id":"bd-0j5y","title":"Merge: bd-05a8","description":"branch: polecat/valkyrie\ntarget: main\nsource_issue: bd-05a8\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:50:27.125378-08:00","updated_at":"2025-12-23T21:21:57.69697-08:00","closed_at":"2025-12-23T21:21:57.69697-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-0kai","title":"Work on beads-ocs: Thin shim hooks to eliminate version d...","description":"Work on beads-ocs: Thin shim hooks to eliminate version drift (GH#615). Replace full hook scripts with thin shims that call bd hooks run. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:57:22.91347-08:00","updated_at":"2025-12-20T00:49:51.926425-08:00","closed_at":"2025-12-19T23:24:08.828172-08:00","close_reason":"Implemented thin shim hooks to eliminate version drift (beads-ocs)"} -{"id":"bd-0oqz","title":"Add GetMoleculeProgress RPC endpoint","description":"New RPC endpoint to get detailed progress for a specific molecule. Returns: moleculeID, title, assignee, and list of steps with their status (done/current/ready/blocked), start/close times. Used when user expands a worker in the activity feed TUI.","status":"closed","priority":2,"issue_type":"feature","assignee":"beads/furiosa","created_at":"2025-12-23T16:26:38.137866-08:00","updated_at":"2025-12-23T18:27:49.033335-08:00","closed_at":"2025-12-23T18:27:49.033335-08:00","close_reason":"Implemented GetMoleculeProgress RPC endpoint"} -{"id":"bd-0vg","title":"Pinned issues: persistent context markers","description":"Add ability to pin issues so they remain visible and are excluded from work-finding commands. Pinned issues serve as persistent context markers (handoffs, architectural notes, recovery instructions) that should not be claimed as work items.\n\nUse Cases:\n1. Handoff messages - Pin session handoffs so new agents always see them\n2. Architecture decisions - Pin ADRs or design notes for reference \n3. 
Recovery context - Pin amnesia-cure notes that help agents orient\n\nCore commands: bd pin, bd unpin, bd list --pinned/--no-pinned","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-18T23:33:10.911092-08:00","updated_at":"2025-12-21T11:30:28.989696-08:00","closed_at":"2025-12-21T11:30:28.989696-08:00","close_reason":"All children complete - pinned issues feature fully implemented"} -{"id":"bd-0w5","title":"Fix update-hooks verification in version-bump.yaml","description":"The update-hooks task verification command at version-bump.yaml:358 always succeeds due to '|| echo ...' fallback. Remove the fallback so verification actually fails when hooks aren't installed.","status":"closed","priority":3,"issue_type":"bug","created_at":"2025-12-17T22:23:06.55467-08:00","updated_at":"2025-12-17T22:34:07.290409-08:00","closed_at":"2025-12-17T22:34:07.290409-08:00"} -{"id":"bd-0zp7","title":"Add missing hook calls in mail reply and ack","description":"The mail commands are missing hook calls:\n\n1. runMailReply (mail.go:525-672) creates a message but doesn't call hookRunner.Run(hooks.EventMessage, ...) after creating the reply in direct mode (around line 640)\n\n2. runMailAck (mail.go:432-523) closes messages but doesn't call hookRunner.Run(hooks.EventClose, ...) 
after closing each message (around line 487 for daemon mode, 493 for direct mode)\n\nThis means GGT hooks won't fire for replies or message acknowledgments.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-16T20:52:53.069412-08:00","updated_at":"2025-12-17T23:13:40.532054-08:00","closed_at":"2025-12-17T17:22:59.368024-08:00"} -{"id":"bd-118d","title":"Commit release v0.33.2","description":"Stage and commit the version bump:\n\n```bash\ngit add cmd/bd/version.go cmd/bd/info.go CHANGELOG.md\ngit commit -m \"release: v0.33.2\"\n```\n\nDo NOT push yet - tag first.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761725-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Release committed","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-14ie","title":"Work on beads-2vn: Add simple built-in beads viewer (GH#6...","description":"Work on beads-2vn: Add simple built-in beads viewer (GH#654). Add bd list --pretty with --watch flag, tree view with priority/status symbols. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:56:47.305831-08:00","updated_at":"2025-12-19T23:28:32.429492-08:00","closed_at":"2025-12-19T23:23:13.928323-08:00","close_reason":"Implemented --pretty flag with tree view and symbols. Tests pass."} -{"id":"bd-1slh","title":"Investigate charmbracelet-based TUI for beads","description":"Now that we've merged the create-form command (PR #603) which uses charmbracelet/huh, investigate whether beads should have a more comprehensive TUI.\n\nConsiderations:\n- Should this be in core or a separate binary (bd-tui)?\n- What functionality would benefit from a TUI? 
(list view, issue details, search, bulk operations)\n- Plugin/extension architecture vs build tags vs separate binary\n- Dependency cost vs user experience tradeoff\n- Target audience: humans who want interactive workflows vs CLI/scripting users\n\nRelated: PR #603 added charmbracelet/huh dependency for create-form command.","notes":"Foundation is in place (lipgloss, huh), but not a priority right now","status":"deferred","priority":3,"issue_type":"feature","created_at":"2025-12-17T14:20:51.503563-08:00","updated_at":"2025-12-20T23:31:34.354023-08:00"} -{"id":"bd-1tw","title":"Fix G104 errors unhandled in internal/storage/sqlite/queries.go:1186","description":"Linting issue: G104: Errors unhandled (gosec) at internal/storage/sqlite/queries.go:1186:2. Error: rows.Close()","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:13.051671889-07:00","updated_at":"2025-12-17T23:13:40.53486-08:00","closed_at":"2025-12-17T16:46:11.0289-08:00"} -{"id":"bd-20j","title":"sync branch not match config","description":"./bd sync\nβ†’ Exporting pending changes to JSONL...\nβ†’ No changes to commit\nβ†’ Pulling from sync branch 'gh-386'...\nError pulling from sync branch: failed to create worktree: failed to create worktree parent directory: mkdir /var/home/matt/dev/beads/worktree-db-fail/.git: not a directory\nmatt@blufin-framation ~/d/b/worktree-db-fail (worktree-db-fail) [1]\u003e bd config list\n\nConfiguration:\n auto_compact_enabled = false\n compact_batch_size = 50\n compact_model = claude-3-5-haiku-20241022\n compact_parallel_workers = 5\n compact_tier1_days = 30\n compact_tier1_dep_levels = 2\n compact_tier2_commits = 100\n compact_tier2_days = 90\n compact_tier2_dep_levels = 5\n compaction_enabled = false\n issue_prefix = worktree-db-fail\n sync.branch = worktree-db-fail","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-08T06:49:04.449094018-07:00","updated_at":"2025-12-08T06:49:04.449094018-07:00"} 
-{"id":"bd-23z9","title":"Upgrade beads-mcp to 0.33.2","description":"Upgrade the MCP server via pip:\n\n```bash\npip install --upgrade beads-mcp\npip show beads-mcp | grep Version # Verify 0.33.2\n```\n\nNote: Restart Claude Code or MCP session to use new version.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761057-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"beads-mcp not installed locally, PyPI updated separately","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-28db","title":"Add 'bd status' command for issue database overview","description":"Implement a bd status command that provides a quick snapshot of the issue database state, similar to how git status shows working tree state.\n\nExpected output: Show summary including counts by state (open, in-progress, blocked, closed), recent activity (last 7 days), and quick overview without needing multiple queries.\n\nExample output showing issue counts, recent activity stats, and pointer to bd list for details.\n\nProposed options: --all (show all issues), --assigned (show issues assigned to current user), --json (JSON format output)\n\nUse cases: Quick project health check, onboarding for new contributors, integration with shell prompts or CI/CD, daily standup reference","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-02T17:25:59.203549-08:00","updated_at":"2025-12-21T17:54:00.205191-08:00","closed_at":"2025-12-21T17:54:00.205191-08:00","close_reason":"Already implemented - bd status shows summary and activity"} -{"id":"bd-29fb","title":"Implement bd close --continue flag","description":"Auto-advance to next step in molecule when closing an issue. Referenced by gt-um6q, gt-lz13. 
Needed for molecule navigation workflow.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-23T00:17:55.032875-08:00","updated_at":"2025-12-23T01:26:47.255313-08:00","closed_at":"2025-12-23T01:26:47.255313-08:00","close_reason":"Already implemented: --continue flag auto-advances to next step in molecule, --no-auto prevents auto-claiming"} -{"id":"bd-2ep8","title":"Update CHANGELOG.md with release notes","description":"Add meaningful release notes to CHANGELOG.md describing what changed in 0.30.7","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:48.649053-08:00","updated_at":"2025-12-19T22:57:31.69559-08:00","closed_at":"2025-12-19T22:57:31.69559-08:00","dependencies":[{"issue_id":"bd-2ep8","depends_on_id":"bd-8pyn","type":"parent-child","created_at":"2025-12-19T22:56:48.650816-08:00","created_by":"stevey"},{"issue_id":"bd-2ep8","depends_on_id":"bd-rupw","type":"blocks","created_at":"2025-12-19T22:56:48.651136-08:00","created_by":"stevey"}]} -{"id":"bd-2l03","title":"Implement await type handlers (gh:run, gh:pr, timer, human, mail)","description":"Implement condition checking for each await type.\n\n## Handlers Needed\n- gh:run:\u003cid\u003e - Check GitHub Actions run status via gh CLI\n- gh:pr:\u003cid\u003e - Check PR merged/closed status via gh CLI \n- timer:\u003cduration\u003e - Simple elapsed time check\n- human:\u003cprompt\u003e - Check for human approval (via mail?)\n- mail:\u003cpattern\u003e - Check for mail matching pattern\n\n## Implementation Location\nThis is Deacon logic, so likely in Gas Town (gt) not beads.\n\n## Interface\n```go\ntype AwaitHandler interface {\n Check(awaitID string) (completed bool, result string, err error)\n}\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:38.492837-08:00","updated_at":"2025-12-23T12:19:44.283318-08:00","closed_at":"2025-12-23T12:19:44.283318-08:00","close_reason":"Moved to gastown: 
gt-ng6g","dependencies":[{"issue_id":"bd-2l03","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.990746-08:00","created_by":"daemon"},{"issue_id":"bd-2l03","depends_on_id":"bd-is6m","type":"blocks","created_at":"2025-12-23T11:44:56.510792-08:00","created_by":"daemon"}]} -{"id":"bd-2oo","title":"Edge Schema Consolidation: Unify all edges in dependencies table","description":"Consolidate all edge types into the dependency table per decision 004.\n\n## Changes\n- Add metadata column to dependencies table\n- Add thread_id column for conversation grouping\n- Remove redundant Issue fields: replies_to, relates_to, duplicate_of, superseded_by\n- Update all code to use dependencies API\n- Migration script for existing data\n- JSONL format change (breaking)\n\nReference: ~/gt/hop/decisions/004-edge-schema-consolidation.md","status":"closed","priority":0,"issue_type":"epic","created_at":"2025-12-18T02:01:48.785558-08:00","updated_at":"2025-12-18T02:49:10.61237-08:00","closed_at":"2025-12-18T02:49:10.61237-08:00","close_reason":"Phase 4 complete: all edge fields removed, dependencies API used exclusively"} -{"id":"bd-2oo.1","title":"Add metadata and thread_id columns to dependencies table","description":"Schema changes:\n- ALTER TABLE dependencies ADD COLUMN metadata TEXT DEFAULT '{}'\n- ALTER TABLE dependencies ADD COLUMN thread_id TEXT DEFAULT ''\n- CREATE INDEX idx_dependencies_thread ON dependencies(thread_id) WHERE thread_id != ''","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:00.468223-08:00","updated_at":"2025-12-18T02:49:10.575133-08:00","closed_at":"2025-12-18T02:49:10.575133-08:00","close_reason":"Phase 4 complete: all edge fields removed, dependencies API used exclusively","dependencies":[{"issue_id":"bd-2oo.1","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:00.470012-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-2oo.2","title":"Remove redundant edge fields 
from Issue struct","description":"Remove from Issue struct:\n- RepliesTo -\u003e dependency with type replies-to\n- RelatesTo -\u003e dependencies with type relates-to \n- DuplicateOf -\u003e dependency with type duplicates\n- SupersededBy -\u003e dependency with type supersedes\n\nKeep: Sender, Ephemeral (these are attributes, not relationships)","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:00.891206-08:00","updated_at":"2025-12-18T02:49:10.584381-08:00","closed_at":"2025-12-18T02:49:10.584381-08:00","close_reason":"Phase 4 complete: all edge fields removed, dependencies API used exclusively","dependencies":[{"issue_id":"bd-2oo.2","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:00.891655-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-2oo.3","title":"Update all code to use dependencies API for edges","description":"Find and update all code that reads/writes:\n- replies_to field -\u003e use dependency API\n- relates_to field -\u003e use dependency API\n- duplicate_of field -\u003e use dependency API\n- superseded_by field -\u003e use dependency API\n\nCommands affected: bd mail, bd relate, bd duplicate, bd supersede, bd show, etc.","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:01.317006-08:00","updated_at":"2025-12-18T02:49:10.59233-08:00","closed_at":"2025-12-18T02:49:10.59233-08:00","close_reason":"Phase 4 complete: all edge fields removed, dependencies API used exclusively","dependencies":[{"issue_id":"bd-2oo.3","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:01.31856-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-2oo.4","title":"Create migration script for edge field to dependency conversion","description":"Migration must:\n1. Read existing JSONL with old fields\n2. Convert field values to dependency records\n3. Write updated JSONL without old fields\n4. 
Handle edge cases (missing refs, duplicates)\n\nRun via: bd migrate or automatic on bd prime","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:01.760277-08:00","updated_at":"2025-12-18T02:49:10.602446-08:00","closed_at":"2025-12-18T02:49:10.602446-08:00","close_reason":"Phase 4 complete: all edge fields removed, dependencies API used exclusively","dependencies":[{"issue_id":"bd-2oo.4","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:01.760694-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-2q6d","title":"Beads commands operate on stale database without warning","description":"All beads read operations should validate database is in sync with JSONL before proceeding.\n\n**Current Behavior:**\n- Commands can query/read from stale database\n- Only mutation operations (like 'bd sync') check if JSONL is newer\n- User gets incorrect results without realizing database is out of sync\n\n**Expected Behavior:**\n- All beads commands should have pre-flight check for database freshness\n- If JSONL is newer than database, refuse to operate with error: \"Database out of sync. 
Run 'bd import' first.\"\n- Same safety check that exists for 'bd sync' should apply to ALL operations\n\n**Impact:**\n- Users make decisions based on incomplete/outdated data\n- Silent failures lead to confusion (e.g., thinking issues don't exist when they do)\n- Similar to running git commands on stale repo without being warned to pull\n\n**Example:**\n- Searched for bd-g9eu issue file: not found\n- Issue exists in .beads/issues.jsonl (in git)\n- Database was stale, but no warning was given\n- Led to incorrect conclusion that issue was already closed/deleted","notes":"## Implementation Complete\n\n**Phase 1: Created staleness check (cmd/bd/staleness.go)**\n- ensureDatabaseFresh() function checks JSONL mtime vs last_import_time\n- Returns error with helpful message when database is stale\n- Auto-skips in daemon mode (daemon has auto-import)\n\n**Phase 2: Added to all read commands**\n- list, show, ready, status, stale, info, duplicates, validate\n- Check runs before database queries in direct mode\n- Daemon mode already protected via checkAndAutoImportIfStale()\n\n**Phase 3: Code Review Findings**\nSee follow-up issues:\n- bd-XXXX: Add warning when staleness check errors\n- bd-YYYY: Improve CheckStaleness error handling\n- bd-ZZZZ: Refactor redundant daemon checks (low priority)\n\n**Testing:**\n- Build successful: go build ./cmd/bd\n- Binary works: ./bd --version\n- Ready for manual testing\n\n**Next Steps:**\n1. Test with stale database scenario\n2. Implement review improvements\n3. 
Close issue when tests pass","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-20T19:33:40.019297-05:00","updated_at":"2025-12-17T23:13:40.535149-08:00","closed_at":"2025-12-17T19:11:12.982639-08:00"} -{"id":"bd-2v0f","title":"Add gate issue type to beads","description":"Add 'gate' as a new issue type for async coordination.\n\n## Changes Needed\n- Add 'gate' to IssueType enum in internal/types/types.go\n- Update validation to accept gate type\n- Update CLI help text and completion\n\n## Gate Type Semantics\n- Gates are ephemeral (live in wisp storage)\n- Managed by Deacon patrol\n- Have special fields: await_type, await_id, timeout, waiters[]","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T11:44:31.331897-08:00","updated_at":"2025-12-23T11:47:06.287781-08:00","closed_at":"2025-12-23T11:47:06.287781-08:00","close_reason":"Added TypeGate constant and IsValid() validation. Updated CLI help text in create.go, list.go, show.go, search.go, count.go, export.go.","dependencies":[{"issue_id":"bd-2v0f","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.659005-08:00","created_by":"daemon"}]} -{"id":"bd-2vh3","title":"Ephemeral issue cleanup and history compaction","description":"## Problem\n\nBeads history grows without bound. Every message, handoff, work assignment\nstays in issues.jsonl forever. 
Enterprise users will balk at \"git as database.\"\n\n## Solution: Two-Tier Cleanup\n\n### Tier 1: Ephemeral Cleanup (v1)\n\nbd cleanup --ephemeral --closed\n\n- Deletes closed issues where ephemeral=true from issues.jsonl\n- Safe: only removes explicitly marked ephemeral + closed\n- Preserves git history (commits still exist)\n- Run after swarm completion\n\n### Tier 2: History Compaction (v2)\n\nbd compact --squash\n\n- Rewrites issues.jsonl to remove tombstones\n- Optionally squashes git history (interactive rebase equivalent)\n- Preserves Merkle proofs for deleted items\n- Advanced: cold storage tiering\n\n## HOP Context\n\n| Layer | HOP Role | Persistence |\n|-------|----------|-------------|\n| Execution trace | None | Ephemeral |\n| Work scaffolding | None | Summarizable |\n| Work outcome | CV entry | Permanent |\n| Validation record | Stake proof | Permanent |\n\n\"Execution is ephemeral. Outcomes are permanent. You can't squash your CV.\"\n\n## Success Criteria\n\n- After cleanup --ephemeral: issues.jsonl only contains persistent work\n- Work outcomes preserved (CV entries)\n- Validation records preserved (stake proofs)\n- Execution scaffolding removed (transient coordination)","notes":"## Implementation Plan (REVISED after code review)\n\nSee history/EPHEMERAL_MOLECULES_DESIGN.md for comprehensive design + review.\n\n## Key Simplification\n\nAfter code review, Tier 1 is MUCH simpler than originally designed:\n\n- **Original**: Separate ephemeral repo with routing.ephemeral config\n- **Revised**: Just set Wisp: true in cloneSubgraph()\n\nThe wisp field and bd cleanup --wisp already exist\\!\n\n## Child Tasks (in dependency order)\n\n1. **bd-2vh3.2**: Tier 1 - Ephemeral spawning (SIMPLIFIED) [READY]\n - Just add Wisp: true to template.go:474\n - Add --persistent flag to opt out\n2. **bd-2vh3.3**: Tier 2 - Basic bd mol squash command\n3. **bd-2vh3.4**: Tier 3 - AI-powered squash summarization\n4. **bd-2vh3.5**: Tier 4 - Auto-squash on molecule completion\n5. 
**bd-2vh3.6**: Tier 5 - JSONL archive rotation (DEFERRED: post-1.0)\n\n## What Already Exists\n\n| Component | Location |\n|-----------|----------|\n| Ephemeral field | internal/types/types.go:45 |\n| bd cleanup --wisp | cmd/bd/cleanup.go:72 |\n| cloneSubgraph() | cmd/bd/template.go:456 |\n| loadTemplateSubgraph() | cmd/bd/template.go |\n\n## HOP Alignment\n\n'Execution is ephemeral. Outcomes are permanent. You can't squash your CV.'","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T21:02:20.101367-08:00","updated_at":"2025-12-21T17:50:02.958155-08:00","closed_at":"2025-12-21T17:50:02.958155-08:00","close_reason":"Core feature complete: bd cleanup --wisp works, molecules marked as wisps. bd-2vh3.6 (archive rotation) deferred post-1.0."} -{"id":"bd-2vh3.1","title":"Tier 1: Ephemeral repo routing","description":"Add routing.ephemeral config option to route ephemeral=true issues to separate location.\n\n## Changes Required\n\n1. Add `routing.ephemeral` config option (default: empty = disabled)\n2. Update routing logic in `determineRepo()` to check ephemeral flag\n3. Update `bd create` to respect ephemeral routing\n4. Update import/export for multi-location support\n5. 
Ephemeral repo can be:\n - Separate git repo (~/.beads-ephemeral)\n - Non-git directory (just filesystem)\n - Same repo, different branch (future)\n\n## Config\n\n```bash\nbd config set routing.ephemeral \"~/.beads-ephemeral\"\n```\n\n## Acceptance Criteria\n\n- `bd create \"test\" --ephemeral` creates in ephemeral repo when configured\n- `bd list` shows issues from both repos\n- Ephemeral repo never synced to remote","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T12:57:26.648052-08:00","updated_at":"2025-12-21T12:59:01.815357-08:00","deleted_at":"2025-12-21T12:59:01.815357-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} -{"id":"bd-2vh3.2","title":"Tier 1: Ephemeral repo routing","description":"Simplified: Make mol spawn set ephemeral=true on spawned issues.\n\n## The Fix\n\nModify cloneSubgraph() in template.go to set Ephemeral: true:\n\n```go\n// template.go:474\nnewIssue := \u0026types.Issue{\n Title: substituteVariables(oldIssue.Title, vars),\n // ... existing fields ...\n Ephemeral: true, // ADD THIS LINE\n}\n```\n\n## Optional: Add --persistent flag\n\nAdd flag to bd mol spawn for when you want spawned issues to persist:\n\n```bash\nbd mol spawn mol-code-review --var pr=123 # ephemeral (default)\nbd mol spawn mol-code-review --var pr=123 --persistent # not ephemeral\n```\n\n## Why This Is Simpler Than Original Design\n\nOriginal design proposed separate ephemeral repo routing. 
After code review:\n\n- Ephemeral field already exists in schema\n- bd cleanup --ephemeral already works\n- No new config needed\n- No multi-repo complexity\n\n## Acceptance Criteria\n\n- bd mol spawn creates issues with ephemeral=true\n- bd cleanup --ephemeral -f deletes them after closing\n- --persistent flag opts out of ephemeral\n- Existing molecules continue to work","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T12:57:36.661604-08:00","updated_at":"2025-12-21T13:43:22.990244-08:00","closed_at":"2025-12-21T13:43:22.990244-08:00","close_reason":"Implemented: mol spawn now creates ephemeral issues by default. Use --persistent to opt out. Changes in template.go, mol.go, mol_spawn.go, mol_bond.go, mol_run.go.","dependencies":[{"issue_id":"bd-2vh3.2","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:57:36.662118-08:00","created_by":"stevey"}]} -{"id":"bd-2vh3.3","title":"Tier 2: Basic bd mol squash command","description":"Add bd mol squash command for basic molecule execution compression.\n\n## Command\n\nbd mol squash \u003cmolecule-id\u003e [flags]\n --dry-run Preview what would be squashed\n --keep-children Don't delete ephemeral children after squash\n --json JSON output\n\n## Implementation\n\n1. Find all ephemeral children of molecule (parent-child deps)\n2. Concatenate child descriptions/notes into digest\n3. Create digest issue in main repo with:\n - Title: 'Molecule Execution Summary: \u003coriginal-title\u003e'\n - digest_of: [list of squashed child IDs]\n - ephemeral: false (digest is permanent)\n4. Delete ephemeral children (unless --keep-children)\n5. 
Link digest to parent work item\n\n## Schema Changes\n\nAdd to Issue struct:\n- SquashedAt *time.Time\n- SquashDigest string (ID of digest)\n- DigestOf []string (IDs of squashed children)\n\n## Acceptance Criteria\n\n- bd mol squash \u003cid\u003e creates digest, removes children\n- --dry-run shows preview\n- Digest has proper metadata linking","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T12:57:48.338114-08:00","updated_at":"2025-12-21T13:53:58.974433-08:00","closed_at":"2025-12-21T13:53:58.974433-08:00","close_reason":"Implemented bd mol squash command with tests","dependencies":[{"issue_id":"bd-2vh3.3","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:57:48.338636-08:00","created_by":"stevey"},{"issue_id":"bd-2vh3.3","depends_on_id":"bd-2vh3.2","type":"blocks","created_at":"2025-12-21T12:58:22.601321-08:00","created_by":"stevey"}]} -{"id":"bd-2vh3.4","title":"Tier 3: AI-powered squash summarization","description":"## Design: Agent-Provided Summarization (Inversion of Control)\n\nbd is a tool FOR agents, not an agent itself. 
The calling agent provides\nthe summary; bd just stores it.\n\n### API\n\n```bash\n# Agent generates summary, passes to bd\nbd mol squash bd-xxx --summary \"Agent-generated summary here\"\n\n# Without --summary, falls back to basic concatenation\nbd mol squash bd-xxx\n```\n\n### Gas Town Integration Pattern\n\n```go\n// In polecat completion handler or witness\nraw := exec.Command(\"bd\", \"mol\", \"show\", molID, \"--json\").Output()\nsummary := callHaiku(buildSummaryPrompt(raw)) // agent's job\nexec.Command(\"bd\", \"mol\", \"squash\", molID, \"--summary\", summary).Run()\n```\n\n### Why This Design\n\n| Concern | bd's job | Agent's job |\n|---------|----------|-------------|\n| Store data | ✅ | |\n| Query data | ✅ | |\n| Generate summaries | | ✅ |\n| Call LLMs | | ✅ |\n| Manage API keys | | ✅ |\n\n### Implementation Status\n\n- [x] --summary flag added to bd mol squash\n- [x] Tests for agent-provided summary\n- [ ] Gas Town integration (separate task)\n\n### Acceptance Criteria\n\n- ✅ bd mol squash --summary uses provided text\n- ✅ Without --summary, falls back to concatenation\n- ✅ No LLM calls in bd itself","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T12:58:00.732749-08:00","updated_at":"2025-12-21T14:29:16.288713-08:00","closed_at":"2025-12-21T14:29:16.288713-08:00","close_reason":"Implemented --summary flag for agent-provided summaries. bd stays a pure tool; agents provide the intelligence.","dependencies":[{"issue_id":"bd-2vh3.4","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:58:00.733264-08:00","created_by":"stevey"},{"issue_id":"bd-2vh3.4","depends_on_id":"bd-2vh3.3","type":"blocks","created_at":"2025-12-21T12:58:22.698686-08:00","created_by":"stevey"}]} -{"id":"bd-2vh3.5","title":"Tier 4: Auto-squash on molecule completion","description":"Automatically squash molecules when they reach terminal state.\n\n## Integration Points\n\n1. Hook into molecule completion handler\n2. 
Detect when all steps are done/failed\n3. Trigger squash automatically\n\n## Config\n\nbd config set mol.auto_squash true # Default: false\nbd config set mol.auto_squash_on_success true # Only on success\nbd config set mol.auto_squash_delay '5m' # Wait before squash\n\n## Implementation Options\n\n### Option A: Post-Completion Hook\nIn mol completion handler:\n- Check if auto_squash enabled\n- Call Squash() after terminal state\n\n### Option B: Git Hook\nIn .beads/hooks/post-commit:\n- bd mol squash --auto\n\n### Option C: Daemon Background Task\n- Daemon periodically checks for squashable molecules\n- Squashes in background\n\n## Acceptance Criteria\n\n- Completed molecules auto-squash without manual intervention\n- Configurable delay before squash\n- Option to squash only on success vs always\n- Works with both daemon and no-daemon modes","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T12:58:13.345577-08:00","updated_at":"2025-12-21T17:40:39.794527-08:00","closed_at":"2025-12-21T17:40:39.794527-08:00","close_reason":"Won't fix - cleanup handled via molecule workflows (e.g. 
daily cleanup plugin)","dependencies":[{"issue_id":"bd-2vh3.5","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:58:13.346152-08:00","created_by":"stevey"},{"issue_id":"bd-2vh3.5","depends_on_id":"bd-2vh3.4","type":"blocks","created_at":"2025-12-21T12:58:22.797141-08:00","created_by":"stevey"}]} -{"id":"bd-2vh3.6","title":"Tier 5 (Future): JSONL archive rotation","description":"Periodic rotation of issues.jsonl for long-running repos.\n\n## Design\n\n.beads/\n├── issues.jsonl # Current (hot)\n├── archive/\n│ ├── issues-2025-12.jsonl.gz # Archived (cold)\n│ └── ...\n└── index.jsonl # Merged index for queries\n\n## Commands\n\nbd archive rotate [flags]\n --older-than N Archive issues closed \u003e N days\n --compress Gzip archives\n --dry-run Preview\n\nbd archive list # Show archived periods\nbd archive restore \u003cperiod\u003e # Restore from archive\n\n## Config\n\nbd config set archive.enabled true\nbd config set archive.rotate_days 90\nbd config set archive.compress true\nbd config set archive.path '.beads/archive'\n\n## Considerations\n\n- Archives can be gitignored (local only) or committed (shared)\n- Query layer must check index, hydrate from archive\n- Cold storage tiering (S3/GCS) for enterprise\n- Merkle proofs preserved for audit\n\n## Priority\n\nThis is post-1.0 work. 
Current focus is on squash (removes ephemeral).\nArchive helps with long-term history but is less critical.","status":"deferred","priority":4,"issue_type":"feature","created_at":"2025-12-21T12:58:38.210008-08:00","updated_at":"2025-12-23T12:27:02.371921-08:00","dependencies":[{"issue_id":"bd-2vh3.6","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:58:38.210543-08:00","created_by":"stevey"}]} -{"id":"bd-2wh","title":"Test pinned for stats","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-18T21:47:09.334108-08:00","updated_at":"2025-12-18T21:47:25.17917-08:00","deleted_at":"2025-12-18T21:47:25.17917-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-313v","title":"rpc: Rich mutation events not emitted","description":"The activity command (activity.go) references rich mutation event types (MutationBonded, MutationSquashed, MutationBurned, MutationStatus) that include metadata like OldStatus, NewStatus, ParentID, and StepCount.\n\nHowever, the emitMutation() function in server_core.go:141 only accepts (eventType, issueID) and only populates Type, IssueID, and Timestamp. The additional metadata fields are never set.\n\nNeed to either:\n1. Add an emitRichMutation() function that accepts the additional metadata\n2. Update call sites (close, bond, squash, burn operations) to emit rich events\n\nWithout this fix, the activity feed will never show:\n- Status transitions (in_progress -\u003e closed)\n- Bonded events with step counts\n- Parent molecule relationships\n\nDiscovered during code review of bd-xo1o implementation.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-23T04:06:17.39523-08:00","updated_at":"2025-12-23T04:13:19.205249-08:00","closed_at":"2025-12-23T04:13:19.205249-08:00","close_reason":"Added emitRichMutation() function and updated handleClose/handleUpdate to emit MutationStatus events with old/new status metadata. 
Added comprehensive tests."} -{"id":"bd-379","title":"Implement `bd setup cursor` for Cursor IDE integration","description":"Create a `bd setup cursor` command that integrates Beads workflow into Cursor IDE via .cursorrules file. Unlike Claude Code (which has hooks), Cursor uses a static rules file to provide context to its AI.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-11-11T23:32:22.170083-08:00","updated_at":"2025-11-11T23:32:22.170083-08:00"} -{"id":"bd-3852","title":"Add orphan detection migration","description":"Create migration to detect orphaned children in existing databases. Query: SELECT id FROM issues WHERE id LIKE '%.%' AND substr(id, 1, instr(id || '.', '.') - 1) NOT IN (SELECT id FROM issues). Log results, let user decide action (delete orphans or convert to top-level).","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-04T12:32:30.727044-08:00","updated_at":"2025-12-21T21:00:05.041582-08:00","closed_at":"2025-12-21T21:00:05.041582-08:00","close_reason":"Already implemented in migration 016_orphan_detection.go - detects orphans and logs for user action"} -{"id":"bd-396j","title":"GetBlockedIssues shows external deps as blocking even when satisfied","description":"GetBlockedIssues (ready.go:385-493) shows external:* refs in the blocked_by list but doesn't check if they're actually satisfied using CheckExternalDep.\n\nThis can be confusing - an issue shows as blocked by external:project:capability even if that capability has been shipped (closed issue with provides: label exists).\n\nOptions:\n1. Call CheckExternalDep for each external ref and filter satisfied ones from blocked_by\n2. Add a note in output indicating external deps need lazy resolution\n3. 
Document this is expected behavior (bd blocked shows all deps, bd ready shows resolved state)\n\nRelated: GetReadyWork correctly filters by external deps, but GetBlockedIssues doesn't.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T23:45:05.286304-08:00","updated_at":"2025-12-22T21:48:38.086451-08:00","closed_at":"2025-12-22T21:48:38.086451-08:00","close_reason":"Implemented filterBlockedByExternalDeps to check external deps satisfaction and filter them from BlockedBy lists","dependencies":[{"issue_id":"bd-396j","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:05.286971-08:00","created_by":"daemon"}]} -{"id":"bd-3bsz","title":"gt mail send: support reading message body from stdin","description":"Currently gt mail send -m requires the message as a command-line argument, which causes shell escaping issues with backticks, quotes, and special characters.\n\nAdd support for reading message body from stdin:\n- gt mail send addr -s 'Subject' --stdin # Read body from stdin\n- echo 'body' | gt mail send addr -s 'Subject' -m - # Convention: -m - means stdin\n\nThis would allow:\ncat \u003c\u003c'EOF' | gt mail send addr -s 'Subject' --stdin\nMessage with `backticks` and 'quotes' safely\nEOF\n\nWithout this, agents struggle to send handoff messages containing code snippets.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-23T03:21:39.496208-08:00","updated_at":"2025-12-23T12:19:44.443554-08:00","closed_at":"2025-12-23T12:19:44.443554-08:00","close_reason":"Moved to gastown: gt-rw2z"} -{"id":"bd-3ggb","title":"Rebuild local binary","description":"Build and verify: go build -o bd ./cmd/bd \u0026\u0026 ./bd 
version","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:03.101428-08:00","updated_at":"2025-12-18T22:46:40.955673-08:00","closed_at":"2025-12-18T22:46:40.955673-08:00","dependencies":[{"issue_id":"bd-3ggb","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.748289-08:00","created_by":"daemon"},{"issue_id":"bd-3ggb","depends_on_id":"bd-4y4g","type":"blocks","created_at":"2025-12-18T22:43:20.950376-08:00","created_by":"daemon"}]} -{"id":"bd-3jcw","title":"activity.go: Missing test coverage","description":"The new activity.go command (from bd-xo1o.3) has no test coverage. At minimum, tests should cover:\n- parseDurationString() for various formats (5m, 1h, 2d, invalid)\n- filterEvents() for --mol and --type filtering\n- formatEvent() and getEventDisplay() for all mutation types\n\nDiscovered during code review of bd-xo1o implementation.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T04:06:15.563579-08:00","updated_at":"2025-12-23T04:14:56.150151-08:00","closed_at":"2025-12-23T04:14:56.150151-08:00","close_reason":"Added activity_test.go with tests for parseDurationString, filterEvents, getEventDisplay, and formatEvent covering all mutation types"} -{"id":"bd-3sz0","title":"Auto-repair stale merge driver configs with invalid placeholders","description":"Old bd versions (\u003c0.24.0) installed merge driver with invalid placeholders %L %R instead of %A %B. Add detection to bd doctor --fix: check if git config merge.beads.driver contains %L or %R, auto-repair to 'bd merge %A %O %A %B'. 
One-time migration for users who initialized with old versions.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-11-21T23:16:10.762808-08:00","updated_at":"2025-11-21T23:16:28.892655-08:00","dependencies":[{"issue_id":"bd-3sz0","depends_on_id":"bd-tbz3","type":"parent-child","created_at":"2025-11-21T23:16:10.763612-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-3uje","title":"Test issue for pin --for","description":"Testing the pin --for flag","status":"tombstone","priority":3,"issue_type":"task","assignee":"test-agent","created_at":"2025-12-22T02:53:43.075522-08:00","updated_at":"2025-12-22T02:54:07.973855-08:00","deleted_at":"2025-12-22T02:54:07.973855-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-3x9o","title":"Merge: bd-by0d","description":"branch: polecat/furiosa\ntarget: main\nsource_issue: bd-by0d\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:21:26.817906-08:00","updated_at":"2025-12-20T23:17:26.998785-08:00","closed_at":"2025-12-20T23:17:26.998785-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-3zzh","title":"Merge: bd-tvu3","description":"branch: polecat/Beader\ntarget: main\nsource_issue: bd-tvu3\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:36:55.016496-08:00","updated_at":"2025-12-23T19:12:08.347363-08:00","closed_at":"2025-12-23T19:12:08.347363-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-401h","title":"Work on beads-7jl: Fix Windows installer file locking iss...","description":"Work on beads-7jl: Fix Windows installer file locking issue (GH#652). Close file handle before extraction in postinstall.js. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","assignee":"beads/rictus","created_at":"2025-12-19T22:55:57.873767-08:00","updated_at":"2025-12-19T23:20:05.747664-08:00","closed_at":"2025-12-19T23:20:05.747664-08:00","close_reason":"Fixed file handle closure race condition in postinstall.js"} -{"id":"bd-411u","title":"Document BEADS_DIR pattern for multi-agent workspaces (Gas Town)","description":"Gas Town and similar multi-agent systems need to configure separate beads databases per workspace/rig, distinct from any project-level beads.\n\n## Use Case\n\nIn Gas Town:\n- Each 'rig' (managed project) has multiple agents (polecats, refinery, witness)\n- All agents in a rig should share a single beads database at the rig level\n- This should be separate from any .beads/ the project itself uses\n- The BEADS_DIR env var enables this\n\n## Documentation Needed\n\n1. Add a section to docs explaining BEADS_DIR for multi-agent setups\n2. Example: setting BEADS_DIR in agent startup scripts/hooks\n3. Clarify interaction with project-level .beads/ (BEADS_DIR takes precedence)\n\n## Current Support\n\nAlready implemented in internal/beads/beads.go:FindDatabasePath():\n- BEADS_DIR env var is checked first (preferred)\n- BEADS_DB env var still supported (deprecated)\n- Falls back to .beads/ search in tree\n\nJust needs documentation for the multi-agent workspace pattern.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-15T22:08:22.158027-08:00","updated_at":"2025-12-15T22:08:22.158027-08:00"} -{"id":"bd-47tn","title":"Add bd daemon --stop-all command to kill all daemon processes","description":"Currently there's no easy way to stop all running bd daemon processes. Users must resort to pkill -f 'bd daemon' or similar shell commands.\n\nAdd a --stop-all flag to bd daemon that:\n1. Finds all running bd daemon processes (not just the current repo's daemon)\n2. Gracefully stops them all\n3. 
Reports how many were stopped\n\nThis is useful when:\n- Multiple daemons are running and causing race conditions\n- User wants a clean slate before running bd sync\n- Debugging daemon-related issues","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-13T06:34:45.080633-08:00","updated_at":"2025-12-16T01:14:49.501989-08:00","closed_at":"2025-12-14T17:33:03.057089-08:00"} -{"id":"bd-49kw","title":"Workaround for FastMCP outputSchema bug in Claude Code","description":"The beads MCP server (v0.23.1) successfully connects to Claude Code, but all tools fail to load with a schema validation error due to a bug in FastMCP 2.13.1.\n\nError: \"Invalid literal value, expected \\\"object\\\"\" in outputSchema.\n\nRoot Cause: FastMCP generates outputSchema with $ref at root level without \"type\": \"object\" for self-referential models (Issue).\n\nWorkaround: Use slash commands (/beads:ready) or wait for FastMCP fix.\n","status":"open","priority":1,"issue_type":"bug","created_at":"2025-11-20T18:55:39.041831-05:00","updated_at":"2025-12-23T21:22:15.889295-08:00"} -{"id":"bd-4bsb","title":"Code review findings: mol squash deletion bypasses tombstones","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T13:57:14.154316-08:00","updated_at":"2025-12-21T18:01:06.811216-08:00","closed_at":"2025-12-21T18:01:06.811216-08:00","close_reason":"Won't fix: Wisps are ephemeral by design. They're local-only, never synced to other clones, so tombstones are unnecessary overhead. Hard delete is intentional.","dependencies":[{"issue_id":"bd-4bsb","depends_on_id":"bd-2vh3.3","type":"discovered-from","created_at":"2025-12-21T13:57:14.155488-08:00","created_by":"daemon"}]} -{"id":"bd-4ec8","title":"Widespread double JSON encoding bug in daemon mode RPC calls","description":"Multiple CLI commands had the same double JSON encoding bug found in bd-1048. 
All commands that called ResolveID via RPC used string(resp.Data) instead of properly unmarshaling the JSON response. This caused IDs to retain JSON quotes (\"bd-1048\" instead of bd-1048), which then got double-encoded when passed to subsequent RPC calls.\n\nAffected commands:\n- bd show (3 instances)\n- bd dep add/remove/tree (5 instances)\n- bd label add/remove/list (3 instances)\n- bd reopen (1 instance)\n\nRoot cause: resp.Data is json.RawMessage (already JSON-encoded), so string() conversion preserves quotes.\n\nFix: Replace all string(resp.Data) with json.Unmarshal(resp.Data, \u0026id) for proper deserialization.\n\nAll commands now tested and working correctly with daemon mode.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-11-02T22:33:01.632691-08:00","updated_at":"2025-12-17T23:13:40.533631-08:00","closed_at":"2025-12-17T16:26:05.851197-08:00"} -{"id":"bd-4hn","title":"wish: list \u0026 ready show issues as hierarchy tree","description":"`bd ready` and `bd list` just show a flat list, and it's up to the reader to parse which ones are dependent or sub-issues of others. It would be much easier to understand if they were shown in a tree format","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-08T06:38:24.016316945-07:00","updated_at":"2025-12-08T06:39:04.065882225-07:00"} -{"id":"bd-4lm3","title":"Correction: Pinned field already in v0.31.0","description":"Quick correction - the Pinned field is already in the current bd v0.31.0:\n\n```go\n// In beads internal/types/types.go\nPinned bool `json:\"pinned,omitempty\"`\n```\n\nSo you just need to:\n1. Add `Pinned bool `json:\"pinned,omitempty\"`` to BeadsMessage in types.go\n2. 
Sort pinned messages first in listBeads() after fetching\n\nNo migration needed - the field is already there.\n\n-- Mayor","status":"closed","priority":2,"issue_type":"message","assignee":"gastown/crew/max","created_at":"2025-12-20T17:52:27.321458-08:00","updated_at":"2025-12-21T17:52:18.617995-08:00","closed_at":"2025-12-21T17:52:18.617995-08:00","close_reason":"Stale correction message","labels":["from:beads-crew-dave","thread:thread-4dd70157dbc1"]} -{"id":"bd-4nqq","title":"Remove dead test code in info_test.go","description":"Code health review found cmd/bd/info_test.go has two tests permanently skipped:\n\n- TestInfoCommand\n- TestInfoCommandNoDaemon\n\nBoth skip with: 'Manual test - bd info command is working, see manual testing'\n\nThese are essentially dead code. Either automate them or remove them entirely.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-16T18:17:27.554019-08:00","updated_at":"2025-12-22T21:01:24.524963-08:00","closed_at":"2025-12-22T21:01:24.524963-08:00","close_reason":"Removed 2 permanently skipped tests (TestInfoCommand, TestInfoWithNoDaemon). Kept 3 useful tests for versionChanges."} -{"id":"bd-4opy","title":"Refactor long SQLite test files","description":"The SQLite test files have grown unwieldy. 
Review and refactor.\n\n## Goals\n- Break up large test files into focused modules\n- Improve test organization by feature area\n- Reduce test duplication\n- Make tests easier to maintain and extend\n\n## Areas to Review\n- main_test.go (likely the largest)\n- Any test files over 500 lines\n- Shared test fixtures and helpers\n- Test coverage gaps\n\n## Approach\n- Group tests by feature (CRUD, sync, queries, transactions)\n- Extract common fixtures to test helpers\n- Consider table-driven tests where appropriate\n- Ensure each test file has clear focus\n\n## Reference\nSee docs/dev-notes/ for any existing test audit notes","status":"closed","priority":2,"issue_type":"task","assignee":"beads/angharad","created_at":"2025-12-21T23:41:47.025285-08:00","updated_at":"2025-12-23T01:33:25.733299-08:00","closed_at":"2025-12-23T01:33:25.733299-08:00","close_reason":"Merged to main"} -{"id":"bd-4or","title":"Add tests for daemon functionality","description":"Critical daemon functions have 0% test coverage including daemon lifecycle, health checks, and RPC server functionality. These are essential for system reliability and need comprehensive test coverage.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:26.916050465-07:00","updated_at":"2025-12-19T09:54:57.017114822-07:00","closed_at":"2025-12-18T12:29:06.134014366-07:00","dependencies":[{"issue_id":"bd-4or","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:26.919347253-07:00","created_by":"matt"}]} -{"id":"bd-4p3k","title":"Release v0.34.0","description":"Minor version release for beads v0.34.0. 
This bead serves as my persistent work assignment; the actual release steps are tracked in an attached wisp.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-22T03:03:20.73092-08:00","updated_at":"2025-12-22T03:05:03.168622-08:00","deleted_at":"2025-12-22T03:05:03.168622-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-4q8","title":"bd cleanup --hard should skip tombstone creation for true permanent deletion","description":"## Problem\n\nWhen using bd cleanup --hard --older-than N --force, the command:\n1. Deletes closed issues older than N days (converting them to tombstones with NOW timestamp)\n2. Then tries to prune tombstones older than N days (finds none because they were just created)\n\nThis leaves the database bloated with fresh tombstones that will not be pruned.\n\n## Expected Behavior\n\nIn --hard mode, the deletion should be permanent without creating tombstones, since the user explicitly requested bypassing sync safety.\n\n## Workaround\n\nManually delete from database: sqlite3 .beads/beads.db 'DELETE FROM issues WHERE status=tombstone'\n\n## Fix Options\n\n1. In --hard mode, use a different delete path that does not create tombstones\n2. After deleting, immediately prune the just-created tombstones regardless of age\n3. 
Pass a skip_tombstone flag to the delete operation\n\nOption 1 is cleanest - --hard should mean permanent delete without tombstone.","status":"tombstone","priority":1,"issue_type":"bug","created_at":"2025-12-16T01:33:36.580657-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-4qfb","title":"Improve bd doctor output formatting for better readability","description":"Improve bd doctor output formatting for better readability.\n\n## Current State\nDoctor output is a wall of text with:\n- All checks shown (even passing ones)\n- No visual hierarchy\n- Hard to spot failures in long output\n\n## Target Output\n\n```\n$ bd doctor\n\nbd doctor v0.35.0\n\nSummary: 24 checks passed, 1 warning, 0 errors\n\n─────────────────────────────────────────────────\n⚠ Warnings (1)\n─────────────────────────────────────────────────\n\n[hooks] Git hooks outdated\n Current version: 0.34.0\n Latest version: 0.35.0\n Fix: bd hooks install\n\n─────────────────────────────────────────────────\n✓ Passed (24) [use --verbose to show details]\n─────────────────────────────────────────────────\n```\n\nWith --verbose:\n```\n$ bd doctor --verbose\n\nbd doctor v0.35.0\n\nSummary: 24 checks passed, 1 warning, 0 errors\n\n─────────────────────────────────────────────────\n⚠ Warnings (1)\n─────────────────────────────────────────────────\n\n[hooks] Git hooks outdated\n ...\n\n─────────────────────────────────────────────────\n✓ Passed (24)\n─────────────────────────────────────────────────\n\n Database\n ✓ Database exists\n ✓ Database readable\n ✓ Schema up to date\n \n Git Hooks\n ✓ Pre-commit hook installed\n ✓ Post-merge hook installed\n ⚠ Hooks version mismatch (see above)\n \n Sync\n ✓ Sync branch configured\n ✓ Remote accessible\n ...\n```\n\n## Implementation\n\n### 1. Add check categories (cmd/bd/doctor/categories.go)\n\n```go\ntype Category string\n\nconst (\n CatDatabase Category = \"Database\"\n CatHooks Category = \"Git Hooks\"\n CatSync Category = \"Sync\"\n CatDaemon Category = \"Daemon\"\n CatConfig Category = \"Configuration\"\n CatIntegrity Category = \"Data Integrity\"\n)\n\n// Assign categories to checks\nvar checkCategories = map[string]Category{\n \"database-exists\": CatDatabase,\n \"database-readable\": CatDatabase,\n \"schema-version\": CatDatabase,\n \"pre-commit-hook\": CatHooks,\n \"post-merge-hook\": CatHooks,\n \"hooks-version\": CatHooks,\n \"sync-branch\": CatSync,\n \"remote-access\": CatSync,\n // ... etc\n}\n```\n\n### 2. Add --verbose flag\n\n```go\n// In cmd/bd/doctor.go init()\ndoctorCmd.Flags().BoolP(\"verbose\", \"v\", false, \"Show all checks including passed\")\n```\n\n### 3. Create formatter (cmd/bd/doctor/format.go)\n\n```go\ntype Formatter struct {\n verbose bool\n noColor bool\n}\n\nfunc (f *Formatter) Format(results []CheckResult) string {\n var buf strings.Builder\n \n // Count by status\n passed, warnings, errors := countByStatus(results)\n \n // Header\n buf.WriteString(fmt.Sprintf(\"bd doctor v%s\\n\\n\", version.Version))\n buf.WriteString(fmt.Sprintf(\"Summary: %d passed, %d warnings, %d errors\\n\\n\", \n passed, warnings, errors))\n \n // Errors section (always show)\n if errors \u003e 0 {\n f.writeSection(\u0026buf, \"✗ Errors\", filterByStatus(results, StatusError))\n }\n \n // Warnings section (always show)\n if warnings \u003e 0 {\n f.writeSection(\u0026buf, \"⚠ Warnings\", filterByStatus(results, StatusWarning))\n }\n \n // Passed section (only with --verbose)\n if f.verbose \u0026\u0026 passed \u003e 0 {\n f.writePassedSection(\u0026buf, filterByStatus(results, StatusPassed))\n } else if passed \u003e 0 {\n buf.WriteString(fmt.Sprintf(\"✓ Passed (%d) [use --verbose to show details]\\n\", passed))\n }\n \n return buf.String()\n}\n\nfunc (f *Formatter) writeSection(buf *strings.Builder, title string, results []CheckResult) {\n buf.WriteString(\"─────────────────────────────────────────────────\\n\")\n buf.WriteString(title + \"\\n\")\n buf.WriteString(\"─────────────────────────────────────────────────\\n\\n\")\n \n for _, r := range results {\n buf.WriteString(fmt.Sprintf(\"[%s] %s\\n\", r.CheckName, r.Message))\n if r.Details != \"\" {\n buf.WriteString(fmt.Sprintf(\" %s\\n\", r.Details))\n }\n if r.Fix != \"\" {\n buf.WriteString(fmt.Sprintf(\" Fix: %s\\n\", r.Fix))\n }\n buf.WriteString(\"\\n\")\n }\n}\n\nfunc (f *Formatter) writePassedSection(buf *strings.Builder, results []CheckResult) {\n // Group by category\n byCategory := groupByCategory(results)\n \n buf.WriteString(\"─────────────────────────────────────────────────\\n\")\n buf.WriteString(fmt.Sprintf(\"✓ Passed (%d)\\n\", len(results)))\n buf.WriteString(\"─────────────────────────────────────────────────\\n\\n\")\n \n for _, cat := range categoryOrder {\n if checks, ok := byCategory[cat]; ok {\n buf.WriteString(fmt.Sprintf(\" %s\\n\", cat))\n for _, r := range checks {\n buf.WriteString(fmt.Sprintf(\" ✓ %s\\n\", r.Message))\n }\n buf.WriteString(\"\\n\")\n }\n }\n}\n```\n\n### 4. Update run function\n\n```go\nfunc runDoctor(cmd *cobra.Command, args []string) {\n verbose, _ := cmd.Flags().GetBool(\"verbose\")\n noColor, _ := cmd.Flags().GetBool(\"no-color\")\n \n results := runAllChecks()\n \n formatter := \u0026Formatter{verbose: verbose, noColor: noColor}\n fmt.Print(formatter.Format(results))\n \n // Exit code based on results\n if hasErrors(results) {\n os.Exit(1)\n }\n}\n```\n\n## Files to Modify\n\n1. **cmd/bd/doctor.go** - Add --verbose flag, update run function\n2. **cmd/bd/doctor/format.go** - New file for formatting logic\n3. **cmd/bd/doctor/categories.go** - New file for check categorization\n4. 
**cmd/bd/doctor/common.go** - Add Status field to CheckResult if missing\n\n## Testing\n\n```bash\n# Default output (concise)\nbd doctor\n\n# Verbose output\nbd doctor --verbose\n\n# JSON output (should still work)\nbd doctor --json\n```\n\n## Success Criteria\n- Summary line at top with counts\n- Only failures/warnings shown by default\n- --verbose shows grouped passed checks\n- Visual separators between sections\n- Exit code 1 if errors, 0 otherwise","status":"closed","priority":2,"issue_type":"task","assignee":"beads/Polish","created_at":"2025-12-13T09:29:27.557578+11:00","updated_at":"2025-12-23T13:37:18.48781-08:00","closed_at":"2025-12-23T13:37:18.48781-08:00","close_reason":"Implemented: summary at top, --verbose flag, collapsed passed checks","dependencies":[{"issue_id":"bd-4qfb","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.972517-08:00","created_by":"daemon"}]} -{"id":"bd-4ri","title":"Fix TestFallbackToDirectModeEnablesFlush deadlock causing 10min test timeout","description":"## Problem\n\nTestFallbackToDirectModeEnablesFlush in direct_mode_test.go deadlocks for 9m59s before timing out, causing the entire test suite to take 10+ minutes instead of \u003c10 seconds.\n\n## Root Cause\n\nDatabase lock contention between test cleanup and flushToJSONL():\n- Test cleanup (line 36) tries to close DB via defer\n- flushToJSONL() (line 132) is still accessing DB\n- Results in deadlock: database/sql.(*DB).Close() waits for mutex while GetJSONLFileHash() holds it\n\n## Stack Trace Evidence\n\n```\ngoroutine 512 [sync.Mutex.Lock, 9 minutes]:\ndatabase/sql.(*DB).Close(0x14000643790)\n .../database/sql/sql.go:927 +0x84\ngithub.com/steveyegge/beads/cmd/bd.TestFallbackToDirectModeEnablesFlush.func1()\n .../direct_mode_test.go:36 +0xf4\n\nWhile goroutine running flushToJSONL() holds DB connection via GetJSONLFileHash()\n```\n\n## Impact\n\n- Test suite: 10+ minutes → should be \u003c10 seconds\n- ALL other tests pass in ~4 seconds\n- 
This ONE test accounts for 99.9% of test runtime\n\n## Related\n\nThis is the EXACT same issue documented in MAIN_TEST_REFACTOR_NOTES.md for why main_test.go refactoring was deferred - global state manipulation + DB cleanup = deadlock.\n\n## Fix Approaches\n\n1. **Add proper cleanup sequencing** - stop flush goroutines BEFORE closing DB\n2. **Use test-specific DB lifecycle** - ensure flush completes before cleanup\n3. **Mock the flush mechanism** - avoid real DB for testing this code path \n4. **Add explicit timeout handling** - fail fast with clear error instead of hanging\n\n## Files\n\n- cmd/bd/direct_mode_test.go:36-132\n- cmd/bd/autoflush.go:353 (validateJSONLIntegrity)\n- cmd/bd/autoflush.go:508 (flushToJSONLWithState)\n\n## Acceptance\n\n- Test passes without timeout\n- Test suite completes in \u003c10 seconds\n- No deadlock between cleanup and flush operations","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-21T20:09:00.794372-05:00","updated_at":"2025-12-17T23:13:40.533279-08:00","closed_at":"2025-12-17T17:25:07.626617-08:00"} -{"id":"bd-4sfl","title":"Merge: bd-14ie","description":"branch: polecat/toast\ntarget: main\nsource_issue: bd-14ie\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:23:37.360782-08:00","updated_at":"2025-12-20T23:17:26.997276-08:00","closed_at":"2025-12-20T23:17:26.997276-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-4uoc","title":"Code Review Followup Summary: PR #481 + PR #551","description":"## Merged PRs Summary\n\n### PR #551: Persist close_reason to issues table\n- βœ… Merged successfully\n- βœ… Bug fix: close_reason now persisted in database column (not just events table)\n- βœ… Comprehensive test coverage added\n- βœ… Handles reopen case (clearing close_reason)\n\n**Followup Issues Filed:**\n- bd-lxzx: Document close_reason in JSONL export format\n- bd-077e: Update CLI documentation for close_reason field\n\n---\n\n### PR #481: 
Context Engineering Optimizations (80-90% context reduction)\n- βœ… Merged successfully \n- βœ… Lazy tool discovery: discover_tools() + get_tool_info()\n- βœ… Minimal issue models: IssueMinimal (~80% smaller than full Issue)\n- βœ… Result compaction: Auto-compacts results \u003e20 items\n- βœ… All 28 tests passing\n- ⚠️ Breaking change: ready() and list() return type changed\n\n**Followup Issues Filed:**\n- bd-b318: Add integration tests for CompactedResult\n- bd-4u2b: Make compaction settings configurable (THRESHOLD, PREVIEW_COUNT)\n- bd-2kf8: Document CompactedResult response format in CONTEXT_ENGINEERING.md\n- bd-pdr2: Document backwards compatibility considerations\n\n---\n\n## Overall Assessment\n\nBoth PRs are production-ready with solid implementations. All critical functionality works and tests pass. Followup issues focus on:\n1. Documentation improvements (5 issues)\n2. Integration test coverage (1 issue)\n3. Configuration flexibility (1 issue)\n4. Backwards compatibility guidance (1 issue)\n\nNo critical bugs or design issues found.\n\n## Review Completed By\nCode review process completed. Issues auto-created for tracking improvements.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:25:59.214886-08:00","updated_at":"2025-12-14T14:25:59.214886-08:00","dependencies":[{"issue_id":"bd-4uoc","depends_on_id":"bd-otf4","type":"discovered-from","created_at":"2025-12-14T14:25:59.216884-08:00","created_by":"stevey","metadata":"{}"},{"issue_id":"bd-4uoc","depends_on_id":"bd-z86n","type":"discovered-from","created_at":"2025-12-14T14:25:59.217296-08:00","created_by":"stevey","metadata":"{}"}]} -{"id":"bd-4y4g","title":"Bump version in all files","description":"Run ./scripts/bump-version.sh {{version}} to update 10 version files. 
Then run with --commit after info.go is updated.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:01.859728-08:00","updated_at":"2025-12-18T22:46:24.537336-08:00","closed_at":"2025-12-18T22:46:24.537336-08:00","dependencies":[{"issue_id":"bd-4y4g","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.623724-08:00","created_by":"daemon"},{"issue_id":"bd-4y4g","depends_on_id":"bd-8v2","type":"blocks","created_at":"2025-12-18T22:43:20.823329-08:00","created_by":"daemon"}]} -{"id":"bd-512v","title":"Verify release artifacts","description":"Check GitHub releases page - binaries for darwin/linux/windows should be available","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.067124-08:00","updated_at":"2025-12-21T13:53:49.35495-08:00","deleted_at":"2025-12-21T13:53:49.35495-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task","wisp":true} -{"id":"bd-56x","title":"Review PR #514: fix plugin install docs","description":"Review and merge PR #514 from aspiers. This PR fixes incorrect docs for installing Claude Code plugin from source in docs/PLUGIN.md. Clarifies shell vs Claude Code commands and fixes the . vs ./beads argument issue. URL: https://github.com/anthropics/beads/pull/514","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:15:16.865354+11:00","updated_at":"2025-12-13T07:07:19.729213-08:00","closed_at":"2025-12-13T07:07:19.729213-08:00"} -{"id":"bd-581b80b3","title":"bd find-duplicates - AI-powered duplicate detection","description":"Find semantically duplicate issues.\n\nApproaches:\n1. Mechanical: Exact title/description matching\n2. Embeddings: Cosine similarity (cheap, scalable)\n3. 
AI: LLM-based semantic comparison (expensive, accurate)\n\nUses embeddings by default for \u003e100 issues.\n\nFiles: cmd/bd/find_duplicates.go (new)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-29T20:49:49.126801-07:00","updated_at":"2025-12-17T22:58:34.563511-08:00","closed_at":"2025-12-17T22:58:34.563511-08:00","close_reason":"Closed"} -{"id":"bd-589x","title":"HANDOFF: Version 0.30.7 release in progress","description":"## Context\nDoing a 0.30.7 patch release with bug fixes.\n\n## What's done\n- Fixed #657: bd graph nil pointer crash (graph.go:102)\n- Fixed #652: Windows npm installer file lock (postinstall.js)\n- Updated CHANGELOG.md and info.go\n- Pushed to main, CI running (run 20390861825)\n- Created version-bump molecule template (bd-6s61) and instantiated for 0.30.7 (bd-8pyn)\n\n## In progress\nMolecule bd-8pyn has 3 remaining tasks:\n - bd-dxo7: Wait for CI to pass\n - bd-7l70: Verify release artifacts \n - bd-5c91: Update local installation\n\n## Check CI\n gh run list --repo steveyegge/beads --limit 1\n gh run view 20390861825 --repo steveyegge/beads\n\n## New feature filed\nbd-n777: Timer beads for scheduled agent callbacks\nDesign for Deacon-managed timers that can interrupt agents via tmux\n\n## Resume commands\n bd --no-daemon show bd-8pyn\n gh run list --repo steveyegge/beads --limit 1","status":"closed","priority":2,"issue_type":"message","assignee":"beads/dave","created_at":"2025-12-19T23:06:14.902334-08:00","updated_at":"2025-12-20T00:49:51.927111-08:00","closed_at":"2025-12-20T00:25:59.596546-08:00"} -{"id":"bd-5b6e","title":"Add tests for helper functions (GetDirtyIssueHash, GetAllDependencyRecords, export hashes)","description":"Several utility functions have 0% coverage:\n- GetDirtyIssueHash (dirty.go)\n- GetAllDependencyRecords (dependencies.go)\n- GetExportHash, SetExportHash, ClearAllExportHashes (hash.go)\n\nThese are lower priority but should have basic 
coverage.","status":"open","priority":4,"issue_type":"task","created_at":"2025-11-01T22:40:58.989976-07:00","updated_at":"2025-11-01T22:40:58.989976-07:00"} -{"id":"bd-5exm","title":"Merge: bd-49kw","description":"branch: polecat/nux\ntarget: main\nsource_issue: bd-49kw\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:43:23.156375-08:00","updated_at":"2025-12-23T21:21:57.693169-08:00","closed_at":"2025-12-23T21:21:57.693169-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-5hrq","title":"bd doctor: detect issues referenced in commits but still open","description":"Add a doctor check that finds 'orphaned' issues - ones referenced in git commit messages (e.g., 'fix bug (bd-xxx)') but still marked as open in beads.\n\n**Detection logic:**\n1. Get all open issue IDs from beads\n2. Parse git log for issue ID references matching pattern \\(prefix-[a-z0-9.]+\\)\n3. Report issues that appear in commits but are still open\n\n**Output:**\n⚠ Warning: N issues referenced in commits but still open\n bd-xxx: 'Issue title' (commit abc123)\n bd-yyy: 'Issue title' (commit def456)\n \n These may be implemented but not closed. Run 'bd show \u003cid\u003e' to check.\n\n**Implementation:**\n- Add check to doctor/checks.go\n- Use git log parsing (already have git utilities)\n- Match against configured issue_prefix","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-21T21:48:08.473165-08:00","updated_at":"2025-12-21T21:55:37.795109-08:00","closed_at":"2025-12-21T21:55:37.795109-08:00","close_reason":"Implemented CheckOrphanedIssues in git.go with 8 test cases. Detects issues referenced in commits (e.g., 'fix bug (bd-xxx)') that are still open. 
Shows warning with issue IDs and commit hashes."} -{"id":"bd-5qim","title":"Optimize GetReadyWork performance - 752ms on 10K database (target: \u003c50ms)","notes":"# Performance Analysis (10K Issue Database)\n\nAnalyzed using CPU profiles from benchmark suite on Apple M2 Pro.\n\n## Operation Performance\n\n| Operation | Time | Allocations | Memory |\n|----------------------------------|---------|-------------|--------|\n| bd ready (GetReadyWork) | ~752ms | 167,466 | 16MB |\n| bd list (SearchIssues no filter) | ~11.6ms | 89,214 | 5.8MB |\n| bd list (SearchIssues filtered) | ~9.2ms | 62,365 | 3.5MB |\n| bd create (CreateIssue) | ~2.6ms | 146 | 8.6KB |\n| bd update (UpdateIssue) | ~0.32ms | 364 | 15KB |\n| bd close (UpdateIssue) | ~0.32ms | 364 | 15KB |\n\n**Target: \u003c50ms for all operations on 10K database**\n\n**Current issue: GetReadyWork is 15x over target (752ms vs 50ms)**\n\n## Root Cause\n\nGetReadyWork (internal/storage/sqlite/ready.go:90-128) uses recursive CTE to propagate blocking:\n- 65x slower than SearchIssues\n- Recalculates entire blocked issue tree on every call\n- Algorithm:\n 1. Find directly blocked issues via 'blocks' dependencies\n 2. Recursively propagate blockage to descendants (max depth: 50)\n 3. Exclude all blocked issues from results\n\n## CPU Profile Analysis\n\n- Database syscalls (pthread_cond_signal, syscall6): ~75%\n- SQLite engine overhead: inherent to recursive CTE\n- Application code (query construction): \u003c1%\n\n**Bottleneck is the recursive CTE query execution, not application code.**\n\n## Optimization Recommendations\n\n### High Impact (Likely to achieve \u003c50ms target)\n\n1. **Cache blocked issue calculation**\n - Add `blocked_issues` table updated on dependency changes\n - Trade write complexity for read speed (ready called \u003e\u003e dependency changes)\n - Eliminates recursive CTE on every read\n\n2. 
**Add/verify database indexes**\n ```sql\n CREATE INDEX IF NOT EXISTS idx_dependencies_blocked \n ON dependencies(issue_id, type, depends_on_id);\n CREATE INDEX IF NOT EXISTS idx_issues_status \n ON issues(status);\n ```\n\n### Medium Impact\n\n3. **Reduce allocations** (167K allocations for GetReadyWork)\n - Profile `scanIssues()` for object pooling opportunities\n - Reuse slice capacity for repeated calls\n\n### Low Impact (Not recommended)\n- Query optimization for CRUD operations (already \u003c3ms)\n- Connection pooling tuning (not showing in profiles)\n\n## Verification\n\nRun benchmarks to validate optimization:\n```bash\nmake bench-quick\ngo tool pprof -http=:8080 internal/storage/sqlite/bench-cpu-*.prof\n```\n\nProfile files automatically generated in `internal/storage/sqlite/`.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-11-14T09:02:46.507526-08:00","updated_at":"2025-12-17T23:13:40.534258-08:00","closed_at":"2025-12-17T16:21:37.918868-08:00"} -{"id":"bd-5rj1","title":"Merge: bd-gqxd","description":"branch: polecat/furiosa\ntarget: main\nsource_issue: bd-gqxd\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T16:40:21.707706-08:00","updated_at":"2025-12-23T19:12:08.349245-08:00","closed_at":"2025-12-23T19:12:08.349245-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-66l4","title":"Runtime bonding: bd mol attach","description":"Attach a molecule to an already-running workflow.\n\nCOMMAND: bd mol attach \u003cepic-id\u003e \u003cproto\u003e [--after \u003cissue-id\u003e]\n\nBEHAVIOR:\n- Resolve running epic and proto\n- Spawn proto as new subtree\n- Wire to specified attachment point (or epic root)\n- Handle in-progress issues: new work doesn't block completed work\n\nUSE CASES:\n- Discovered need for docs while implementing feature\n- Hotfix needs attaching to release workflow\n- Additional testing scope identified 
mid-flight\n\nFLAGS:\n- --after ISSUE: Specific attachment point within epic\n- --type: sequential (default) or parallel\n- --var: Variables for the attached proto\n\nCONSIDERATIONS:\n- What if epic is already closed? Error or reopen?\n- What if attachment point issue is closed? Attach as ready-to-work?","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T00:59:16.920483-08:00","updated_at":"2025-12-21T01:08:43.530597-08:00","closed_at":"2025-12-21T01:08:43.530597-08:00","close_reason":"Merged into bd-o91r: bond command handles all bonding cases polymorphically","dependencies":[{"issue_id":"bd-66l4","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.435542-08:00","created_by":"daemon"},{"issue_id":"bd-66l4","depends_on_id":"bd-o91r","type":"blocks","created_at":"2025-12-21T00:59:51.813782-08:00","created_by":"daemon"}]} -{"id":"bd-66w1","title":"Add external_projects to config schema","description":"Add external_projects mapping to .beads/config.yaml:\n\n```yaml\nexternal_projects:\n beads: ../beads\n gastown: ../gastown\n other: /absolute/path/to/project\n```\n\nUsed by bd ready and other commands to resolve external: references.\n\nPart of cross-project dependency system.\nSee: gastown/docs/cross-project-deps.md","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-21T22:37:39.245017-08:00","updated_at":"2025-12-21T23:03:19.81448-08:00","closed_at":"2025-12-21T23:03:19.81448-08:00","close_reason":"Added external_projects to config schema with GetExternalProjects() and ResolveExternalProjectPath() functions, tests, and documentation"} -{"id":"bd-687g","title":"Code review: mol squash deletion bypasses tombstone system","description":"The deleteEphemeralChildren function in mol_squash.go uses DeleteIssue directly instead of the proper deletion flow. This bypasses tombstone creation, deletion tracking (deletions.jsonl), and dependency cleanup. 
Could cause issues with deletion propagation across clones.\n\nCurrent code uses d.DeleteIssue(ctx, id) but should probably use d.DeleteIssues(ctx, ids, false, true, false) for proper tombstone handling.\n\nAlternative: Document that ephemeral issues intentionally use hard delete since they are transient and should never propagate to other clones anyway.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T13:57:20.223345-08:00","updated_at":"2025-12-21T14:17:38.073899-08:00","closed_at":"2025-12-21T14:17:38.073899-08:00","close_reason":"Implemented ephemeral issue filtering from JSONL export and fixed comments leak in DeleteIssue"} -{"id":"bd-687v","title":"Consider caching external dep resolution results","description":"Each call to GetReadyWork re-checks all external dependencies by:\n1. Querying for external deps in the local database\n2. Opening each external project's database\n3. Querying for closed issues with provides: labels\n\nFor workloads with many external deps or slow external databases, this adds latency on every bd ready call.\n\nPotential optimizations:\n- In-memory TTL cache for external dep status (e.g., 60 second TTL)\n- Store resolved status in a local cache table with timestamp\n- Batch resolution of common project/capability pairs\n\nThis is not urgent - current implementation is correct and performant for typical workloads. Only becomes an issue with many external deps across many projects.","status":"deferred","priority":3,"issue_type":"task","created_at":"2025-12-21T23:45:16.360877-08:00","updated_at":"2025-12-23T12:27:02.223409-08:00","dependencies":[{"issue_id":"bd-687v","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:16.361493-08:00","created_by":"daemon"}]} -{"id":"bd-68bf","title":"Code review: bd mol bond implementation","description":"Review the mol bond command implementation before shipping.\n\nFocus areas:\n1. runMolBond() - polymorphic dispatch logic correctness\n2. 
bondProtoProto() - compound proto creation, dependency wiring\n3. bondProtoMol() / bondMolProto() - spawn and attach logic\n4. bondMolMol() - joining molecules, lineage tracking\n5. BondRef usage - is lineage tracked correctly?\n6. Error handling - are all failure modes covered?\n7. Edge cases - what could go wrong?\n\nFile: cmd/bd/mol.go (lines 485-859)\nCommit: 386b513e","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T10:13:09.425229-08:00","updated_at":"2025-12-21T11:18:14.206869-08:00","closed_at":"2025-12-21T11:18:14.206869-08:00","close_reason":"Reviewed and fixed label persistence bug","dependencies":[{"issue_id":"bd-68bf","depends_on_id":"bd-o91r","type":"discovered-from","created_at":"2025-12-21T10:13:09.426471-08:00","created_by":"daemon"}]} -{"id":"bd-68e4","title":"doctor --fix should export when DB has more issues than JSONL","description":"When 'bd doctor' detects a count mismatch (DB has more issues than JSONL), it currently recommends 'bd sync --import-only', which imports JSONL into DB. But JSONL is the source of truth, not the DB.\n\n**Current behavior:**\n- Doctor detects: DB has 355 issues, JSONL has 292\n- Recommends: 'bd sync --import-only' \n- User runs it: Returns '0 created, 0 updated' (no-op, because JSONL hasn't changed)\n- User is stuck\n\n**Root cause:**\nThe doctor fix is one-directional (JSONLβ†’DB) when it should be bidirectional. 
If DB has MORE issues, they haven't been exported yet - the fix should be 'bd export' (DBβ†’JSONL), not import.\n\n**Desired fix:**\nIn fix.DBJSONLSync(), detect which has more data:\n- If DB \u003e JSONL: Run 'bd export' to sync JSONL (since DB is the working copy)\n- If JSONL \u003e DB: Run 'bd sync --import-only' to import (JSONL is source of truth)\n- If equal but timestamps differ: Detect based on file mtime\n\nThis makes 'bd doctor --fix' actually fix the problem instead of being a no-op.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T11:17:20.994319182-07:00","updated_at":"2025-12-21T11:23:24.38523731-07:00","closed_at":"2025-12-21T11:23:24.38523731-07:00"} -{"id":"bd-6fe4622f","title":"Remove unreachable utility functions","description":"Several small utility functions are unreachable:\n\nFiles to clean:\n1. `internal/storage/sqlite/hash.go` - `computeIssueContentHash` (line 17)\n - Check if entire file can be deleted if only contains this function\n\n2. `internal/config/config.go` - `FileUsed` (line 151)\n - Delete unused config helper\n\n3. `cmd/bd/git_sync_test.go` - `verifyIssueOpen` (line 300)\n - Delete dead test helper\n\n4. `internal/compact/haiku.go` - `HaikuClient.SummarizeTier2` (line 81)\n - Tier 2 summarization not implemented\n - Options: implement feature OR delete method\n\nImpact: Removes 50-100 LOC depending on decisions","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T16:20:02.434573-07:00","updated_at":"2025-12-17T22:58:34.563993-08:00","closed_at":"2025-12-17T22:58:34.563993-08:00","close_reason":"Closed"} -{"id":"bd-6gd","title":"Remove legacy MCP Agent Mail integration","description":"## Summary\n\nRemove the legacy MCP Agent Mail system that requires an external HTTP server. Keep the native `bd mail` system which stores messages as git-synced issues.\n\n## Background\n\nTwo mail systems exist in the codebase:\n1. 
**Legacy Agent Mail** (`bd message`) - External server dependency, complex setup\n2. **Native bd mail** (`bd mail`) - Built-in, git-synced, no dependencies\n\nThe legacy system causes confusion and is no longer needed. Gas Town's Town Mail will use the native `bd mail` system.\n\n## Files to Delete\n\n### CLI Command\n- [ ] `cmd/bd/message.go` - The `bd message` command implementation\n\n### MCP Integration\n- [ ] `integrations/beads-mcp/src/beads_mcp/mail.py` - HTTP wrapper for Agent Mail server\n- [ ] `integrations/beads-mcp/src/beads_mcp/mail_tools.py` - MCP tool definitions\n- [ ] `integrations/beads-mcp/tests/test_mail.py` - Tests for legacy mail\n\n### Documentation\n- [ ] `docs/AGENT_MAIL.md`\n- [ ] `docs/AGENT_MAIL_QUICKSTART.md`\n- [ ] `docs/AGENT_MAIL_DEPLOYMENT.md`\n- [ ] `docs/AGENT_MAIL_MULTI_WORKSPACE_SETUP.md`\n- [ ] `docs/adr/002-agent-mail-integration.md`\n\n## Code to Update\n\n- [ ] Remove `message` command registration from `cmd/bd/main.go`\n- [ ] Remove mail tool imports/registration from MCP server `__init__.py` or `server.py`\n- [ ] Check for any other references to Agent Mail in the codebase\n\n## Verification\n\n- [ ] `bd message` command no longer exists\n- [ ] `bd mail` command still works\n- [ ] MCP server starts without errors\n- [ ] Tests pass\n","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T23:04:04.099935-08:00","updated_at":"2025-12-17T23:13:24.128752-08:00","closed_at":"2025-12-17T23:13:24.128752-08:00","close_reason":"Removed legacy MCP Agent Mail integration. 
Kept native bd mail system."} -{"id":"bd-6ns7","title":"test hook pin","status":"tombstone","priority":2,"issue_type":"task","assignee":"stevey","created_at":"2025-12-23T04:39:16.619755-08:00","updated_at":"2025-12-23T04:51:29.436788-08:00","deleted_at":"2025-12-23T04:51:29.436788-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-6pc","title":"Implement bd pin/unpin commands","description":"Add 'bd pin \u003cid\u003e' and 'bd unpin \u003cid\u003e' commands to toggle the pinned status of issues. Should support multiple IDs like other bd commands.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:28.292937-08:00","updated_at":"2025-12-19T17:43:35.713398-08:00","closed_at":"2025-12-19T00:35:31.612589-08:00","dependencies":[{"issue_id":"bd-6pc","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.119852-08:00","created_by":"daemon"},{"issue_id":"bd-6pc","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.352848-08:00","created_by":"daemon"}]} -{"id":"bd-6rl","title":"Merge3Way public API does not expose TTL parameter","description":"The public Merge3Way() function in merge.go does not allow callers to configure the tombstone TTL. It hard-codes the default via merge3WayWithTTL(). While merge3WayWithTTL() exists, it is unexported (lowercase). This means the CLI and tests cannot configure TTL at merge time. Use cases: testing with different TTL values, per-repository TTL configuration, debugging with short TTL, supporting --ttl flag in bd merge command (mentioned in design doc bd-zvg). Recommendation: Export Merge3WayWithTTL (rename to uppercase). Files: internal/merge/merge.go:77, 292-298","status":"open","priority":3,"issue_type":"feature","created_at":"2025-12-05T16:36:15.756814-08:00","updated_at":"2025-12-05T16:36:15.756814-08:00"} -{"id":"bd-6s61","title":"Version Bump: {{version}}","description":"Release checklist for version {{version}}. 
This molecule ensures all release steps are completed properly.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-19T22:55:42.487701-08:00","updated_at":"2025-12-20T17:59:26.261233-08:00","closed_at":"2025-12-20T01:18:47.905306-08:00","labels":["molecule","template"]} -{"id":"bd-6sm6","title":"Improve test coverage for internal/export (37.1% β†’ 60%)","description":"The export package has only 37.1% test coverage. Export functionality needs good coverage to ensure data integrity.\n\nCurrent coverage: 37.1%\nTarget coverage: 60%","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/alpha","created_at":"2025-12-13T20:43:06.802277-08:00","updated_at":"2025-12-23T22:29:35.499415-08:00"} -{"id":"bd-6ss","title":"Improve test coverage","description":"The test suite reports less than 45% code coverage. Identify the specific uncovered areas of the codebase, including modules, functions, or features. Rank them by potential impact on system reliability and business value, from most to least, and provide actionable recommendations for improving coverage in each area.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-18T06:54:23.036822442-07:00","updated_at":"2025-12-18T07:17:49.245940799-07:00","closed_at":"2025-12-18T07:17:49.245940799-07:00"} -{"id":"bd-70an","title":"test pin","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T11:19:16.760214-08:00","updated_at":"2025-12-21T11:19:46.500688-08:00","closed_at":"2025-12-21T11:19:46.500688-08:00","close_reason":"test issue for pin fix"} -{"id":"bd-746","title":"Fix resolvePartialID stub in workflow.go","description":"The resolvePartialID function at workflow.go:921-925 is a stub that just returns the ID unchanged. 
Should use utils.ResolvePartialID for proper partial ID resolution in direct mode (non-daemon).","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-17T22:22:57.586917-08:00","updated_at":"2025-12-17T22:34:07.270168-08:00","closed_at":"2025-12-17T22:34:07.270168-08:00"} -{"id":"bd-74w1","title":"Consolidate duplicate path-finding utilities (findJSONLPath, findBeadsDir, findGitRoot)","description":"Code health review found these functions defined in multiple places:\n\n- findJSONLPath() in autoflush.go:45-73 and doctor/fix/migrate.go\n- findBeadsDir() in autoimport.go:197-239 (with git worktree handling)\n- findGitRoot() in autoimport.go:242-269 (Windows path conversion)\n\nThe beads package has public FindBeadsDir() and FindJSONLPath() APIs that should be used consistently.\n\nImpact: Bug fixes need to be applied in multiple places. Git worktree handling may not be replicated everywhere.\n\nFix: Consolidate all implementations to use the beads package APIs. Remove duplicates.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-16T18:17:16.694293-08:00","updated_at":"2025-12-22T21:13:46.83103-08:00","closed_at":"2025-12-22T21:13:46.83103-08:00","close_reason":"Consolidated duplicate path-finding utilities: findGitRoot() now delegates to git.GetRepoRoot(), findBeadsDir() replaced with beads.FindBeadsDir() across 8 files"} -{"id":"bd-754r","title":"Merge: bd-thgk","description":"branch: polecat/Compactor\ntarget: main\nsource_issue: bd-thgk\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:41:43.965771-08:00","updated_at":"2025-12-23T19:12:08.345449-08:00","closed_at":"2025-12-23T19:12:08.345449-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-77gm","title":"Import reports misleading '0 created, 0 updated' when actually importing all issues","description":"When running 'bd import' on a fresh database (no existing 
issues), the command reports 'Import complete: 0 created, 0 updated' even though it successfully imported all issues from the JSONL file.\n\n**Steps to reproduce:**\n1. Delete .beads/beads.db\n2. Run: bd import .beads/issues.jsonl\n3. Observe output: 'Import complete: 0 created, 0 updated'\n4. Run: bd list\n5. Confirm: All issues are actually present in the database\n\n**Expected behavior:**\nReport the actual number of issues imported, e.g., 'Import complete: 523 created, 0 updated'\n\n**Actual behavior:**\n'Import complete: 0 created, 0 updated' (misleading - makes user think import failed)\n\n**Impact:**\n- Users think import failed when it succeeded\n- Confusing during database sync operations (e.g., after git pull)\n- Makes debugging harder (can't tell if import actually worked)\n\n**Context:**\nDiscovered during VC session when syncing database after git pull. The misleading message caused confusion about whether the database was properly synced with the canonical JSONL file.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-11-09T16:20:13.191156-08:00","updated_at":"2025-12-21T21:13:27.057292-08:00","closed_at":"2025-12-21T21:13:27.057292-08:00","close_reason":"Already fixed in commit 196ce3a6 - added validation for positional arguments"} -{"id":"bd-7b7h","title":"bd sync --merge fails due to chicken-and-egg: .beads/ always dirty","description":"## Problem\n\nWhen sync.branch is configured (e.g., beads-sync), the bd sync workflow creates a chicken-and-egg problem:\n\n1. `bd sync` commits changes to beads-sync via worktree\n2. `bd sync` copies JSONL to main working dir via `copyJSONLToMainRepo()` (sync.go line 364, worktree.go line 678-685)\n3. The copy is NOT committed to main - it just updates the working tree\n4. `bd sync --merge` checks for clean working dir (sync.go line 1547-1548)\n5. 
`bd sync --merge` FAILS because .beads/issues.jsonl is uncommitted!\n\n## Impact\n\n- sync.branch workflow is fundamentally broken\n- Users cannot periodically merge beads-sync β†’ main\n- Main branch always shows as dirty\n- Creates confusion about git state\n\n## Root Cause\n\nsync.go:1547-1548:\n```go\nif len(strings.TrimSpace(string(statusOutput))) \u003e 0 {\n return fmt.Errorf(\"main branch has uncommitted changes, please commit or stash them first\")\n}\n```\n\nThis check blocks merge when ANY uncommitted changes exist, including the .beads/ changes that `bd sync` itself created.\n\n## Proposed Fix\n\nOption A: Exclude .beads/ from the clean check in `mergeSyncBranch`:\n```go\n// Check if there are non-beads uncommitted changes\nstatusCmd := exec.CommandContext(ctx, \"git\", \"status\", \"--porcelain\", \"--\", \":!.beads/\")\n```\n\nOption B: Auto-stash .beads/ changes before merge, restore after\n\nOption C: Change the workflow - do not copy JSONL to main working dir, instead always read from worktree\n\n## Files to Modify\n\n- cmd/bd/sync.go:1540-1549 (mergeSyncBranch function)\n- Possibly internal/syncbranch/worktree.go (copyJSONLToMainRepo)","notes":"## Fix Implemented\n\nModified cmd/bd/sync.go mergeSyncBranch function:\n\n1. **Exclude .beads/ from dirty check** (line 1543):\n Changed `git status --porcelain` to `git status --porcelain -- :!.beads/`\n This allows merge to proceed when only .beads/ has uncommitted changes.\n\n2. 
**Restore .beads/ to HEAD before merge** (lines 1553-1561):\n Added `git checkout HEAD -- .beads/` before merge to prevent\n \"Your local changes would be overwritten by merge\" errors.\n The .beads/ changes are redundant since they came FROM beads-sync.\n\n## Testing\n\n- All cmd/bd sync/merge tests pass\n- All internal/syncbranch tests pass\n- Manual verification needed for full workflow","status":"tombstone","priority":0,"issue_type":"bug","created_at":"2025-12-16T23:06:06.97703-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-7bbc4e6a","title":"Add MCP server functions for repair commands","description":"**Summary:** Added MCP server repair functions for agent dependency management, system validation, and pollution detection. Implemented across BdClientBase, BdCliClient, and daemon clients to enhance system diagnostics and self-healing capabilities.\n\n**Key Decisions:** \n- Expose repair_deps(), detect_pollution(), validate() via MCP server\n- Create abstract method stubs with fallback to CLI execution\n- Use @mcp.tool decorators for function registration\n\n**Resolution:** Successfully implemented comprehensive repair command infrastructure, enabling more robust system health monitoring and automated remediation with full CLI and daemon support.","notes":"Implemented all three MCP server functions:\n\n1. **repair_deps(fix=False)** - Find/fix orphaned dependencies\n2. **detect_pollution(clean=False)** - Detect/clean test issues \n3. 
**validate(checks=None, fix_all=False)** - Run comprehensive health checks\n\nChanges:\n- Added abstract methods to BdClientBase\n- Implemented in BdCliClient (CLI execution)\n- Added NotImplementedError stubs in BdDaemonClient (falls back to CLI)\n- Created wrapper functions in tools.py\n- Registered @mcp.tool decorators in server.py\n\nAll commands tested and working with --no-daemon flag.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T19:37:55.72639-07:00","updated_at":"2025-12-16T01:08:11.983953-08:00","closed_at":"2025-11-07T19:38:12.152437-08:00"} -{"id":"bd-7di","title":"worktree: any bd command is slow","description":"in a git worktree any bd command is slow, with a 2-3s pause before any results are shown. The identical command with `--no-daemon` is near instant.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-05T15:33:42.924618693-07:00","updated_at":"2025-12-05T15:33:42.924618693-07:00"} -{"id":"bd-7h5","title":"Add pinned field to issue schema","description":"Add boolean 'pinned' field to the issue schema. When true, the issue is marked as a persistent context marker that should not be treated as a work item.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:26.767247-08:00","updated_at":"2025-12-19T00:08:59.854605-08:00","closed_at":"2025-12-19T00:08:59.854605-08:00","close_reason":"Implemented by polecat Slit - pushed to main","dependencies":[{"issue_id":"bd-7h5","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:55.98635-08:00","created_by":"daemon"}]} -{"id":"bd-7h7","title":"bd init should stop running daemon to avoid stale cache","description":"When running bd init, any running daemon continues with stale cached data, causing bd stats and other commands to show old counts.\n\nRepro:\n1. Have daemon running with 788 issues cached\n2. Clean JSONL to 128 issues, delete db, run bd init\n3. bd stats still shows 788 (daemon cache)\n4. 
Must manually run bd daemon --stop\n\nFix: bd init should automatically stop any running daemon before reinitializing.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T13:26:47.117226-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-7m16","title":"GH#519: bd sync fails when sync.branch is currently checked-out branch","description":"bd sync tries to create worktree for sync.branch even when already on that branch. Should commit directly instead. See GitHub issue #519.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:36.613211-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-7pwh","title":"HOP-compatible schema additions","description":"Add optional fields to Beads schema to enable future HOP integration.\nAll fields are backwards-compatible (optional, omitted if empty).\n\n## Reference\nSee ~/gt/docs/hop/BEADS-SCHEMA-CHANGES.md for full specification.\n\n## P1 Changes (Must Have Before Launch)\n\n### 1. EntityRef type\nStructured entity reference that can become HOP URI:\n```go\ntype EntityRef struct {\n Name string // \"polecat/Nux\"\n Platform string // \"gastown\"\n Org string // \"steveyegge\" \n ID string // \"polecat-nux\"\n}\n```\n\n### 2. creator field\nEvery issue tracks who created it (EntityRef).\n\n### 3. assignee_ref field\nStructured form alongside existing string assignee.\n\n### 4. validations array\nTrack who validated work completion:\n```go\ntype Validation struct {\n Validator *EntityRef\n Outcome string // accepted, rejected, revision_requested\n Timestamp time.Time\n Score *float32 // Future\n}\n```\n\n## P2 Changes (Should Have)\n\n### 5. 
work_type field\n\"mutex\" (default) or \"open_competition\"\n\n### 6. crystallizes field\nBoolean - does this work compound (true) or evaporate (false)?\n\n### 7. cross_refs field\nArray of URIs to beads in other repos:\n- \"beads://github/anthropics/claude-code/bd-xyz\"\n\n## P3 Changes (Nice to Have)\n\n### 8. skill_vector placeholder\nReserved for future embeddings: []float32\n\n## Implementation Notes\n- All fields optional in JSONL serialization\n- Empty/null fields omit from output\n- No migration needed for existing data\n- CLI additions: --creator, --validated-by filters","notes":"Scope reduced after review. P1 only: EntityRef type, creator field, validations array. Deferred: assignee_ref, work_type, crystallizes, cross_refs, skill_vector (YAGNI - semantics unclear, can add later when needed).","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-22T02:42:39.267984-08:00","updated_at":"2025-12-22T20:09:09.211821-08:00","closed_at":"2025-12-22T20:09:09.211821-08:00","close_reason":"HOP P1 schema additions complete: EntityRef type, Creator field, Validations array. 
P2/P3 items deferred to bd-zt59."} -{"id":"bd-7tuu","title":"Commit and push release","description":"git add -A \u0026\u0026 git commit \u0026\u0026 git push to trigger CI","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:02.053382-08:00","updated_at":"2025-12-20T01:23:52.484043-08:00","closed_at":"2025-12-20T01:23:52.484043-08:00","close_reason":"Superseded by 0.30.7 release - already committed and pushed","dependencies":[{"issue_id":"bd-7tuu","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:15.021087-08:00","created_by":"daemon"},{"issue_id":"bd-7tuu","depends_on_id":"bd-hw3w","type":"blocks","created_at":"2025-12-19T22:56:23.291591-08:00","created_by":"daemon"}]} -{"id":"bd-7yg","title":"Git merge driver uses invalid placeholders (%L, %R instead of %A, %B)","description":"## Problem\n\nThe beads git merge driver is configured with invalid Git placeholders:\n\n```\ngit config merge.beads.driver \"bd merge %A %O %L %R\"\n```\n\nGit doesn't recognize `%L` or `%R` as valid merge driver placeholders. The valid placeholders are:\n- `%O` = base (common ancestor)\n- `%A` = current version (ours)\n- `%B` = other version (theirs)\n\n## Impact\n\n- Affects ALL users when they have `.beads/beads.jsonl` merge conflicts\n- Automatic JSONL merge fails with error: \"error reading left file: failed to open file: open 7: no such file or directory\"\n- Users must manually resolve conflicts instead of getting automatic merge\n\n## Root Cause\n\nThe `bd init` command (or wherever the merge driver is configured) is using non-standard placeholders. 
When Git encounters `%L` and `%R`, it either passes them literally or interprets them incorrectly.\n\n## Fix\n\nUpdate the merge driver configuration to:\n```\ngit config merge.beads.driver \"bd merge %A %O %A %B\"\n```\n\nWhere:\n- 1st `%A` = output file (current file, will be overwritten)\n- `%O` = base (common ancestor)\n- 2nd `%A` = left/current version\n- `%B` = right/other version\n\n## Action Items\n\n1. Fix `bd init` (or equivalent setup command) to use correct placeholders\n2. Add migration/warning for existing users with misconfigured merge driver\n3. Update documentation with correct merge driver setup\n4. Consider adding validation when `bd init` is run","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-21T19:51:55.747608-05:00","updated_at":"2025-12-17T23:13:40.532368-08:00","closed_at":"2025-12-17T17:24:52.678668-08:00"} -{"id":"bd-7z4","title":"Add tests for delete operations","description":"Core delete functionality including deleteViaDaemon, createTombstone, and deleteIssue functions have 0% coverage. 
These are critical for data integrity and need comprehensive test coverage.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:34.867680882-07:00","updated_at":"2025-12-18T07:00:34.867680882-07:00","dependencies":[{"issue_id":"bd-7z4","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:34.870254935-07:00","created_by":"matt"}]} -{"id":"bd-801b","title":"Merge: bd-bqcc","description":"branch: polecat/capable\ntarget: main\nsource_issue: bd-bqcc\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T00:26:04.306756-08:00","updated_at":"2025-12-23T01:33:25.728087-08:00","closed_at":"2025-12-23T01:33:25.728087-08:00","close_reason":"Merged to main"} -{"id":"bd-89f89fc0","title":"Remove unreachable RPC methods","description":"Several RPC server and client methods are unreachable and should be removed:\n\nServer methods (internal/rpc/server.go):\n- `Server.GetLastImportTime` (line 2116)\n- `Server.SetLastImportTime` (line 2123)\n- `Server.findJSONLPath` (line 2255)\n\nClient methods (internal/rpc/client.go):\n- `Client.Import` (line 311) - RPC import not used (daemon uses autoimport)\n\nEvidence:\n```bash\ngo run golang.org/x/tools/cmd/deadcode@latest -test ./...\n```\n\nImpact: Removes ~80 LOC of unused RPC code","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T16:20:02.432202-07:00","updated_at":"2025-12-17T22:58:34.564401-08:00","closed_at":"2025-12-17T22:58:34.564401-08:00","close_reason":"Closed"} -{"id":"bd-8b0x","title":"Remove molecule.go (simple instantiation)","description":"molecule.go uses is_template field for simple single-issue cloning. This is too simple for what molecules should be - full DAG orchestration. The use case is covered by bd mol bond with a single-issue molecule. 
Delete molecule.go and its commands.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-20T23:52:15.041776-08:00","updated_at":"2025-12-21T00:04:32.335849-08:00","closed_at":"2025-12-21T00:04:32.335849-08:00","close_reason":"Removed molecule.go - use bd mol bond instead","dependencies":[{"issue_id":"bd-8b0x","depends_on_id":"bd-ffjt","type":"blocks","created_at":"2025-12-20T23:52:25.807967-08:00","created_by":"daemon"}]} -{"id":"bd-8ca7","title":"Merge: bd-au0.6","description":"branch: polecat/furiosa\ntarget: main\nsource_issue: bd-au0.6\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:42:30.870178-08:00","updated_at":"2025-12-23T21:21:57.695179-08:00","closed_at":"2025-12-23T21:21:57.695179-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-8e0q","title":"Merge: beads-ocs","description":"branch: polecat/valkyrie\ntarget: main\nsource_issue: beads-ocs\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:24:45.281478-08:00","updated_at":"2025-12-20T23:17:26.995706-08:00","closed_at":"2025-12-20T23:17:26.995706-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-8fgn","title":"test hash length","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T13:49:32.113843-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-8g8","title":"Fix G304 potential file inclusion in cmd/bd/tips.go:259","description":"Linting issue: G304: Potential file inclusion via variable (gosec) at cmd/bd/tips.go:259:18. 
Error: if data, err := os.ReadFile(settingsPath); err == nil {","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:34:57.189730843-07:00","updated_at":"2025-12-17T23:13:40.534569-08:00","closed_at":"2025-12-17T16:46:11.029837-08:00"} -{"id":"bd-8hy","title":"Kill running daemons","description":"Stop all bd daemons before release:\n\n```bash\npkill -f 'bd.*daemon' || true\nsleep 1\npgrep -lf 'bd.*daemon' # Should show nothing\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:58.255478-08:00","updated_at":"2025-12-18T22:43:55.394966-08:00","closed_at":"2025-12-18T22:43:55.394966-08:00","dependencies":[{"issue_id":"bd-8hy","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.23168-08:00","created_by":"daemon"}]} -{"id":"bd-8pyn","title":"Version Bump: 0.30.7","description":"Release checklist for version 0.30.7. This molecule ensures all release steps are completed properly.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-19T22:56:48.648694-08:00","updated_at":"2025-12-20T00:49:51.927518-08:00","closed_at":"2025-12-20T00:25:59.529183-08:00"} -{"id":"bd-8v2","title":"Add {{version}} to versionChanges in info.go","description":"Add new entry at TOP of versionChanges in cmd/bd/info.go with release notes from CHANGELOG.md. Must do before bump-version.sh --commit.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:00.482846-08:00","updated_at":"2025-12-18T22:45:21.465817-08:00","closed_at":"2025-12-18T22:45:21.465817-08:00","dependencies":[{"issue_id":"bd-8v2","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.496649-08:00","created_by":"daemon"},{"issue_id":"bd-8v2","depends_on_id":"bd-kyo","type":"blocks","created_at":"2025-12-18T22:43:20.69619-08:00","created_by":"daemon"}]} -{"id":"bd-8wgo","title":"bd merge omits priority:0 due to omitempty JSON tag","description":"GitHub issue #671. 
The merge code in internal/merge/merge.go uses 'omitempty' on the Priority field, which causes priority:0 (P0/critical) to be dropped from JSON output since 0 is Go's zero value for int. Fix: either remove omitempty from Priority field or use a pointer (*int). This affects the git merge driver and causes P0 issues to lose their priority.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T14:35:15.083146-08:00","updated_at":"2025-12-21T15:41:14.522554-08:00","closed_at":"2025-12-21T15:41:14.522554-08:00","close_reason":"Fixed: removed omitempty from Priority field in types.Issue and merge.Issue"} -{"id":"bd-8x3w","title":"Add composite index (issue_id, type) on dependencies table","description":"GetBlockedIssues uses EXISTS clauses that filter by issue_id AND type together.\n\n**Query pattern (ready.go:427-429):**\n```sql\nEXISTS (\n SELECT 1 FROM dependencies d2\n WHERE d2.issue_id = i.id AND d2.type = 'blocks'\n)\n```\n\n**Problem:** Only idx_dependencies_issue exists. SQLite must filter type after index lookup.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_dependencies_issue_type ON dependencies(issue_id, type);\n```\n\n**Note:** This complements the existing idx_dependencies_depends_on_type for the reverse direction.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T22:58:52.876846-08:00","updated_at":"2025-12-22T23:15:13.840789-08:00","closed_at":"2025-12-22T23:15:13.840789-08:00","close_reason":"Implemented in migration 026_additional_indexes.go","dependencies":[{"issue_id":"bd-8x3w","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:52.877536-08:00","created_by":"daemon"}]} -{"id":"bd-90v","title":"bd prime: AI context loading and Claude Code integration","description":"Implement `bd prime` command and Claude Code hooks for context recovery. 
Hooks work with BOTH MCP server and CLI approaches - they solve the context memory problem (keeping bd workflow fresh after compaction) not the tool access problem (MCP vs CLI).","status":"open","priority":2,"issue_type":"epic","created_at":"2025-11-11T23:31:12.119012-08:00","updated_at":"2025-11-12T00:11:07.743189-08:00"} -{"id":"bd-95k8","title":"Pinned field available in beads v0.37.0","description":"Hey max,\n\nHeads up on your mail overhaul work:\n\n1. **Pinned field is available** - beads v0.37.0 (released by dave earlier) includes the pinned field on issues. You'll want to add this to BeadsMessage in types.go.\n\n2. **Database migration** - Check if existing .beads databases need migration to support the pinned field. Run `bd doctor` to see if it flags anything.\n\n3. **Sorting task** - Once you have the pinned field, gt-ngu1 (pinned beads first in mail inbox) needs implementing. Since messages now come from `bd list --type=message`, you'll need to either:\n - Sort in listBeads() after fetching, or\n - Ensure bd list returns pinned items first (may already do this?)\n\nCheck what version of bd you're building against.\n\n-- Mayor","status":"closed","priority":2,"issue_type":"message","assignee":"gastown/crew/max","created_at":"2025-12-20T17:51:57.315956-08:00","updated_at":"2025-12-21T17:52:18.542169-08:00","closed_at":"2025-12-21T17:52:18.542169-08:00","close_reason":"Stale message - pinned field already available","labels":["from:beads-crew-dave","thread:thread-71ac20c7e432"]} -{"id":"bd-987a","title":"bd mol run: panic slice bounds out of range in mol_run.go:130","description":"## Problem\nbd mol run panics after successfully creating the molecule:\n\n```\nβœ“ Molecule running: created 9 issues\n Root issue: gt-i4lo (pinned, in_progress)\n Assignee: stevey\n\nNext steps:\n bd ready # Find unblocked work in this molecule\npanic: runtime error: slice bounds out of range [:8] with length 7\n\ngoroutine 1 [running]:\nmain.runMolRun(0x1014fc0c0, 
{0x140001e0f80, 0x1, 0x10089daad?})\n /Users/stevey/gt/beads/crew/dave/cmd/bd/mol_run.go:130 +0xc38\n```\n\n## Reproduction\n```bash\nbd --no-daemon mol run gt-lwuu --var issue=gt-test123\n```\nWhere gt-lwuu is a mol-polecat-work proto with 8 child steps.\n\n## Impact\nThe molecule IS created successfully - the panic happens after creation when formatting the \"Next steps\" output.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T21:48:55.396018-08:00","updated_at":"2025-12-21T22:57:46.827469-08:00","closed_at":"2025-12-21T22:57:46.827469-08:00","close_reason":"Fixed: removed unsafe rootID[:8] slice - now uses full ID"} -{"id":"bd-9cdc","title":"Update docs for import bug fix","description":"Update AGENTS.md, README.md, TROUBLESHOOTING.md with import.orphan_handling config documentation. Document resurrection behavior, tombstones, config modes. Add troubleshooting section for import failures with deleted parents.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-04T12:32:30.770415-08:00","updated_at":"2025-12-21T21:14:08.328627-08:00","closed_at":"2025-12-21T21:14:08.328627-08:00","close_reason":"Already completed - documentation in CONFIG.md, CLI_REFERENCE.md, and TROUBLESHOOTING.md"} -{"id":"bd-9g1z","title":"Fix or remove TestFindJSONLPathDefault (issue #356)","description":"Code health review found .test-skip permanently skips TestFindJSONLPathDefault.\n\nThe test references issue #356 about wrong JSONL filename expectations (issues.jsonl vs beads.jsonl).\n\nTest file: internal/beads/beads_test.go\n\nThe underlying migration from beads.jsonl to issues.jsonl may be complete, so either:\n1. Fix the test expectations\n2. Remove the test if no longer needed\n3. 
Document why it remains skipped","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-16T18:17:31.33975-08:00","updated_at":"2025-12-22T21:24:50.357688-08:00","closed_at":"2025-12-22T21:24:50.357688-08:00","close_reason":"Test now passes - removed from .test-skip. Code was fixed in utils.FindJSONLInDir to return issues.jsonl as default."} -{"id":"bd-9l0h","title":"Run tests and linting","description":"go test -short ./... \u0026\u0026 golangci-lint run ./...","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:19.527602-08:00","updated_at":"2025-12-20T21:55:29.660914-08:00","closed_at":"2025-12-20T21:55:29.660914-08:00","close_reason":"Tests passed, go vet clean","dependencies":[{"issue_id":"bd-9l0h","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:19.529203-08:00","created_by":"daemon"},{"issue_id":"bd-9l0h","depends_on_id":"bd-gocx","type":"blocks","created_at":"2025-12-20T21:53:29.753682-08:00","created_by":"daemon"}]} -{"id":"bd-9qj5","title":"Merge: bd-c7y5","description":"branch: polecat/toast\ntarget: main\nsource_issue: bd-c7y5\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:45:02.626929-08:00","updated_at":"2025-12-23T21:21:57.699742-08:00","closed_at":"2025-12-23T21:21:57.699742-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-9usz","title":"Test suite hangs/never finishes","description":"Running 'go test ./... -count=1' hangs indefinitely. The full test suite never completes, making it difficult to verify changes. 
Need to investigate which tests are hanging and fix or add timeouts.","status":"open","priority":2,"issue_type":"bug","created_at":"2025-12-16T21:56:27.80191-08:00","updated_at":"2025-12-23T21:22:25.810705-08:00"} -{"id":"bd-a0cp","title":"Consider using types.Status in merge package for type safety","description":"The merge package uses string for status comparison (e.g., result.Status == closed, issue.Status == StatusTombstone). The types package defines Status as a type alias with validation. While the merge package needs its own Issue struct for JSONL flexibility, it could import and use types.Status for constants to get compile-time type checking. Current code: if left == closed || right == closed. Could be: if left == string(types.StatusClosed). This is low priority since string comparison works correctly. Files: internal/merge/merge.go:44, 488, 501-521","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-05T16:37:10.690424-08:00","updated_at":"2025-12-05T16:37:10.690424-08:00"} -{"id":"bd-a15d","title":"Add test files for internal/storage","description":"The internal/storage package has no test files at all. This package provides the storage interface abstraction.\n\nCurrent coverage: N/A (no test files)\nTarget: Add basic interface tests","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-13T20:43:11.363017-08:00","updated_at":"2025-12-13T21:01:20.925779-08:00"} -{"id":"bd-a3sj","title":"RemoveDependency fails on external deps - FK violation in dirty_issues","description":"In dependencies.go:225, RemoveDependency marks BOTH issueID and dependsOnID as dirty. For external refs (e.g., external:project:capability), dependsOnID doesn't exist in the issues table. 
This causes FK violation since dirty_issues.issue_id has FK constraint to issues.id.\n\nFix: Check if dependsOnID starts with 'external:' and only mark source issue as dirty, matching the logic in AddDependency (lines 162-170).\n\nRepro: bd dep rm \u003cissue\u003e external:project:capability","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T23:44:51.981138-08:00","updated_at":"2025-12-22T17:48:29.062424-08:00","closed_at":"2025-12-22T17:48:29.062424-08:00","close_reason":"Fixed: check for external: prefix before marking dependsOnID as dirty in RemoveDependency. Added test.","dependencies":[{"issue_id":"bd-a3sj","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:44:51.982343-08:00","created_by":"daemon"}]} -{"id":"bd-a62m","title":"Update version to 0.33.2 in version.go","description":"Edit cmd/bd/version.go line 17:\n\n```go\nVersion = \"0.33.2\"\n```\n\nVerify with: `grep 'Version =' cmd/bd/version.go`","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760384-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Version already 0.33.2 in version.go","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-a9y3","title":"Add composite index (status, priority) for common list queries","description":"SearchIssues and GetReadyWork frequently filter by status and sort by priority. Currently uses two separate indexes.\n\n**Common query pattern (queries.go:1646-1647):**\n```sql\nWHERE status = ? \nORDER BY priority ASC, created_at DESC\n```\n\n**Problem:** Index merge or full scan when both columns are used.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_issues_status_priority ON issues(status, priority);\n```\n\n**Expected impact:** Faster bd list, bd ready with filters. 
Particularly noticeable at 10K+ issues.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T22:58:50.515275-08:00","updated_at":"2025-12-22T23:15:13.838976-08:00","closed_at":"2025-12-22T23:15:13.838976-08:00","close_reason":"Implemented in migration 026_additional_indexes.go","dependencies":[{"issue_id":"bd-a9y3","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:50.516072-08:00","created_by":"daemon"}]} -{"id":"bd-aay","title":"Warn on invalid depends_on references in workflow templates","description":"workflow.go:780-781 silently skips invalid dependency names. Should log a warning when a depends_on reference doesn't match any task ID in the template.","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-17T22:23:04.325253-08:00","updated_at":"2025-12-17T22:34:07.309495-08:00","closed_at":"2025-12-17T22:34:07.309495-08:00"} -{"id":"bd-abjw","title":"Consider consolidating config.yaml parsing into shared utility","description":"Multiple places parse config.yaml with custom structs:\n\n1. **autoimport.go:148** - `localConfig{SyncBranch}`\n2. **main.go:310** - strings.Contains for no-db (fragile, see bd-r6k2)\n3. **doctor.go:863** - strings.Contains for no-db (fragile, see bd-r6k2)\n4. 
**internal/config/config.go** - Uses viper (but caches at startup, problematic for tests)\n\nConsider creating a shared utility in `internal/configfile/` or extending the viper config:\n\n```go\n// internal/configfile/yaml.go\ntype YAMLConfig struct {\n SyncBranch string `yaml:\"sync-branch\"`\n NoDb bool `yaml:\"no-db\"`\n IssuePrefix string `yaml:\"issue-prefix\"`\n Author string `yaml:\"author\"`\n}\n\nfunc LoadYAML(beadsDir string) (*YAMLConfig, error) {\n // Parse config.yaml with proper YAML library\n}\n```\n\nBenefits:\n- Single source of truth for config.yaml structure\n- Proper YAML parsing everywhere\n- Easier to add new config fields\n\nTrade-off: May add complexity for simple one-off reads.","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-07T02:03:26.067311-08:00","updated_at":"2025-12-07T02:03:26.067311-08:00"} -{"id":"bd-adoe","title":"Add --hard flag to bd cleanup to permanently cull tombstones before cutoff date","description":"Currently tombstones persist for 30 days before cleanup prunes them. Need an official way to force-cull tombstones earlier than the default TTL, for scenarios like cleaning house after extended absence where resurrection from old clones is not a concern. 
Proposed: bd cleanup --hard --older-than N to bypass the 30-day tombstone TTL.","status":"tombstone","priority":2,"issue_type":"feature","created_at":"2025-12-16T01:17:31.064914-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"feature"} -{"id":"bd-aec5439f","title":"Update LINTING.md with current baseline","description":"After cleanup, document the remaining acceptable baseline in LINTING.md so we can track regression.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-27T18:53:10.38679-07:00","updated_at":"2025-12-17T22:58:34.564854-08:00","closed_at":"2025-12-17T22:58:34.564854-08:00","close_reason":"Closed"} -{"id":"bd-ahot","title":"HANDOFF: Molecule bonding - spawn done, bond next","description":"## Context\n\nContinuing work on bd-o5xe (Molecule bonding epic).\n\n## Completed This Session\n\n- bd-mh4w: Renamed bond to spawn in mol.go\n- bd-rnnr: Added BondRef data model to types.go\n\n## Now Unblocked\n\n1. bd-o91r: Polymorphic bond command [P1]\n2. bd-iw4z: Compound visualization [P2] \n3. bd-iq19: Distill command [P2]\n\n## Key Files\n\n- cmd/bd/mol.go\n- internal/types/types.go\n\n## Next Step\n\nStart with bd-o91r. 
Run bd show bd-o5xe for context.","status":"closed","priority":1,"issue_type":"message","created_at":"2025-12-21T01:32:13.940757-08:00","updated_at":"2025-12-21T11:24:30.171048-08:00","closed_at":"2025-12-21T11:24:30.171048-08:00","close_reason":"Stale handoff - work completed in later sessions"} -{"id":"bd-ajdv","title":"Push release v0.33.2 to remote","description":"Push the commit and tag:\n\n```bash\ngit push \u0026\u0026 git push --tags\n```\n\nVerify on GitHub that the tag appears in releases.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.762058-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Already pushed to remote","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-akcq","title":"Design molecule step hooks","description":"Hooks that fire between molecule steps. When a bead in a molecule closes, trigger hook that can spawn agent attention to prompts/requests. This enables reactive orchestration - the molecule drives, hooks respond. Gas Town feature built on Beads data plane.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-20T23:52:18.63487-08:00","updated_at":"2025-12-21T17:53:19.284064-08:00","closed_at":"2025-12-21T17:53:19.284064-08:00","close_reason":"Moved to gastown: gt-ne1t","dependencies":[{"issue_id":"bd-akcq","depends_on_id":"bd-icnf","type":"blocks","created_at":"2025-12-20T23:52:25.935274-08:00","created_by":"daemon"}]} -{"id":"bd-aks","title":"Add tests for import/export functionality","description":"Import/export functions like ImportIssues, exportToJSONLWithStore, and AutoImportIfNewer have low coverage. 
These are critical for data integrity and multi-repo synchronization.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:53.067006711-07:00","updated_at":"2025-12-19T09:54:57.011374404-07:00","closed_at":"2025-12-18T10:13:11.821944156-07:00","dependencies":[{"issue_id":"bd-aks","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:53.07185201-07:00","created_by":"matt"}]} -{"id":"bd-an4s","title":"Version Bump: 0.32.1","description":"Release checklist for version 0.32.1. Patch release with MCP output control params and pin field fix.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-20T21:53:01.315592-08:00","updated_at":"2025-12-20T21:57:13.909864-08:00","closed_at":"2025-12-20T21:57:13.909864-08:00","close_reason":"Version 0.32.1 released"} -{"id":"bd-ao0s","title":"bd graph crashes with --no-daemon on closed issues","description":"The `bd graph` command panics with nil pointer dereference when using `--no-daemon` flag on an issue with closed children.\n\n**Reproduction:**\n```bash\nbd graph bd-qqc --no-daemon\n# panic: runtime error: invalid memory address or nil pointer dereference\n# in main.computeDependencyCounts\n```\n\n**Stack trace:**\n```\npanic: runtime error: invalid memory address or nil pointer dereference\n[signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x1010bdfb0]\n\ngoroutine 1 [running]:\nmain.computeDependencyCounts(...)\n /Users/stevey/gt/beads/crew/emma/cmd/bd/graph.go:428\nmain.renderGraph(0x1400033bb80, 0x0)\n /Users/stevey/gt/beads/crew/emma/cmd/bd/graph.go:307 +0x300\n```\n\n**Location:** cmd/bd/graph.go:428 - computeDependencyCounts() not handling nil case","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-18T22:57:36.972585-08:00","updated_at":"2025-12-20T01:13:29.206821-08:00","closed_at":"2025-12-20T01:13:29.206821-08:00","close_reason":"Fixed: pass subgraph instead of nil to renderGraph"} 
-{"id":"bd-aq3s","title":"Merge: bd-u2sc.3","description":"branch: polecat/Modular\ntarget: main\nsource_issue: bd-u2sc.3\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T13:47:14.281479-08:00","updated_at":"2025-12-23T19:12:08.354548-08:00","closed_at":"2025-12-23T19:12:08.354548-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-au0","title":"Command Set Standardization \u0026 Flag Consistency","description":"Comprehensive improvements to bd command set based on 2025 audit findings.\n\n## Background\nSee docs/command-audit-2025.md for detailed analysis.\n\n## Goals\n1. Standardize flag naming and behavior across all commands\n2. Add missing flags for feature parity\n3. Fix naming confusion\n4. Improve consistency in JSON output\n\n## Success Criteria\n- All mutating commands support --dry-run (no --preview variants)\n- bd update supports label operations\n- bd search has filter parity with bd list\n- Priority flags accept both int and P0-P4 format everywhere\n- JSON output is consistent across all commands","status":"open","priority":2,"issue_type":"epic","created_at":"2025-11-21T21:05:55.672749-05:00","updated_at":"2025-11-21T21:05:55.672749-05:00"} -{"id":"bd-au0.10","title":"Add global verbosity flags (--verbose, --quiet)","description":"Add consistent verbosity controls across all commands.\n\n**Current state:**\n- bd init has --quiet flag\n- No other commands have verbosity controls\n- Debug output controlled by BD_VERBOSE env var\n\n**Proposal:**\nAdd persistent flags:\n- --verbose / -v: Enable debug output\n- --quiet / -q: Suppress non-essential output\n\n**Implementation:**\n- Add to rootCmd.PersistentFlags()\n- Replace BD_VERBOSE checks with flag checks\n- Standardize output levels:\n * Quiet: Errors only\n * Normal: Errors + success messages\n * Verbose: Errors + success + debug info\n\n**Files to modify:**\n- cmd/bd/main.go (add flags)\n- 
internal/debug/debug.go (respect flags)\n- Update all commands to respect quiet mode\n\n**Testing:**\n- Verify --verbose shows debug output\n- Verify --quiet suppresses normal output\n- Ensure errors always show regardless of mode","status":"open","priority":3,"issue_type":"task","created_at":"2025-11-21T21:08:21.600209-05:00","updated_at":"2025-11-21T21:08:21.600209-05:00","dependencies":[{"issue_id":"bd-au0.10","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:08:21.602557-05:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-au0.5","title":"Add date and priority filters to bd search","description":"Add date and priority filters to bd search for parity with bd list.\n\n## Current State\nbd search supports: --status, --type, --assignee, --label, --limit\nbd list supports: all of the above PLUS date ranges and priority filters\n\n## Filters to Add\n\n### Priority Filters\n```bash\nbd search \"query\" --priority 1 # Exact priority\nbd search \"query\" --priority-min 0 # P0 and above (higher priority)\nbd search \"query\" --priority-max 2 # P2 and below (lower priority)\n```\n\n### Date Filters\n```bash\nbd search \"query\" --created-after 2025-01-01\nbd search \"query\" --created-before 2025-12-31\nbd search \"query\" --updated-after 2025-01-01\nbd search \"query\" --closed-after 2025-01-01\n```\n\n### Content Filters\n```bash\nbd search \"query\" --desc-contains \"bug\"\nbd search \"query\" --notes-contains \"todo\"\nbd search \"query\" --empty-description # Issues with no description\nbd search \"query\" --no-assignee # Unassigned issues\nbd search \"query\" --no-labels # Issues without labels\n```\n\n## Files to Modify\n\n### 1. 
cmd/bd/search.go\nAdd flag definitions in init():\n```go\nsearchCmd.Flags().IntP(\"priority\", \"p\", -1, \"Filter by exact priority (0-4)\")\nsearchCmd.Flags().Int(\"priority-min\", -1, \"Filter by minimum priority\")\nsearchCmd.Flags().Int(\"priority-max\", -1, \"Filter by maximum priority\")\nsearchCmd.Flags().String(\"created-after\", \"\", \"Filter by creation date (YYYY-MM-DD)\")\nsearchCmd.Flags().String(\"created-before\", \"\", \"Filter by creation date\")\nsearchCmd.Flags().String(\"updated-after\", \"\", \"Filter by update date\")\nsearchCmd.Flags().String(\"updated-before\", \"\", \"Filter by update date\")\nsearchCmd.Flags().String(\"closed-after\", \"\", \"Filter by close date\")\nsearchCmd.Flags().String(\"closed-before\", \"\", \"Filter by close date\")\nsearchCmd.Flags().String(\"desc-contains\", \"\", \"Filter by description content\")\nsearchCmd.Flags().String(\"notes-contains\", \"\", \"Filter by notes content\")\nsearchCmd.Flags().Bool(\"empty-description\", false, \"Filter issues with empty description\")\nsearchCmd.Flags().Bool(\"no-assignee\", false, \"Filter unassigned issues\")\nsearchCmd.Flags().Bool(\"no-labels\", false, \"Filter issues without labels\")\n```\n\n### 2. internal/rpc/protocol.go\nUpdate SearchArgs struct:\n```go\ntype SearchArgs struct {\n Query string\n Filter types.IssueFilter\n // Already has most fields via IssueFilter\n}\n```\n\nNote: types.IssueFilter already has these fields - just need to wire them up!\n\n### 3. cmd/bd/search.go Run function\nParse flags and populate filter:\n```go\nif priority, _ := cmd.Flags().GetInt(\"priority\"); priority \u003e= 0 {\n filter.Priority = \u0026priority\n}\nif createdAfter, _ := cmd.Flags().GetString(\"created-after\"); createdAfter != \"\" {\n t, err := time.Parse(\"2006-01-02\", createdAfter)\n if err != nil {\n FatalError(\"invalid date format for --created-after: %v\", err)\n }\n filter.CreatedAfter = \u0026t\n}\n// ... 
similar for other flags\n```\n\n## Implementation Steps\n\n1. **Check types.IssueFilter** - verify all needed fields exist\n2. **Add flags to search.go** init()\n3. **Parse flags** in Run function\n4. **Pass to SearchIssues** via filter\n5. **Test all combinations**\n\n## Testing\n```bash\n# Create test issues\nbd create \"Test P1\" -p 1\nbd create \"Test P2\" -p 2 --description \"Has description\"\n\n# Test filters\nbd search \"\" --priority 1\nbd search \"\" --priority-min 0 --priority-max 1\nbd search \"\" --empty-description\nbd search \"\" --desc-contains \"description\"\n```\n\n## Success Criteria\n- All filters work in both direct and daemon mode\n- Date parsing handles YYYY-MM-DD format\n- --json output includes filtered results\n- Help text documents all new flags","status":"closed","priority":1,"issue_type":"task","assignee":"beads/Searcher","created_at":"2025-11-21T21:07:05.496726-05:00","updated_at":"2025-12-23T13:38:28.475606-08:00","closed_at":"2025-12-23T13:38:28.475606-08:00","close_reason":"Implemented all date, priority, and content filters for bd search","dependencies":[{"issue_id":"bd-au0.5","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:05.497762-05:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-au0.5","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.657303-08:00","created_by":"daemon"}]} -{"id":"bd-au0.6","title":"Add comprehensive filters to bd export","description":"Enhance bd export with filtering options for selective exports.\n\n**Currently only has:**\n- --status\n\n**Add filters:**\n- --label, --label-any\n- --assignee\n- --type\n- --priority, --priority-min, --priority-max\n- --created-after, --created-before\n- --updated-after, --updated-before\n\n**Use case:**\n- Export only open issues: bd export --status open\n- Export high-priority bugs: bd export --type bug --priority-max 1\n- Export recent issues: bd export --created-after 2025-01-01\n\n**Files to 
modify:**\n- cmd/bd/export.go\n- Reuse filter logic from list.go","status":"open","priority":1,"issue_type":"task","created_at":"2025-11-21T21:07:19.431307-05:00","updated_at":"2025-12-23T21:22:14.757819-08:00","dependencies":[{"issue_id":"bd-au0.6","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:19.432983-05:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-au0.7","title":"Audit and standardize JSON output across all commands","description":"Ensure consistent JSON format and error handling when --json flag is used.\n\n**Scope:**\n1. Verify all commands respect --json flag\n2. Standardize success response format\n3. Standardize error response format\n4. Document JSON schemas\n\n**Commands to audit:**\n- Core CRUD: create, update, delete, show, list, search βœ“\n- Queries: ready, blocked, stale, count, stats, status\n- Deps: dep add/remove/tree/cycles\n- Labels: label commands\n- Comments: comments add/list/delete\n- Epics: epic status/close-eligible\n- Export/import: already support --json βœ“\n\n**Testing:**\n- Success cases return valid JSON\n- Error cases return valid JSON (not plain text)\n- Consistent field naming (snake_case vs camelCase)\n- Array vs object wrapping consistency","status":"open","priority":1,"issue_type":"task","created_at":"2025-11-21T21:07:35.304424-05:00","updated_at":"2025-12-23T21:22:13.69621-08:00","dependencies":[{"issue_id":"bd-au0.7","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:35.305663-05:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-au0.8","title":"Improve clean vs cleanup command naming/documentation","description":"Clarify the difference between bd clean and bd cleanup to reduce user confusion.\n\n**Current state:**\n- bd clean: Remove temporary artifacts (.beads/bd.sock, logs, etc.)\n- bd cleanup: Delete old closed issues from database\n\n**Options:**\n1. Rename for clarity:\n - bd clean β†’ bd clean-temp\n - bd cleanup β†’ bd cleanup-issues\n \n2. 
Keep names but improve help text and documentation\n\n3. Add prominent warnings in help output\n\n**Preferred approach:** Option 2 (improve documentation)\n- Update short/long descriptions in commands\n- Add examples to help text\n- Update README.md\n- Add cross-references in help output\n\n**Files to modify:**\n- cmd/bd/clean.go\n- cmd/bd/cleanup.go\n- README.md or ADVANCED.md","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-21T21:07:49.960534-05:00","updated_at":"2025-11-21T21:07:49.960534-05:00","dependencies":[{"issue_id":"bd-au0.8","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:49.962743-05:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-au0.9","title":"Review and document rarely-used commands","description":"Document use cases or consider deprecation for infrequently-used commands.\n\n**Commands to review:**\n1. bd rename-prefix - How often is this used? Document use cases\n2. bd detect-pollution - Consider integrating into bd validate\n3. 
bd migrate-hash-ids - One-time migration, keep but document as legacy\n\n**For each command:**\n- Document typical use cases\n- Add examples to help text\n- Consider if it should be a subcommand instead\n- Add deprecation warning if appropriate\n\n**Not changing:**\n- duplicates βœ“ (useful for data quality)\n- repair-deps βœ“ (useful for fixing broken refs)\n- restore βœ“ (critical for compacted issues)\n- compact βœ“ (performance feature)\n\n**Deliverable:**\n- Updated help text\n- Documentation in ADVANCED.md\n- Deprecation plan if needed","status":"open","priority":3,"issue_type":"task","created_at":"2025-11-21T21:08:05.588275-05:00","updated_at":"2025-11-21T21:08:05.588275-05:00","dependencies":[{"issue_id":"bd-au0.9","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:08:05.59003-05:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-awmf","title":"Merge: bd-dtl8","description":"branch: polecat/dag\ntarget: main\nsource_issue: bd-dtl8\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:47:15.147476-08:00","updated_at":"2025-12-23T21:21:57.690692-08:00","closed_at":"2025-12-23T21:21:57.690692-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-aydr","title":"Add bd reset command for clean slate restart","description":"Implement a `bd reset` command to reset beads to a clean starting state.\n\n## Context\nGitHub issue #479 - users sometimes get beads into an invalid state after updates, and there's no clean way to start fresh. 
The git backup/restore mechanism that protects against accidental deletion also makes it hard to intentionally reset.\n\n## Design\n\n### Command Interface\n```\nbd reset [--hard] [--force] [--backup] [--dry-run] [--no-init]\n```\n\n| Flag | Effect |\n|------|--------|\n| `--hard` | Also remove from git index and commit |\n| `--force` | Skip confirmation prompt |\n| `--backup` | Create `.beads-backup-{timestamp}/` first |\n| `--dry-run` | Preview what would happen |\n| `--no-init` | Don't re-initialize after clearing |\n\n### Reset Levels\n1. **Soft Reset (default)** - Kill daemons, clear .beads/, re-init. Git history unchanged.\n2. **Hard Reset (`--hard`)** - Also git rm and commit the removal, then commit fresh state.\n\n### Implementation Flow\n1. Validate .beads/ exists\n2. If not --force: show impact summary, prompt confirmation\n3. If --backup: copy .beads/ to .beads-backup-{timestamp}/\n4. Kill daemons\n5. If --hard: git rm + commit\n6. rm -rf .beads/*\n7. If not --no-init: bd init (and git add+commit if --hard)\n8. 
Print summary\n\n### Safety Mechanisms\n- Confirmation prompt (skip with --force)\n- Impact summary (issue/tombstone counts)\n- Backup option\n- Dry-run preview\n- Git dirty check warning\n\n### Code Structure\n- `cmd/bd/reset.go` - CLI command\n- `internal/reset/` - Core logic package","status":"closed","priority":2,"issue_type":"epic","created_at":"2025-12-13T08:44:01.38379+11:00","updated_at":"2025-12-13T06:24:29.561294-08:00","closed_at":"2025-12-13T10:18:19.965287+11:00"} -{"id":"bd-aydr.1","title":"Implement core reset package (internal/reset)","description":"Create the core reset logic in internal/reset/ package.\n\n## Responsibilities\n- ResetOptions struct with all flag options\n- CountImpact() - count issues/tombstones that will be deleted\n- ValidateState() - check .beads/ exists, check git dirty state\n- ExecuteReset() - main reset logic (without CLI concerns)\n- Integrate with daemon killall\n\n## Interface Design\n```go\ntype ResetOptions struct {\n Hard bool // Include git operations (git rm, commit)\n Backup bool // Create backup before reset\n DryRun bool // Preview only, don't execute\n SkipInit bool // Don't re-initialize after reset\n}\n\ntype ResetResult struct {\n IssuesDeleted int\n TombstonesDeleted int\n BackupPath string // if backup was created\n DaemonsKilled int\n}\n\ntype ImpactSummary struct {\n IssueCount int\n OpenCount int\n ClosedCount int\n TombstoneCount int\n HasUncommitted bool // git dirty state\n}\n\nfunc Reset(opts ResetOptions) (*ResetResult, error)\nfunc CountImpact() (*ImpactSummary, error)\nfunc ValidateState() error\n```\n\n## IMPORTANT: CLI vs Core Separation\n- `Force` (skip confirmation) is NOT in ResetOptions - that's a CLI concern\n- Core always executes when called; CLI decides whether to prompt first\n- Keep CLI-agnostic: no prompts, no colored output, no user interaction\n- Return errors for CLI to handle with user-friendly messages\n- Unit testable in isolation\n\n## Dependencies\n- Uses 
daemon.KillAllDaemons() from internal/daemon/\n- Calls bd init logic after reset (unless SkipInit)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:50.145364+11:00","updated_at":"2025-12-13T10:13:32.610253+11:00","closed_at":"2025-12-13T09:20:06.184893+11:00","dependencies":[{"issue_id":"bd-aydr.1","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:50.145775+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-aydr.2","title":"Implement backup functionality for reset","description":"Add backup capability that can be used by reset command.\n\n## Functionality\n- Copy .beads/ to .beads-backup-{timestamp}/\n- Timestamp format: YYYYMMDD-HHMMSS\n- Preserve file permissions\n- Return backup path for user feedback\n\n## Location\n`internal/reset/backup.go` - keep with reset package for now (YAGNI)\n\n## Interface\n```go\nfunc CreateBackup(beadsDir string) (backupPath string, err error)\n```\n\n## Notes\n- Simple recursive file copy, no compression needed\n- Error if backup dir already exists (unlikely with timestamp)\n- Backup directories SHOULD be gitignored\n- Add `.beads-backup-*/` pattern to .beads/.gitignore template in doctor package\n- Consider: ListBackups() for future `bd backup list` command (not for this PR)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:51.306103+11:00","updated_at":"2025-12-13T10:13:32.610819+11:00","closed_at":"2025-12-13T09:20:20.590488+11:00","dependencies":[{"issue_id":"bd-aydr.2","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:51.306474+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-aydr.3","title":"Add git operations for --hard reset","description":"Implement git integration for hard reset mode.\n\n## Operations Needed\n1. `git rm -rf .beads/*.jsonl` - remove data files from index\n2. `git commit -m 'beads: reset to clean state'` - commit removal\n3. 
After re-init: `git add .beads/` and commit fresh state\n\n## Edge Cases to Handle\n- Uncommitted changes in .beads/ - warn or error\n- Detached HEAD state - warn, maybe block\n- Git not initialized - skip git ops, warn\n- Git operations fail mid-way - clear error messaging\n\n## Interface\n```go\ntype GitState struct {\n IsRepo bool\n IsDirty bool // uncommitted changes in .beads/\n IsDetached bool // detached HEAD\n Branch string // current branch name\n}\n\nfunc CheckGitState(beadsDir string) (*GitState, error)\nfunc GitRemoveBeads(beadsDir string) error\nfunc GitCommitReset(message string) error\nfunc GitAddAndCommit(beadsDir, message string) error\n```\n\n## Location\n`internal/reset/git.go` - keep with reset package for now\n\nNote: Codebase has no central git package. internal/compact/git.go is compact-specific.\nFuture refactoring could extract shared git utilities, but YAGNI for now.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:52.798312+11:00","updated_at":"2025-12-13T10:13:32.611131+11:00","closed_at":"2025-12-13T09:17:40.785927+11:00","dependencies":[{"issue_id":"bd-aydr.3","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:52.798715+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-aydr.4","title":"Implement CLI command (cmd/bd/reset.go)","description":"Wire up the reset command with Cobra CLI.\n\n## Responsibilities\n- Define command and all flags\n- User confirmation prompt (unless --force)\n- Display impact summary before confirmation\n- Colored output and progress indicators\n- Call core reset package\n- Handle errors with user-friendly messages\n- Register command with rootCmd in init()\n\n## Flags\n```go\n--hard bool \"Also remove from git and commit\"\n--force bool \"Skip confirmation prompt\"\n--backup bool \"Create backup before reset\"\n--dry-run bool \"Preview what would happen\"\n--skip-init bool \"Do not re-initialize after reset\"\n--verbose bool \"Show detailed 
progress output\"\n```\n\n## Output Format\n```\n⚠️ This will reset beads to a clean state.\n\nWill be deleted:\n β€’ 47 issues (23 open, 24 closed)\n β€’ 12 tombstones\n\nContinue? [y/N] y\n\nβ†’ Stopping daemons... βœ“\nβ†’ Removing .beads/... βœ“\nβ†’ Initializing fresh... βœ“\n\nβœ“ Reset complete. Run 'bd onboard' to set up hooks.\n```\n\n## Implementation Notes\n- Confirmation logic lives HERE, not in core package\n- Use color package (github.com/fatih/color) for output\n- Follow patterns from other commands (init.go, doctor.go)\n- Add to rootCmd in init() function\n\n## File Location\n`cmd/bd/reset.go`","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:54.318854+11:00","updated_at":"2025-12-13T10:13:32.611434+11:00","closed_at":"2025-12-13T09:59:41.72638+11:00","dependencies":[{"issue_id":"bd-aydr.4","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:54.319237+11:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-aydr.4","depends_on_id":"bd-aydr.1","type":"blocks","created_at":"2025-12-13T08:45:09.762138+11:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-aydr.4","depends_on_id":"bd-aydr.2","type":"blocks","created_at":"2025-12-13T08:45:09.817854+11:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-aydr.4","depends_on_id":"bd-aydr.3","type":"blocks","created_at":"2025-12-13T08:45:09.883658+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-aydr.5","title":"Enhance bd doctor to suggest reset for broken states","description":"Update bd doctor to detect severely broken states and suggest reset.\n\n## Detection Criteria\nSuggest reset when:\n- Multiple unfixable errors detected\n- Corrupted JSONL that can't be repaired\n- Schema version mismatch that can't be migrated\n- Daemon state inconsistent and unkillable\n\n## Implementation\nAdd to doctor's check/fix flow:\n```go\nif unfixableErrors \u003e threshold {\n suggest('State may be too broken to fix. 
Consider: bd reset')\n}\n```\n\n## Output Example\n```\nβœ— Found 5 unfixable errors\n \n Your beads state may be too corrupted to repair.\n Consider running 'bd reset' to start fresh.\n (Use 'bd reset --backup' to save current state first)\n```\n\n## Notes\n- Don't auto-run reset, just suggest\n- This is lower priority, can be done in parallel with main work","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-13T08:44:55.591986+11:00","updated_at":"2025-12-13T06:24:29.561624-08:00","closed_at":"2025-12-13T10:17:23.4522+11:00","dependencies":[{"issue_id":"bd-aydr.5","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:55.59239+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-aydr.6","title":"Add unit tests for reset package","description":"Comprehensive unit tests for internal/reset package.\n\n## Test Cases\n\n### ValidateState tests\n- .beads/ exists β†’ success\n- .beads/ missing β†’ appropriate error\n- git dirty state detection\n\n### CountImpact tests \n- Empty .beads/ β†’ zero counts\n- With issues β†’ correct count (open vs closed)\n- With tombstones β†’ correct count\n- Returns HasUncommitted correctly\n\n### Backup tests\n- Creates backup with correct timestamp format\n- Preserves all files and permissions\n- Returns correct path\n- Handles missing .beads/ gracefully\n- Errors on pre-existing backup dir\n\n### Git operation tests\n- CheckGitState detects dirty, detached, not-a-repo\n- GitRemoveBeads removes correct files\n- GitCommitReset creates commit with message\n- Operations skip gracefully when not in git repo\n\n### Reset tests (with mocks/temp dirs)\n- Soft reset removes files, calls init\n- Hard reset includes git operations\n- Dry run doesn't modify anything\n- SkipInit flag prevents re-initialization\n- Daemon killall is called\n- Backup is created when requested\n\n## Approach\n- Can start with interface definitions (TDD style)\n- Use testify for assertions\n- Create temp directories 
for isolation\n- Mock git operations where needed\n- Test completion depends on implementation tasks\n\n## File Location\n`internal/reset/reset_test.go`\n`internal/reset/backup_test.go`\n`internal/reset/git_test.go`","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:57.01739+11:00","updated_at":"2025-12-13T10:13:32.611698+11:00","closed_at":"2025-12-13T09:59:20.820314+11:00","dependencies":[{"issue_id":"bd-aydr.6","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:57.017813+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-aydr.7","title":"Add integration tests for bd reset command","description":"End-to-end integration tests for the reset command.\n\n## Test Scenarios\n\n### Basic reset\n1. Init beads, create some issues\n2. Run bd reset --force\n3. Verify .beads/ is fresh, issues gone\n\n### Hard reset\n1. Init beads, create issues, commit\n2. Run bd reset --hard --force \n3. Verify git history has reset commits\n\n### Backup functionality\n1. Init beads, create issues\n2. Run bd reset --backup --force\n3. Verify backup exists with correct contents\n4. Verify main .beads/ is reset\n\n### Dry run\n1. Init beads, create issues\n2. Run bd reset --dry-run\n3. Verify nothing changed\n\n### Confirmation prompt\n1. Init beads\n2. Run bd reset (no --force)\n3. Verify prompts for confirmation\n4. 
Test both y and n responses\n\n## Location\ntests/integration/reset_test.go or similar","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:58.479282+11:00","updated_at":"2025-12-13T06:24:29.561908-08:00","closed_at":"2025-12-13T10:15:59.221637+11:00","dependencies":[{"issue_id":"bd-aydr.7","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:58.479686+11:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-aydr.7","depends_on_id":"bd-aydr.4","type":"blocks","created_at":"2025-12-13T08:45:11.15972+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-aydr.8","title":"Respond to GitHub issue #479 with solution","description":"Once bd reset is implemented and released, respond to GitHub issue #479.\n\n## Response should include\n- Announce the new bd reset command\n- Show basic usage examples\n- Link to any documentation\n- Thank the user for the feedback\n\n## Example response\n```\nThanks for raising this! We've added a `bd reset` command to handle this case.\n\nUsage:\n- `bd reset` - Reset to clean state (prompts for confirmation)\n- `bd reset --backup` - Create backup first\n- `bd reset --hard` - Also clean up git history\n\nThis is available in version X.Y.Z.\n```\n\n## Notes\n- Wait until feature is merged and released\n- Consider if issue should be closed or left for user confirmation","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-13T08:45:00.112351+11:00","updated_at":"2025-12-13T06:24:29.562177-08:00","closed_at":"2025-12-13T10:18:06.646796+11:00","dependencies":[{"issue_id":"bd-aydr.8","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:45:00.112732+11:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-aydr.8","depends_on_id":"bd-aydr.7","type":"blocks","created_at":"2025-12-13T08:45:12.640243+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-aydr.9","title":"Add .beads-backup-* pattern to gitignore 
template","description":"Update the gitignore template in doctor package to include backup directories.\n\n## Change\nAdd `.beads-backup-*/` to the GitignoreTemplate in `cmd/bd/doctor/gitignore.go`\n\n## Why\nBackup directories created by `bd reset --backup` should not be committed to git.\nThey are local-only recovery tools.\n\n## File\n`cmd/bd/doctor/gitignore.go` - look for GitignoreTemplate constant","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-13T08:49:42.453483+11:00","updated_at":"2025-12-13T09:16:44.201889+11:00","closed_at":"2025-12-13T09:16:44.201889+11:00","dependencies":[{"issue_id":"bd-aydr.9","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:49:42.453886+11:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-b3og","title":"Fix TestImportBugIntegration deadlock in importer_test.go","description":"Code health review found internal/importer/importer_test.go has TestImportBugIntegration skipped with:\n\nTODO: Test hangs due to database deadlock - needs investigation\n\nThis indicates a potential unresolved concurrency issue in the importer. 
The test has been skipped for an unknown duration.\n\nFix: Investigate the deadlock, fix the underlying issue, and re-enable the test.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-16T18:17:22.103838-08:00","updated_at":"2025-12-17T23:13:40.529671-08:00","closed_at":"2025-12-17T17:25:26.645901-08:00","dependencies":[{"issue_id":"bd-b3og","depends_on_id":"bd-tggf","type":"blocks","created_at":"2025-12-16T18:19:05.740642-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-b6xo","title":"Remove or fix ClearDirtyIssues() - race condition risk (bd-52)","description":"Code health review found internal/storage/sqlite/dirty.go still exposes old ClearDirtyIssues() method (lines 103-108) which clears ALL dirty issues without checking what was actually exported.\n\nData loss risk: If export fails after some issues written to JSONL but before ClearDirtyIssues called, changes to remaining dirty issues will be lost.\n\nThe safer ClearDirtyIssuesByID() (lines 113-132) exists and clears only exported issues.\n\nFix: Either remove old method or mark it deprecated and ensure no code paths use it.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-16T18:17:20.534625-08:00","updated_at":"2025-12-17T23:13:40.530703-08:00","closed_at":"2025-12-17T18:59:18.693791-08:00","dependencies":[{"issue_id":"bd-b6xo","depends_on_id":"bd-tggf","type":"blocks","created_at":"2025-12-16T18:19:05.633738-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-bdc9","title":"Update Homebrew formula","description":"Update the Homebrew tap with new version:\n\n```bash\n./scripts/update-homebrew.sh 0.33.2\n```\n\nThis script waits for GitHub Actions to complete (~5 min), then updates the formula with new SHA256 hashes.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.762399-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Formula shows stable 
0.33.2","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-bgm","title":"Fix unparam unused parameter in cmd/bd/doctor.go:1879","description":"Linting issue: checkGitHooks - path is unused (unparam) at cmd/bd/doctor.go:1879:20. Error: func checkGitHooks(path string) doctorCheck {","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:25.270293252-07:00","updated_at":"2025-12-17T23:13:40.532991-08:00","closed_at":"2025-12-17T16:46:11.026693-08:00"} -{"id":"bd-bgr","title":"Test stdin 2","description":"Description from stdin test\n","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-17T17:28:05.41434-08:00","updated_at":"2025-12-17T17:28:33.833288-08:00","closed_at":"2025-12-17T17:28:33.833288-08:00"} -{"id":"bd-bha9","title":"Add missing updated_at index on issues table","description":"GetStaleIssues queries filter by updated_at but there's no index on this column.\n\n**Current query (ready.go:253-254):**\n```sql\nWHERE status != 'closed'\n AND datetime(updated_at) \u003c datetime('now', '-' || ? || ' days')\n```\n\n**Problem:** Full table scan when filtering stale issues.\n\n**Solution:** Add migration to create:\n```sql\nCREATE INDEX IF NOT EXISTS idx_issues_updated_at ON issues(updated_at);\n```\n\n**Note:** The datetime() function wrapper may prevent index usage. 
Consider also storing updated_at as INTEGER (unix timestamp) for better index efficiency, or test if SQLite can use the index despite the function.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T22:58:49.166051-08:00","updated_at":"2025-12-22T23:15:13.837078-08:00","closed_at":"2025-12-22T23:15:13.837078-08:00","close_reason":"Implemented in migration 026_additional_indexes.go","dependencies":[{"issue_id":"bd-bha9","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:49.166949-08:00","created_by":"daemon"}]} -{"id":"bd-bhg7","title":"Merge: bd-io8c","description":"branch: polecat/Syncer\ntarget: main\nsource_issue: bd-io8c\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:46:46.954667-08:00","updated_at":"2025-12-23T19:12:08.34433-08:00","closed_at":"2025-12-23T19:12:08.34433-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-bijf","title":"Merge: bd-l13p","description":"branch: polecat/nux\ntarget: main\nsource_issue: bd-l13p\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T16:41:32.467246-08:00","updated_at":"2025-12-23T19:12:08.348252-08:00","closed_at":"2025-12-23T19:12:08.348252-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-bivq","title":"Merge: bd-9usz","description":"branch: polecat/slit\ntarget: main\nsource_issue: bd-9usz\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:42:19.995419-08:00","updated_at":"2025-12-23T21:21:57.700579-08:00","closed_at":"2025-12-23T21:21:57.700579-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-bqcc","title":"Consolidate maintenance commands into bd doctor --fix","description":"Per rsnodgrass in GH#692:\n\u003e \"The biggest improvement to beads from an ergonomics perspective would 
be to prune down commands. We have a lot of 'maintenance' commands that probably should just be folded into 'bd doctor --fix' automatically.\"\n\nCurrent maintenance commands that could be consolidated:\n- clean - Clean up temporary git merge artifacts\n- cleanup - Delete closed issues and prune expired tombstones\n- compact - Compact old closed issues\n- detect-pollution - Detect and clean test issues\n- migrate-* (5 commands) - Various migration utilities\n- repair-deps - Fix orphaned dependency references\n- validate - Database health checks\n\nProposal:\n1. Make `bd doctor` the single entry point for health checks\n2. Add `bd doctor --fix` to auto-fix common issues\n3. Deprecate (but keep working) individual commands\n4. Add `bd doctor --all` for comprehensive maintenance\n\nThis would reduce cognitive load for users - they just need to remember 'bd doctor'.\n\nNote: This is higher impact but also higher risk - needs careful design to avoid breaking existing workflows.","status":"closed","priority":2,"issue_type":"feature","assignee":"beads/capable","created_at":"2025-12-22T14:27:31.466556-08:00","updated_at":"2025-12-23T01:33:25.732363-08:00","closed_at":"2025-12-23T01:33:25.732363-08:00","close_reason":"Merged to main"} -{"id":"bd-bs5j","title":"Release v0.33.2","description":"Version bump workflow for beads release 0.33.2.\n\n## Variables\n- `0.33.2` - The new version number (e.g., 0.31.0)\n- `2025-12-21` - Release date (YYYY-MM-DD format)\n\n## Workflow Steps\n1. Kill running daemons\n2. Run tests and linting\n3. Bump version in all files (10 files total)\n4. Update cmd/bd/info.go with release notes\n5. Commit and push version bump\n6. Create and push git tag\n7. Update Homebrew formula\n8. Upgrade local Homebrew installation\n9. 
Verify installation\n\n## Files Updated by bump-version.sh\n- cmd/bd/version.go\n- .claude-plugin/plugin.json\n- .claude-plugin/marketplace.json\n- integrations/beads-mcp/pyproject.toml\n- integrations/beads-mcp/src/beads_mcp/__init__.py\n- README.md\n- npm-package/package.json\n- cmd/bd/templates/hooks/* (4 files)\n- CHANGELOG.md\n\n## Manual Step Required\n- cmd/bd/info.go - Add versionChanges entry with release notes","status":"tombstone","priority":1,"issue_type":"epic","created_at":"2025-12-21T16:10:13.759062-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"epic","wisp":true} -{"id":"bd-bw6","title":"Fix G104 errors unhandled in internal/storage/sqlite/queries.go:1181","description":"Linting issue: G104: Errors unhandled (gosec) at internal/storage/sqlite/queries.go:1181:4. Error: rows.Close()","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:09.008444133-07:00","updated_at":"2025-12-17T23:13:40.536627-08:00","closed_at":"2025-12-17T16:46:11.029355-08:00"} -{"id":"bd-bwk2","title":"Centralize error handling patterns in storage layer","description":"80+ instances of inconsistent error handling across sqlite.go with mix of %w, %v, and no wrapping.\n\nLocation: internal/storage/sqlite/sqlite.go (throughout)\n\nProblem:\n- Some use fmt.Errorf(\"op failed: %w\", err) - correct wrapping\n- Some use fmt.Errorf(\"op failed: %v\", err) - loses error chain\n- Some return err directly - no context\n- Hard to debug production issues\n- Can't distinguish error types\n\nSolution: Create internal/storage/sqlite/errors.go:\n- Define sentinel errors (ErrNotFound, ErrInvalidID, etc.)\n- Create wrapDBError(op string, err error) helper\n- Convert sql.ErrNoRows to ErrNotFound\n- Always wrap with operation context\n\nImpact: Lost error context; inconsistent messages; hard to debug\n\nEffort: 5-7 
hours","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-16T14:51:54.974909-08:00","updated_at":"2025-12-21T21:44:37.237175-08:00","closed_at":"2025-12-21T21:44:37.237175-08:00","close_reason":"Already implemented: errors.go exists with sentinel errors (ErrNotFound, ErrInvalidID, ErrConflict, ErrCycle), wrapDBError/wrapDBErrorf helpers that convert sql.ErrNoRows to ErrNotFound, and IsNotFound/IsConflict/IsCycle checkers. 41 uses of wrapDBError, 347 uses of proper %w wrapping, 0 uses of %v. Added one minor fix to CheckpointWAL."} -{"id":"bd-bxha","title":"Default to YES for git hooks and merge driver installation","description":"Currently bd init prompts user to install git hooks and merge driver, but setup is incomplete if user declines. Change to install by default unless --skip-hooks or --skip-merge-driver flags are passed. Better safe defaults. If installation fails, warn user and suggest bd doctor --fix.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-11-21T23:16:10.172238-08:00","updated_at":"2025-12-23T04:20:51.885765-08:00","closed_at":"2025-12-23T04:20:51.885765-08:00","close_reason":"Already implemented in commits ec4117d0 and 3a36d0b9 - hooks/merge driver install by default, doctor runs at end of init","dependencies":[{"issue_id":"bd-bxha","depends_on_id":"bd-tbz3","type":"parent-child","created_at":"2025-11-21T23:16:10.173034-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-by0d","title":"Work on beads-ldv: Fix bd graph crashes with nil pointer ...","description":"Work on beads-ldv: Fix bd graph crashes with nil pointer dereference (GH#657). Fix nil pointer in computeDependencyCounts at graph.go:428. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:55:27.829359-08:00","updated_at":"2025-12-19T23:28:32.428314-08:00","closed_at":"2025-12-19T23:20:49.038441-08:00","close_reason":"Fixed nil pointer in computeDependencyCounts by passing subgraph instead of nil"} -{"id":"bd-c2xs","title":"Exclude pinned issues from bd blocked","description":"Update bd blocked to exclude pinned issues. Pinned issues are context markers and should not appear in the blocked work list.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:44.684242-08:00","updated_at":"2025-12-21T11:29:42.179389-08:00","closed_at":"2025-12-21T11:29:42.179389-08:00","close_reason":"Already implemented in SQLite (line 299) and memory (line 1084). Pinned issues excluded from blocked.","dependencies":[{"issue_id":"bd-c2xs","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.521323-08:00","created_by":"daemon"},{"issue_id":"bd-c2xs","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.736681-08:00","created_by":"daemon"}]} -{"id":"bd-c3u","title":"Review PR #512: clarify bd ready docs","description":"Review and merge PR #512 from aspiers. This PR clarifies what bd ready does after git pull in README.md. Simple 1-line change. 
URL: https://github.com/anthropics/beads/pull/512","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:15:13.405161+11:00","updated_at":"2025-12-13T07:07:29.641265-08:00","closed_at":"2025-12-13T07:07:29.641265-08:00"} -{"id":"bd-c7y5","title":"Optimization: Tombstones still synced, adding overhead","description":"Tombstoned (deleted) issues are still processed during sync, adding overhead.\n\n## Evidence\n```\nImport complete: 0 created, 0 updated, 407 unchanged, 100 skipped\n```\nThose 100 skipped are tombstones - they're read from JSONL, parsed, then filtered out.\n\n## Current State (Beads repo)\n- 408 total issues\n- 99 tombstones (24% of database)\n- Every sync reads and skips these 99 entries\n\n## Impact\n- Sync time increases with tombstone count\n- JSONL file size grows indefinitely\n- Git history accumulates tombstone churn\n\n## Proposed Solutions\n\n### 1. JSONL Compaction (`bd compact`)\nPeriodically rewrite JSONL without tombstones:\n```bash\nbd compact # Removes tombstones, rewrites issues.jsonl\n```\nTrade-off: Loses delete history, but that's in git anyway.\n\n### 2. Tombstone TTL\nAuto-remove tombstones older than N days during sync:\n```go\nif issue.Deleted \u0026\u0026 time.Since(issue.UpdatedAt) \u003e 7*24*time.Hour {\n // Skip writing to new JSONL\n}\n```\n\n### 3. Archive File\nMove old closed issues to `issues-archive.jsonl`:\n- Not synced regularly\n- Available for historical queries\n- Main JSONL stays small\n\n### 4. 
Lazy Tombstone Handling \nDon't write tombstones to JSONL at all - just remove the line:\n- Simpler, but loses cross-clone delete propagation\n- Would need different delete propagation mechanism\n\n## Recommendation\nStart with `bd compact` command - simple, explicit, user-controlled.\n\n## Related\n- gt-tnss: Analysis - Beads database size and hygiene strategy\n- gt-ox67: Maintenance - Regular cleanup of closed MR/gate beads","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-23T14:41:16.925212-08:00","updated_at":"2025-12-23T21:22:26.662604-08:00"} -{"id":"bd-c83r","title":"Prevent multiple daemons from running on the same repo","description":"Multiple bd daemons running on the same repo clone causes race conditions and data corruption risks.\n\n**Problem:**\n- Nothing prevents spawning multiple daemons for the same repository\n- Multiple daemons watching the same files can conflict during sync operations\n- Observed: 4 daemons running simultaneously caused sync race condition\n\n**Solution:**\nImplement daemon singleton enforcement per repo:\n1. Use a lock file (e.g., .beads/.daemon.lock) with PID\n2. On daemon start, check if lock exists and process is alive\n3. If stale lock (dead PID), clean up and acquire lock\n4. If active daemon exists, either:\n - Exit with message 'daemon already running (PID xxx)'\n - Or offer --replace flag to kill existing and take over\n5. 
Release lock on graceful shutdown\n\n**Edge cases to handle:**\n- Daemon crashes without releasing lock (stale PID detection)\n- Multiple repos in same directory tree (each repo gets own lock)\n- Race between two daemons starting simultaneously (atomic lock acquisition)","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-13T06:37:23.377131-08:00","updated_at":"2025-12-16T01:14:49.50347-08:00","closed_at":"2025-12-14T17:34:14.990077-08:00"} -{"id":"bd-cb64c226.1","title":"Performance Validation","description":"Confirm no performance regression from cache removal","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T10:50:15.126019-07:00","updated_at":"2025-12-17T23:18:29.108883-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.108883-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cb64c226.10","title":"Delete server_cache_storage.go","description":"Remove the entire cache implementation file (~286 lines)","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:38.729299-07:00","updated_at":"2025-12-17T23:18:29.110716-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.110716-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cb64c226.12","title":"Remove Storage Cache from Server Struct","description":"Eliminate cache fields and use s.storage directly","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:25.474412-07:00","updated_at":"2025-12-17T23:18:29.111039-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.111039-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cb64c226.13","title":"Audit Current Cache Usage","description":"**Summary:** Comprehensive audit of storage cache usage revealed minimal dependency across server components, with most calls following a consistent pattern. 
Investigation confirmed cache was largely unnecessary in single-repository daemon architecture.\n\n**Key Decisions:** \n- Remove all cache-related environment variables\n- Delete server struct cache management fields\n- Eliminate cache-specific test files\n- Deprecate req.Cwd routing logic\n\n**Resolution:** Cache system will be completely removed, simplifying server storage access and reducing unnecessary complexity with negligible performance impact.","notes":"AUDIT COMPLETE\n\ngetStorageForRequest() callers: 17 production + 11 test\n- server_issues_epics.go: 8 calls\n- server_labels_deps_comments.go: 4 calls \n- server_export_import_auto.go: 2 calls\n- server_compact.go: 2 calls\n- server_routing_validation_diagnostics.go: 1 call\n- server_eviction_test.go: 11 calls (DELETE entire file)\n\nPattern everywhere: store, err := s.getStorageForRequest(req) β†’ store := s.storage\n\nreq.Cwd usage: Only for multi-repo routing. Local daemon always serves 1 repo, so routing is unused.\n\nMCP server: Uses separate daemons per repo (no req.Cwd usage found). NOT affected by cache removal.\n\nCache env vars to deprecate:\n- BEADS_DAEMON_MAX_CACHE_SIZE (used in server_core.go:63)\n- BEADS_DAEMON_CACHE_TTL (used in server_core.go:72)\n- BEADS_DAEMON_MEMORY_THRESHOLD_MB (used in server_cache_storage.go:47)\n\nServer struct fields to remove:\n- storageCache, cacheMu, maxCacheSize, cacheTTL, cleanupTicker, cacheHits, cacheMisses\n\nTests to delete:\n- server_eviction_test.go (entire file - 9 tests)\n- limits_test.go cache assertions\n\nSpecial consideration: ValidateDatabase endpoint uses findDatabaseForCwd() outside cache. 
Verify if used, then remove or inline.\n\nSafe to proceed with removal - cache always had 1 entry in local daemon model.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:19.3723-07:00","updated_at":"2025-12-17T23:18:29.111369-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.111369-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cb64c226.6","title":"Verify MCP Server Compatibility","description":"Ensure MCP server works with cache-free daemon","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:56:03.241615-07:00","updated_at":"2025-12-17T23:18:29.109644-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.109644-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cb64c226.8","title":"Update Metrics and Health Endpoints","description":"Remove cache-related metrics from health/metrics endpoints","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:49.212047-07:00","updated_at":"2025-12-17T23:18:29.110022-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.110022-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cb64c226.9","title":"Remove Cache-Related Tests","description":"Delete or update tests that assume multi-repo caching","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:44.511897-07:00","updated_at":"2025-12-17T23:18:29.110385-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.110385-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cbed9619.1","title":"Fix multi-round convergence for N-way collisions","description":"**Summary:** Multi-round collision resolution was identified as a critical issue preventing complete synchronization across distributed clones. 
The problem stemmed from incomplete final pulls that didn't fully propagate all changes between system instances.\n\n**Key Decisions:**\n- Implement multi-round sync mechanism\n- Ensure bounded convergence (≀N rounds)\n- Guarantee idempotent import without data loss\n\n**Resolution:** Developed a sync strategy that ensures all clones converge to the same complete set of issues, unblocking the bd-cbed9619 epic and improving distributed system reliability.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T21:22:21.486109-07:00","updated_at":"2025-12-17T23:18:29.111713-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.111713-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cbed9619.2","title":"Implement content-first idempotent import","description":"**Summary:** Refactored issue import to be content-first and idempotent, ensuring consistent data synchronization across multiple import rounds by prioritizing content hash matching over ID-based updates.\n\n**Key Decisions:** \n- Implement content hash as primary matching mechanism\n- Create global collision resolution algorithm\n- Ensure importing same data multiple times results in no-op\n\n**Resolution:** The new import strategy guarantees predictable convergence across distributed systems, solving rename detection and collision handling while maintaining data integrity during multi-stage 
imports.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T18:38:25.671302-07:00","updated_at":"2025-12-17T23:18:29.112032-08:00","close_reason":"Closed","dependencies":[{"issue_id":"bd-cbed9619.2","depends_on_id":"bd-cbed9619.5","type":"blocks","created_at":"2025-10-28T18:39:28.360026-07:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-cbed9619.2","depends_on_id":"bd-cbed9619.4","type":"blocks","created_at":"2025-10-28T18:39:28.383624-07:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-cbed9619.2","depends_on_id":"bd-cbed9619.3","type":"blocks","created_at":"2025-10-28T18:39:28.407157-07:00","created_by":"daemon","metadata":"{}"}],"deleted_at":"2025-12-17T23:18:29.112032-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cbed9619.3","title":"Implement global N-way collision resolution algorithm","description":"**Summary:** Replaced pairwise collision resolution with a global N-way algorithm that deterministically resolves issue ID conflicts across multiple clones. 
The new approach groups collisions, deduplicates by content hash, and assigns sequential IDs to ensure consistent synchronization.\n\n**Key Decisions:**\n- Use content hash for global, stable sorting\n- Group collisions by base ID\n- Assign sequential IDs based on sorted unique versions\n- Eliminate order-dependent remapping logic\n\n**Resolution:** Implemented ResolveNWayCollisions function that guarantees deterministic issue ID assignment across multiple synchronization scenarios, solving the core challenge of maintaining consistency in distributed systems with potential conflicts.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T18:37:42.85616-07:00","updated_at":"2025-12-17T23:18:29.112335-08:00","close_reason":"Closed","dependencies":[{"issue_id":"bd-cbed9619.3","depends_on_id":"bd-cbed9619.5","type":"blocks","created_at":"2025-10-28T18:39:28.30886-07:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-cbed9619.3","depends_on_id":"bd-cbed9619.4","type":"blocks","created_at":"2025-10-28T18:39:28.336312-07:00","created_by":"daemon","metadata":"{}"}],"deleted_at":"2025-12-17T23:18:29.112335-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cbed9619.4","title":"Make DetectCollisions read-only (separate detection from modification)","description":"**Summary:** The project restructured the collision detection process in the database to separate read-only detection from state modification, eliminating race conditions and improving system reliability. 
This was achieved by introducing a two-phase approach: first detecting potential collisions, then applying resolution separately.\n\n**Key Decisions:**\n- Create read-only DetectCollisions method\n- Add RenameDetail to track potential issue renames\n- Implement atomic ApplyCollisionResolution function\n- Separate detection logic from database modification\n\n**Resolution:** The refactoring creates a more robust, composable collision handling mechanism that prevents partial failures and maintains database consistency during complex issue import scenarios.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T18:37:09.652326-07:00","updated_at":"2025-12-17T23:18:29.112637-08:00","close_reason":"Closed","dependencies":[{"issue_id":"bd-cbed9619.4","depends_on_id":"bd-cbed9619.5","type":"blocks","created_at":"2025-10-28T18:39:28.285653-07:00","created_by":"daemon","metadata":"{}"}],"deleted_at":"2025-12-17T23:18:29.112637-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cbed9619.5","title":"Add content-addressable identity to Issue type","description":"**Summary:** Added content-addressable identity to Issue type by implementing a ContentHash field that generates a unique SHA256 fingerprint based on semantic issue content. 
This resolves issue identification challenges when multiple system instances create issues with identical IDs but different contents.\n\n**Key Decisions:**\n- Use SHA256 for content hashing\n- Hash excludes ID and timestamps\n- Compute hash automatically at creation/import time\n- Add database column for hash storage\n\n**Resolution:** Successfully implemented a deterministic content hashing mechanism that enables reliable issue identification across distributed systems, improving data integrity and collision detection.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T18:36:44.914967-07:00","updated_at":"2025-12-17T23:18:29.112933-08:00","close_reason":"Closed","deleted_at":"2025-12-17T23:18:29.112933-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-cils","title":"Work on beads-2nh: Fix gt spawn --issue to find issues in...","description":"Work on beads-2nh: Fix gt spawn --issue to find issues in rig's beads database. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:55:47.573854-08:00","updated_at":"2025-12-20T00:49:51.927884-08:00","closed_at":"2025-12-19T23:28:28.605343-08:00","close_reason":"Completed - fixed spawn beads path in gastown"} -{"id":"bd-cnwx","title":"Refactor mol.go: split 1200+ line file into subcommands","description":"## Problem\n\ncmd/bd/mol.go has grown to 1200+ lines with all molecule subcommands in one file.\n\n## Current State\n- mol.go: 1218 lines (bond, spawn, run, distill, catalog, show, etc.)\n- Hard to navigate, review, and maintain\n\n## Proposed Structure\nSplit into separate files by subcommand:\n```\ncmd/bd/\nβ”œβ”€β”€ mol.go # Root command, shared helpers\nβ”œβ”€β”€ mol_bond.go # bd mol bond\nβ”œβ”€β”€ mol_spawn.go # bd mol spawn \nβ”œβ”€β”€ mol_run.go # bd mol run\nβ”œβ”€β”€ mol_distill.go # bd mol distill\nβ”œβ”€β”€ mol_catalog.go # bd mol catalog\nβ”œβ”€β”€ mol_show.go # bd mol show\n└── mol_test.go # Tests (already separate)\n```\n\n## Benefits\n- Easier code review\n- Better separation of concerns\n- Simpler navigation\n- Each subcommand self-contained","status":"closed","priority":2,"issue_type":"chore","created_at":"2025-12-21T11:30:58.832192-08:00","updated_at":"2025-12-21T11:42:49.390824-08:00","closed_at":"2025-12-21T11:42:49.390824-08:00","close_reason":"Refactored mol.go into 7 files. Build and tests pass."} -{"id":"bd-co29","title":"Merge: bd-n386","description":"branch: polecat/immortan\ntarget: main\nsource_issue: bd-n386\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:41:45.644113-08:00","updated_at":"2025-12-23T21:21:57.70152-08:00","closed_at":"2025-12-23T21:21:57.70152-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-crgr","title":"GH#517: Claude sets priority wrong on new install","description":"Claude uses 'medium/high/low' for priority instead of P0-P4. 
Update bd prime/onboard output to be clearer about priority syntax. See GitHub issue #517.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:34.803084-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-csnr","title":"activity --follow: Silent error handling","description":"In activity.go:175-179, when the daemon is down or errors occur during polling in --follow mode, errors are silently ignored:\n\n```go\nnewEvents, err := fetchMutations(lastPoll)\nif err != nil {\n // Daemon might be down, continue trying\n continue\n}\n```\n\nThis means:\n- Users won't know if the daemon is unreachable\n- Could appear frozen when actually failing\n- No indication of lost events\n\nShould at least show a warning after N consecutive failures, or show '...' indicator to show polling status.\n\nDiscovered during code review of bd-xo1o implementation.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T04:06:18.590743-08:00","updated_at":"2025-12-23T04:16:04.64978-08:00","closed_at":"2025-12-23T04:16:04.64978-08:00","close_reason":"Added error tracking with warning after 5 consecutive failures, reconnection message on recovery, rate-limited warnings (max once per 30s)"} -{"id":"bd-czss","title":"Update CHANGELOG.md with release notes","description":"Add meaningful release notes to CHANGELOG.md describing what changed in 
{{version}}","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:55:59.909641-08:00","updated_at":"2025-12-20T17:59:26.262153-08:00","closed_at":"2025-12-20T01:23:51.407302-08:00","dependencies":[{"issue_id":"bd-czss","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:14.862724-08:00","created_by":"daemon"},{"issue_id":"bd-czss","depends_on_id":"bd-qkw9","type":"blocks","created_at":"2025-12-19T22:56:23.145894-08:00","created_by":"daemon"}]} -{"id":"bd-d148","title":"GH#483: Pre-commit hook fails unnecessarily when .beads removed","description":"Pre-commit hook fails on bd sync when .beads directory exists but user is on branch without beads. Should exit gracefully. See GitHub issue #483.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:40.049785-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-d28c","title":"Test createTombstone and deleteIssue wrappers","description":"Add tests for the createTombstone and deleteIssue wrapper functions in cmd/bd/delete.go.\n\n## Functions under test\n- createTombstone (cmd/bd/delete.go:335) - Wrapper around SQLite CreateTombstone\n- deleteIssue (cmd/bd/delete.go:349) - Wrapper around SQLite DeleteIssue\n\n## Test scenarios for createTombstone\n1. Successful tombstone creation\n2. Tombstone with reason and actor tracking\n3. Error when issue doesn't exist\n4. Verify tombstone status set correctly\n5. Verify audit trail recorded\n6. Rollback/error handling\n\n## Test scenarios for deleteIssue\n1. Successful issue deletion\n2. Error on non-existent issue\n3. Verify issue removed from database\n4. 
Error handling when storage backend doesn't support delete\n\n## Coverage target\nCurrent: 0%\nTarget: \u003e85%\n\n## Related\n- Parent epic: bd-kyll\n- Original issue: bd-7z4","status":"closed","priority":1,"issue_type":"task","assignee":"beads/testcat","created_at":"2025-12-18T13:08:37.669214532-07:00","updated_at":"2025-12-23T21:44:33.169062-08:00","closed_at":"2025-12-23T21:44:33.169062-08:00","close_reason":"Tests merged from polecat/testcat branch and verified passing. 9 test cases added for createTombstone and deleteIssue wrappers.","dependencies":[{"issue_id":"bd-d28c","depends_on_id":"bd-kyll","type":"parent-child","created_at":"2025-12-18T13:08:37.70588226-07:00","created_by":"mhwilkie"}]} -{"id":"bd-d3e5","title":"Test issue 2","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-14T11:21:13.878680387-07:00","updated_at":"2025-12-14T11:21:13.878680387-07:00","closed_at":"2025-12-14T00:32:13.890274-08:00"} -{"id":"bd-d4jl","title":"Commit and push release","description":"git add -A \u0026\u0026 git commit -m 'chore: bump version to 0.32.1' \u0026\u0026 git push","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:21.928138-08:00","updated_at":"2025-12-20T21:57:12.81943-08:00","closed_at":"2025-12-20T21:57:12.81943-08:00","close_reason":"Committed and pushed to origin","dependencies":[{"issue_id":"bd-d4jl","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:21.930015-08:00","created_by":"daemon"},{"issue_id":"bd-d4jl","depends_on_id":"bd-tj00","type":"blocks","created_at":"2025-12-20T21:53:29.884457-08:00","created_by":"daemon"}]} -{"id":"bd-d73u","title":"Re: Thread Test 2","description":"Great! 
Thread is working well.","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:21:46.655093-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-d73u","depends_on_id":"bd-vpan","type":"replies-to","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} -{"id":"bd-d9mu","title":"Cross-rig external dependency support","description":"Support dependencies on issues in other rigs/repos.\n\n## Use Case\n\nGas Town issues often depend on Beads issues (and vice versa). Currently we use labels like `external:beads/bd-xxx` as documentation, but:\n- `bd blocked` doesn't recognize external deps\n- `bd ready` doesn't filter them out\n- No way to query cross-rig status\n\n## Proposed UX\n\n### Adding external deps\n```bash\n# New syntax for bd dep add\nbd dep add gt-a07f external:beads:bd-kwjh\n\n# Or maybe cleaner\nbd dep add gt-a07f --external beads:bd-kwjh\n```\n\n### Showing blocked status\n```bash\nbd blocked\n# β†’ gt-a07f blocked by external:beads:bd-kwjh (unverified)\n\n# With optional cross-rig query\nbd blocked --resolve-external\n# β†’ gt-a07f blocked by external:beads:bd-kwjh (closed) ← unblocked!\n```\n\n### Storage\nCould use:\n- Special dependency type: `type: external`\n- Label convention: `external:rig:id`\n- New field: `external_deps: [\"beads:bd-kwjh\"]`\n\n## Implementation Notes\n\nCross-rig queries would need:\n- Known rig locations (config or discovery)\n- Read-only beads access to external rigs\n- Caching to avoid repeated queries\n\nFor MVP, just recognizing external deps and marking them as 'unverified' blockers would be 
valuable.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-22T02:27:23.892706-08:00","updated_at":"2025-12-22T22:32:49.261551-08:00","closed_at":"2025-12-22T22:32:49.261551-08:00","close_reason":"Superseded: Cross-rig external dependency support was fully implemented through child issues: bd-om4a (external: prefix syntax), bd-zmmy (bd ready resolution), bd-396j (bd blocked filtering), bd-66w1 (external_projects config), bd-vks2 (dep tree display), bd-mv6h (test coverage). External deps are auto-resolved when external_projects config is set. The --resolve-external flag is not needed."} -{"id":"bd-db72","title":"Upgrade local Homebrew installation","description":"Upgrade bd via Homebrew:\n\n```bash\nbrew update\nbrew upgrade bd\n/opt/homebrew/bin/bd version # Verify shows 0.33.2\n```","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760552-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Blocked by macOS 26 CLT issue - local dev build (0.33.2) used instead","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-de6","title":"Fix FindBeadsDir to prioritize main repo .beads for worktrees","description":"The FindBeadsDir function should prioritize finding .beads in the main repository root when accessed from a worktree, rather than finding worktree-local .beads directories. 
This ensures proper sharing of the database across all worktrees.","status":"in_progress","priority":2,"issue_type":"bug","assignee":"beads/golf","created_at":"2025-12-07T16:48:36.883117467-07:00","updated_at":"2025-12-23T22:29:35.76825-08:00"} -{"id":"bd-dhza","title":"Reduce global state in cmd/bd/main.go (25+ variables)","description":"Code health review found main.go has 25+ global variables (lines 57-112):\n\n- dbPath, actor, store, jsonOutput, daemonClient, noDaemon\n- rootCtx, rootCancel, autoFlushEnabled\n- isDirty (marked 'USED BY LEGACY CODE')\n- needsFullExport (marked 'USED BY LEGACY CODE')\n- flushTimer (marked 'DEPRECATED')\n- flushMutex, storeMutex, storeActive\n- flushFailureCount, lastFlushError, flushManager\n- skipFinalFlush, autoImportEnabled\n- versionUpgradeDetected, previousVersion, upgradeAcknowledged\n\nImpact:\n- Hard to test individual commands\n- Race conditions possible\n- State leakage between commands\n\nFix: Move toward dependency injection. Remove deprecated variables. Consider cmd/bd/internal package.","notes":"Investigation found flushTimer, isDirty, needsFullExport are actively used by both legacy autoflush.go and new flush_manager.go. Requires coordinated refactor to migrate all callers to FlushManager first. 
Estimated: significant effort.","status":"in_progress","priority":3,"issue_type":"task","assignee":"beads/kilo","created_at":"2025-12-16T18:17:29.643293-08:00","updated_at":"2025-12-23T22:29:35.811067-08:00"} -{"id":"bd-dju6","title":"Commit and push release","description":"git add -A \u0026\u0026 git commit \u0026\u0026 git push to trigger CI","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.065863-08:00","updated_at":"2025-12-21T13:53:49.957804-08:00","deleted_at":"2025-12-21T13:53:49.957804-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task","wisp":true} -{"id":"bd-dp4w","title":"Test message","description":"This is a test message body","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:11:58.467876-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} -{"id":"bd-dqck","title":"Version Bump: test-squash","description":"Release checklist for version test-squash. 
This molecule ensures all release steps are completed properly.","status":"tombstone","priority":1,"issue_type":"epic","created_at":"2025-12-21T13:52:33.065408-08:00","updated_at":"2025-12-21T13:53:41.946036-08:00","deleted_at":"2025-12-21T13:53:41.946036-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"epic","wisp":true} -{"id":"bd-dqu8","title":"Restart running daemons","description":"Kill and restart any running bd daemons to pick up new version: pkill -f 'bd daemon' \u0026\u0026 bd daemon --start","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T00:32:26.559311-08:00","updated_at":"2025-12-20T00:32:59.123766-08:00","closed_at":"2025-12-20T00:32:59.123766-08:00","close_reason":"Daemons restarted - now running 0.30.7","dependencies":[{"issue_id":"bd-dqu8","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-20T00:32:39.36213-08:00","created_by":"daemon"},{"issue_id":"bd-dqu8","depends_on_id":"bd-fgw3","type":"blocks","created_at":"2025-12-20T00:32:39.427846-08:00","created_by":"daemon"}]} -{"id":"bd-drxs","title":"Make merge requests ephemeral wisps instead of permanent issues","description":"## Problem\n\nMerge requests (MRs) are currently created as regular beads issues (type: merge-request). This means they:\n- Sync to JSONL and propagate via git\n- Accumulate in the issue database indefinitely\n- Clutter `bd list` output with closed MRs\n- Create permanent records for inherently transient artifacts\n\nMRs are process artifacts, not work products. They exist briefly while code awaits merge, then their purpose is fulfilled. The git merge commit and GitHub PR (if applicable) provide the permanent audit trail - the beads MR is redundant.\n\n## Proposed Solution\n\nMake MRs ephemeral wisps that exist only during the merge process:\n\n1. **Create MRs as wisps**: When a polecat completes work and requests merge, create the MR in `.beads-wisp/` instead of `.beads/`\n\n2. 
**Refinery visibility**: This works because all clones within a rig share the same database:\n ```\n beads/ ← Rig root\n β”œβ”€β”€ .beads/ ← Permanent issues (synced to JSONL)\n β”œβ”€β”€ .beads-wisp/ ← Ephemeral wisps (NOT synced)\n β”œβ”€β”€ crew/dave/ ← Uses rig's shared DB\n β”œβ”€β”€ polecats/*/ ← Uses rig's shared DB\n └── refinery/ ← Uses rig's shared DB\n ```\n The refinery can see wisp MRs immediately - same SQLite database.\n\n3. **On merge completion**: Burn the wisp (delete without digest). The git merge commit IS the permanent record. No digest needed since:\n - Digest wouldn't be smaller than the MR itself (~200-300 bytes either way)\n - Git history provides complete audit trail\n - GitHub PR (if used) provides discussion/approval record\n\n4. **On merge rejection/abandonment**: Burn the wisp. Optionally notify the source polecat via mail.\n\n## Benefits\n\n- **Clean JSONL**: MRs never pollute the permanent issue history\n- **No accumulation**: Wisps are burned on completion, no cleanup needed\n- **Correct semantics**: Wisps are for \"operational ephemera\" - MRs fit perfectly\n- **Reduced sync churn**: Fewer JSONL updates, faster `bd sync`\n- **Cleaner queries**: `bd list` shows work items, not process artifacts\n\n## Implementation Notes\n\n### Where MRs are created\n\nCurrently MRs are created by the witness or polecat when work is ready for merge. This code needs to:\n- Set `wisp: true` on the MR issue\n- Or use a dedicated wisp creation path\n\n### Refinery changes\n\nThe refinery queries for pending MRs to process. 
It needs to:\n- Query wisp storage as well as (or instead of) permanent storage\n- Use `bd mol burn` or equivalent to delete processed MRs\n\n### What about cross-rig MRs?\n\nIf an MR needs to be visible outside the rig (e.g., external collaborators):\n- They would see the GitHub PR anyway\n- Or we could create a permanent \"merge completed\" notification issue\n- But this is likely unnecessary - MRs are internal coordination\n\n### Migration\n\nExisting MRs in permanent storage:\n- Can be cleaned up with `bd cleanup` or manual deletion\n- Or left to age out naturally\n- No migration of open MRs needed (they'll complete under old system\n\n## Alternatives Considered\n\n1. **Auto-cleanup of closed MRs**: Keep MRs as permanent issues but auto-delete after 24h. Simpler but still creates sync churn and temporary JSONL pollution.\n\n2. **MRs as mail only**: Polecat sends mail to refinery with merge details, no MR issue at all. Loses queryability (bd-801b [P2] [merge-request] closed - Merge: bd-bqcc\nbd-pvu0 [P2] [merge-request] closed - Merge: bd-4opy\nbd-i0rx [P2] [merge-request] closed - Merge: bd-ao0s\nbd-u0sb [P2] [merge-request] closed - Merge: bd-uqfn\nbd-8e0q [P2] [merge-request] closed - Merge: beads-ocs\nbd-hvng [P2] [merge-request] closed - Merge: bd-w193\nbd-4sfl [P2] [merge-request] closed - Merge: bd-14ie\nbd-sumr [P2] [merge-request] closed - Merge: bd-t4sb\nbd-3x9o [P2] [merge-request] closed - Merge: bd-by0d\nbd-whgv [P2] [merge-request] closed - Merge: bd-401h\nbd-f3ll [P2] [merge-request] closed - Merge: bd-ot0w\nbd-fmdy [P3] [merge-request] closed - Merge: bd-kzda).\n\n3. **Separate merge queue**: Refinery maintains internal state for pending merges, not in beads at all. 
Clean but requires new infrastructure.\n\nWisps are the cleanest solution - they already exist, have the right semantics, and require minimal changes.\n\n## Related\n\n- Wisp architecture: \n- Current MR creation: witness/refinery code paths\n- bd-pvu0, bd-801b: Example MRs currently in permanent storage\nEOF\n)","status":"tombstone","priority":0,"issue_type":"feature","created_at":"2025-12-23T01:39:25.4918-08:00","updated_at":"2025-12-23T01:58:23.550668-08:00","deleted_at":"2025-12-23T01:58:23.550668-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"feature"} -{"id":"bd-dsdh","title":"Document sync.branch 'always dirty' working tree behavior","description":"## Context\n\nWhen sync.branch is configured, the .beads/issues.jsonl file in main's working tree is ALWAYS dirty. This is by design:\n\n1. bd sync commits to beads-sync branch (via worktree)\n2. bd sync copies JSONL to main's working tree (so CLI commands work)\n3. This copy is NOT committed to main (to reduce commit noise)\n\nContributors who watch main branch history pushed for sync.branch to avoid constant beads commit noise. But users need to understand the trade-off.\n\n## Documentation Needed\n\nUpdate README.md sync.branch section with:\n\n1. **Clear explanation** of why .beads/ is always dirty on main\n2. **\"Be Zen about it\"** - this is expected, not a bug\n3. **Workflow options:**\n - Accept dirty state, use `bd sync --merge` periodically to snapshot to main\n - Or disable sync.branch if clean working tree is more important\n4. **Shell alias tip** to hide beads from git status:\n ```bash\n alias gs='git status -- \":!.beads/\"'\n ```\n5. 
**When to merge**: releases, milestones, or periodic snapshots\n\n## Related\n\n- bd-7b7h: Fix that allows bd sync --merge to work with dirty .beads/\n- bd-elqd: Investigation that identified this as expected behavior","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T23:16:12.253559-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-dsp","title":"Test stdin body-file","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-17T17:27:32.098806-08:00","updated_at":"2025-12-17T17:28:33.832749-08:00","closed_at":"2025-12-17T17:28:33.832749-08:00"} -{"id":"bd-dtl8","title":"Test deleteViaDaemon RPC client integration","description":"Add comprehensive tests for the deleteViaDaemon function (cmd/bd/delete.go:21) which handles client-side RPC deletion calls.\n\n## Function under test\n- deleteViaDaemon: CLI command handler that sends delete requests to daemon via RPC\n\n## Test scenarios needed\n1. Successful deletion via daemon\n2. Cascade deletion through daemon\n3. Force deletion through daemon\n4. Dry-run mode (no actual deletion)\n5. Error handling:\n - Daemon unavailable\n - Invalid issue IDs\n - Dependency conflicts\n6. JSON output validation\n7. 
Human-readable output formatting\n\n## Coverage target\nCurrent: 0%\nTarget: \u003e80%\n\n## Related\n- Parent epic: bd-kyll\n- Original issue: bd-7z4","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-18T13:08:29.805706253-07:00","updated_at":"2025-12-23T21:22:12.35566-08:00","dependencies":[{"issue_id":"bd-dtl8","depends_on_id":"bd-kyll","type":"parent-child","created_at":"2025-12-18T13:08:29.807984381-07:00","created_by":"mhwilkie"}]} -{"id":"bd-du9h","title":"Add Validation type and validations field to Issue","description":"Add Validation struct (Validator *EntityRef, Outcome string, Timestamp time.Time, Score *float32) and Validations []Validation field to Issue. Tracks who validated/approved work completion. Core to HOP proof-of-stake concept - validators stake reputation on approvals.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T17:53:37.725701-08:00","updated_at":"2025-12-22T20:08:59.925028-08:00","closed_at":"2025-12-22T20:08:59.925028-08:00","close_reason":"Added Validation type with Validator, Outcome, Timestamp, Score fields. Added Validations []Validation to Issue struct. Included in content hash. Full test coverage.","dependencies":[{"issue_id":"bd-du9h","depends_on_id":"bd-7pwh","type":"parent-child","created_at":"2025-12-22T17:53:43.470984-08:00","created_by":"daemon"},{"issue_id":"bd-du9h","depends_on_id":"bd-nmch","type":"blocks","created_at":"2025-12-22T17:53:47.896552-08:00","created_by":"daemon"}]} -{"id":"bd-dwh","title":"Implement or remove ExpectExit/ExpectStdout verification fields","description":"The Verification struct in internal/types/workflow.go has ExpectExit and ExpectStdout fields that are never used by workflowVerifyCmd. 
Either implement the functionality or remove the dead fields.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-17T22:23:02.708627-08:00","updated_at":"2025-12-17T22:34:07.300348-08:00","closed_at":"2025-12-17T22:34:07.300348-08:00"} -{"id":"bd-dxtc","title":"Test daemon RPC delete handler","description":"Add tests for the daemon-side RPC delete handler that processes delete requests from clients.\n\n## What needs testing\n- Daemon's Delete RPC handler implementation\n- Processing delete requests from RPC clients\n- Cascade deletion at daemon level\n- Force deletion at daemon level\n- Dry-run mode validation\n- Error responses to clients\n- Dependency validation before deletion\n- Tombstone creation via daemon\n\n## Test scenarios\n1. Delete single issue via RPC\n2. Delete multiple issues via RPC\n3. Cascade deletion of dependents\n4. Force delete with orphaned dependents\n5. Dry-run returns what would be deleted without actual deletion\n6. Error: invalid issue IDs\n7. Error: insufficient permissions\n8. Error: dependency blocks deletion (without force/cascade)\n\n## Related\n- Parent epic: bd-kyll\n- Original issue: bd-7z4","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-18T13:08:33.532111042-07:00","updated_at":"2025-12-23T21:22:11.26397-08:00","dependencies":[{"issue_id":"bd-dxtc","depends_on_id":"bd-kyll","type":"parent-child","created_at":"2025-12-18T13:08:33.534367367-07:00","created_by":"mhwilkie"}]} -{"id":"bd-dyy","title":"Review PR #513: fix hooks install docs","description":"Review and merge PR #513 from aspiers. This PR fixes incorrect docs for how to install git hooks - updates README to use bd hooks install instead of removed install.sh. Simple 1-line change. 
URL: https://github.com/anthropics/beads/pull/513","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:15:14.838772+11:00","updated_at":"2025-12-13T07:07:19.718544-08:00","closed_at":"2025-12-13T07:07:19.718544-08:00"} -{"id":"bd-e1085716","title":"bd validate - Comprehensive health check","description":"Run all validation checks in one command.\n\nChecks:\n- Duplicates\n- Orphaned dependencies\n- Test pollution\n- Git conflicts\n\nSupports --fix-all for auto-repair.\n\nDepends on bd-cbed9619.1, bd-0dcea000, bd-31aab707, bd-9826b69a.\n\nFiles: cmd/bd/validate.go (new)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-29T23:05:13.980679-07:00","updated_at":"2025-12-17T22:58:34.562008-08:00","closed_at":"2025-12-17T22:58:34.562008-08:00","close_reason":"Closed"} -{"id":"bd-e7ou","title":"Fix --as flag: uses title instead of ID in mol bond","description":"In bondProtoProto, the --as flag is documented as 'Custom ID for compound proto' but the implementation uses it as the title, not the issue ID.\n\n**Current behavior (mol.go:637-638):**\n```go\nif customID != '' {\n compoundTitle = customID // Used as title, not ID\n}\n```\n\n**Options:**\n1. Change flag description to say 'Custom title' (documentation fix)\n2. Actually use it as a custom ID prefix or full ID (feature change)\n3. Add separate --title flag and make --as actually set ID\n\nRecommend option 1 for simplest fix - change 'Custom ID' to 'Custom title' in the flag description.","status":"closed","priority":3,"issue_type":"bug","created_at":"2025-12-21T10:22:59.069368-08:00","updated_at":"2025-12-21T21:18:48.514513-08:00","closed_at":"2025-12-21T21:18:48.514513-08:00","close_reason":"Fixed - renamed customID to customTitle and updated dry-run output"} -{"id":"bd-eijl","title":"bd ship command for publishing capabilities","description":"Add `bd ship \u003ccapability\u003e` command that:\n\n1. Finds issue with `export:\u003ccapability\u003e` label\n2. 
Validates issue is closed (or --force to override)\n3. Adds `provides:\u003ccapability\u003e` label\n4. Protects `provides:*` namespace (only bd ship can add these labels)\n\nExample:\n```bash\nbd ship mol-run-assignee\n# Output: Shipped mol-run-assignee (bd-xyz)\n```\n\nPart of cross-project dependency system.\nSee: gastown/docs/cross-project-deps.md","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-21T22:37:19.123024-08:00","updated_at":"2025-12-21T23:11:47.498859-08:00","closed_at":"2025-12-21T23:11:47.498859-08:00","close_reason":"Implemented: bd ship command with export:/provides: labels, namespace protection in label add"} -{"id":"bd-elqd","title":"Systematic bd sync stability investigation","description":"## Context\n\nbd sync has chronic instability issues that have persisted since inception:\n- issues.jsonl is always dirty after push\n- bd sync often creates messes requiring manual cleanup\n- Problems escalating despite accumulated bug fixes\n- Workarounds are getting increasingly draconian\n\n## Goal\n\nSystematically observe and diagnose bd sync failures rather than applying band-aid fixes.\n\n## Approach\n\n1. Start fresh session with latest binary (all fixes applied)\n2. Run bd sync and carefully observe what happens\n3. Document exact sequence of events when things go wrong\n4. File specific issues for each discrete problem identified\n5. 
Track the root causes, not just symptoms\n\n## Test Environment\n\n- Fresh clone or clean state\n- Latest bd binary with all bug fixes\n- Monitor both local and remote JSONL state\n- Check for timing issues, race conditions, merge conflicts\n\n## Success Criteria\n\n- Identify root causes of sync instability\n- Create actionable issues for each problem\n- Eventually achieve stable bd sync (no manual intervention needed)","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T22:57:25.35289-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-etyv","title":"Smart --var detection for mol distill","description":"Implemented bidirectional syntax support for mol distill --var flag.\n\n**Problem:**\n- spawn uses: --var variable=value (assignment style)\n- distill used: --var value=variable (substitution style)\n- Agents would naturally guess spawn-style for both\n\n**Solution:**\nSmart detection that accepts BOTH syntaxes by checking which side appears in the epic text:\n- --var branch=feature-auth β†’ finds 'feature-auth' in text β†’ works\n- --var feature-auth=branch β†’ finds 'feature-auth' in text β†’ also works\n\n**Changes:**\n- Added parseDistillVar() with smart detection\n- Added collectSubgraphText() helper\n- Restructured runMolDistill to load subgraph before parsing vars\n- Updated help text to document both syntaxes\n- Added comprehensive tests in mol_test.go\n\n**Edge cases handled:**\n- Both sides found: prefers spawn-style (more common guess)\n- Neither found: helpful error message\n- Empty sides: validation error\n- Values containing '=' (e.g., KEY=VALUE): works via SplitN\n\nEmbodies the Beads philosophy: watch what agents do, make their guess 
correct.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T11:08:50.83923-08:00","updated_at":"2025-12-21T11:08:56.432536-08:00","closed_at":"2025-12-21T11:08:56.432536-08:00","close_reason":"Implemented"} -{"id":"bd-eyrh","title":"🀝 HANDOFF: Review remaining beads PRs","description":"## Current State\nJust merged PR #653 (doctor refactor) and added tests to restore coverage.\n\n## Remaining Open PRs to Review\nRun `gh pr list --repo steveyegge/beads` to see current list. As of handoff:\n\n1. #655 - feat: Linear Integration (jblwilliams)\n2. #651 - feat(audit): agent audit trail (dchichkov)\n3. #648 - Stop init creating redundant @AGENTS.md (maphew)\n4. #646 - fix(unix): handle Statfs field types (jordanhubbard)\n5. #645 - feat: /plan-to-beads Claude Code command (petebytes)\n6. #642, #641, #640 - sync branch fixes (cpdata)\n\n## Review Checklist\n- Check CI status with `gh pr checks \u003cnum\u003e --repo steveyegge/beads`\n- Verify no .beads/ data leaking (we have a hook now)\n- Review code quality\n- Merge good ones, request changes on problematic ones\n\n## Notes\n- User wants us to be proactive about merging good PRs\n- Can add tests ourselves if coverage drops","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-19T17:44:34.149837-08:00","updated_at":"2025-12-21T13:53:33.613805-08:00","deleted_at":"2025-12-21T13:53:33.613805-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message","sender":"Steve Yegge","wisp":true} -{"id":"bd-eyto","title":"Time-dependent tests may be flaky near TTL boundary","description":"Several tombstone merge tests use time.Now() to create test data: time.Now().Add(-24 * time.Hour), time.Now().Add(-60 * 24 * time.Hour), etc. While these work reliably in practice (24h vs 30d TTL has large margin), they could theoretically be flaky if: 1) Tests run slowly, 2) System clock changes during test, 3) TTL constants change. 
Recommendation: Consider using a fixed reference time or time injection for deterministic tests. Lower priority since current margin is large. Files: internal/merge/merge_test.go:1337-1338, 1352-1353, 1548-1549, 1590-1591","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-05T16:37:02.348143-08:00","updated_at":"2025-12-05T16:37:02.348143-08:00"} -{"id":"bd-f2lb","title":"Update CHANGELOG.md with release notes","description":"Add meaningful release notes to CHANGELOG.md describing what changed in test-squash","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066065-08:00","updated_at":"2025-12-21T13:53:49.858742-08:00","deleted_at":"2025-12-21T13:53:49.858742-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task","wisp":true} -{"id":"bd-f3ll","title":"Merge: bd-ot0w","description":"branch: polecat/dementus\ntarget: main\nsource_issue: bd-ot0w\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:20:33.495772-08:00","updated_at":"2025-12-20T23:17:27.000252-08:00","closed_at":"2025-12-20T23:17:27.000252-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-f5cc","title":"Thread Test","description":"Testing the thread feature","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:21:01.244501-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-f5cc","depends_on_id":"bd-x36g","type":"supersedes","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} -{"id":"bd-f616","title":"Digest: Version Bump: test-squash","description":"## Molecule Execution Summary\n\n**Molecule**: Version Bump: test-squash\n**Steps**: 8\n\n**Completed**: 0/8\n\n---\n\n### Steps\n\n1. 
**[open]** Verify release artifacts\n Check GitHub releases page - binaries for darwin/linux/windows should be available\n\n2. **[open]** Commit and push release\n git add -A \u0026\u0026 git commit \u0026\u0026 git push to trigger CI\n\n3. **[open]** Update CHANGELOG.md with release notes\n Add meaningful release notes to CHANGELOG.md describing what changed in test-squash\n\n4. **[open]** Wait for CI to pass\n Monitor GitHub Actions - all checks must pass before release artifacts are built\n\n5. **[open]** Restart running daemons\n Kill and restart any running bd daemons to pick up new version: pkill -f 'bd daemon' \u0026\u0026 bd daemon --start\n\n6. **[open]** Update local installation\n Run install script or brew upgrade to get new version locally: curl -fsSL .../install.sh | bash\n\n7. **[open]** Run bump-version.sh test-squash\n Run ./scripts/bump-version.sh test-squash to update version in all files\n\n8. **[open]** Update info.go versionChanges\n Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for test-squash\n\n","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:53:18.471919-08:00","updated_at":"2025-12-21T13:53:35.256043-08:00","close_reason":"Squashed from 8 ephemeral steps","deleted_at":"2025-12-21T13:53:35.256043-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} -{"id":"bd-f7p1","title":"Add tests for mol spawn --attach","description":"Code review (bd-obep) found no tests for the spawn --attach functionality.\n\n**Test cases needed:**\n1. Basic attach - spawn proto with one --attach\n2. Multiple attachments - spawn with --attach A --attach B\n3. Attach types - verify sequential vs parallel bonding\n4. Error case: attaching non-proto (missing template label)\n5. Variable aggregation - vars from primary + attachments combined\n6. 
Dry-run output includes attachment info\n\n**Implementation notes:**\n- Tests should use in-memory storage\n- Create test protos, spawn with attachments, verify dependency structure\n- Check that sequential uses 'blocks' type, parallel uses 'parent-child'","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T10:58:16.766461-08:00","updated_at":"2025-12-21T21:33:12.136215-08:00","closed_at":"2025-12-21T21:33:12.136215-08:00","close_reason":"Added 6 tests for mol spawn --attach: basic attach, multiple attachments, sequential/parallel bond types, non-proto validation, variable aggregation, and dry-run output","dependencies":[{"issue_id":"bd-f7p1","depends_on_id":"bd-obep","type":"discovered-from","created_at":"2025-12-21T10:58:16.767616-08:00","created_by":"daemon"}]} -{"id":"bd-fa2h","title":"🀝 HANDOFF: v0.31.0 released, molecules discussion","description":"Session completed 0.31.0 release and had important molecules discussion.\n\n## Completed\n- v0.31.0 released (deferred status, audit trail, directory labels, etc.)\n- Fixed lint issues, hook version markers, codesigning\n- All CI green, artifacts verified\n\n## Filed Issues\n- bd-usro: Rename template instantiate β†’ bd mol bond\n- bd-y8bj: Auto-detect identity for bd mail (P1 bug)\n- gt-975: Molecule execution support for polecats/crew\n- gt-976: Crew lifecycle support in Deacon\n\n## Key Insight\nMolecules are the future - TodoWrite is ephemeral, molecules are persistent institutional memory on the world chain. I tried to use TodoWrite for version bump and missed steps (codesigning, MCP verification). 
Molecules would have caught this.\n\n## Next Steps\n- bd mol bond implementation is priority\n- Max has gt-976 for crew lifecycle (enables automated refresh mid-molecule)\n\nCheck bd ready and gt-975/976 status.","status":"closed","priority":2,"issue_type":"message","assignee":"beads/crew/dave","created_at":"2025-12-20T17:23:09.889562-08:00","updated_at":"2025-12-21T17:52:18.467069-08:00","closed_at":"2025-12-21T17:52:18.467069-08:00","close_reason":"Stale handoff message - work completed","sender":"Steve Yegge","wisp":true} -{"id":"bd-fcl1","title":"Merge: bd-au0.5","description":"branch: polecat/Searcher\ntarget: main\nsource_issue: bd-au0.5\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:39:11.946667-08:00","updated_at":"2025-12-23T19:12:08.346454-08:00","closed_at":"2025-12-23T19:12:08.346454-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-ffjt","title":"Unify template.go and mol.go under bd mol","description":"Consolidate the two DAG-template systems into one under the mol command. mol.go (on rictus branch) has the right UX (catalog/show/bond), template.go has the mechanics. 
Merge them, deprecate bd template commands.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T23:52:13.208972-08:00","updated_at":"2025-12-21T00:01:59.283765-08:00","closed_at":"2025-12-21T00:01:59.283765-08:00","close_reason":"Implemented mol commands with deprecation for template commands"} -{"id":"bd-fgw3","title":"Update local installation","description":"Run install script or brew upgrade to get new version locally: curl -fsSL .../install.sh | bash","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:05.052016-08:00","updated_at":"2025-12-20T00:49:51.928221-08:00","closed_at":"2025-12-20T00:25:52.805029-08:00","dependencies":[{"issue_id":"bd-fgw3","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:15.248427-08:00","created_by":"daemon"},{"issue_id":"bd-fgw3","depends_on_id":"bd-si4g","type":"blocks","created_at":"2025-12-19T22:56:23.497325-08:00","created_by":"daemon"}]} -{"id":"bd-fi05","title":"bd sync fails with orphaned issues and duplicate ID conflict","description":"After fixing the deleted_at TEXT column scanning bug (commit 18b1eb2), bd sync still fails with two issues:\n\n1. Orphan Detection Warning: 12 orphaned child issues whose parents no longer exist (bd-cb64c226.* and bd-cbed9619.*)\n\n2. Import Failure: UNIQUE constraint failed for bd-360 - this tombstone exists in both DB and JSONL\n\nError: \"Import failed: error creating depth-0 issues: bulk insert issues: failed to insert issue bd-360: sqlite3: constraint failed: UNIQUE constraint failed: issues.id\"\n\nFix options:\n- Delete orphaned child issues with bd delete\n- Resolve bd-360 duplicate (in deletions.jsonl vs tombstone in DB)\n- Reset sync branch: git branch -f beads-sync main \u0026\u0026 git push --force-with-lease origin beads-sync","notes":"Fixed tombstone constraint violation bug. 
When deleting closed issues, the CHECK constraint (status = 'closed') = (closed_at IS NOT NULL) was violated because CreateTombstone didn't clear closed_at. Fix: set closed_at = NULL in tombstone creation SQL.\n\nThe sync data corruption (orphaned issues in beads-sync branch) requires manual cleanup: reset sync branch with 'git branch -f beads-sync main \u0026\u0026 git push --force-with-lease origin beads-sync'","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-13T07:14:33.831346-08:00","updated_at":"2025-12-13T10:50:48.545465-08:00","closed_at":"2025-12-13T07:30:33.843986-08:00"} -{"id":"bd-fmdy","title":"Merge: bd-kzda","description":"branch: polecat/toast\ntarget: main\nsource_issue: bd-kzda\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T00:27:28.952413-08:00","updated_at":"2025-12-23T01:33:25.731326-08:00","closed_at":"2025-12-23T01:33:25.731326-08:00","close_reason":"Merged to main"} -{"id":"bd-fom","title":"Remove all deletions.jsonl code except migration","description":"There's deletions manifest code spread across the entire codebase that should have been removed after tombstone migration:\n\nFiles with deletions code (non-migration):\n- internal/deletions/ - entire package\n- cmd/bd/sync.go - 25+ references, auto-compact, sanitize\n- cmd/bd/delete.go - dual-writes to deletions.jsonl\n- internal/importer/importer.go - checks deletions manifest\n- internal/syncbranch/worktree.go - merges deletions.jsonl\n- cmd/bd/doctor/fix/sync.go - cleanupDeletionsManifest\n- cmd/bd/doctor/fix/deletions.go - HydrateDeletionsManifest\n- cmd/bd/integrity.go - checks deletions for data loss\n- cmd/bd/deleted.go - entire command\n- cmd/bd/compact.go - pruneDeletionsManifest\n- cmd/bd/doctor.go - checkDeletionsManifest\n- Plus many more\n\nAction: Aggressively remove all non-migration deletions code. 
Tombstones are the only deletion mechanism now.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T13:29:04.960863-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-fu83","title":"Fix daemon/direct mode inconsistency in relate and duplicate commands","description":"The relate.go and duplicate.go commands have inconsistent daemon/direct mode handling:\n\nWhen daemonClient is connected, they resolve IDs via RPC but then perform updates directly via store.UpdateIssue(), bypassing the daemon.\n\nAffected locations:\n- relate.go:125-139 (runRelate update)\n- relate.go:235-246 (runUnrelate update) \n- duplicate.go:120 (runDuplicate update)\n- duplicate.go:207 (runSupersede update)\n\nShould either use RPC for updates when daemon is running, or document why direct access is intentional.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-16T20:52:54.164189-08:00","updated_at":"2025-12-21T21:47:14.10222-08:00","closed_at":"2025-12-21T21:47:14.10222-08:00","close_reason":"Already implemented in commit 3ec517cc: relate, unrelate, duplicate, supersede now use RPC Update when daemon available"} -{"id":"bd-fx7v","title":"Improve test coverage for cmd/bd/doctor/fix (23.9% β†’ 50%)","description":"The doctor/fix package has only 23.9% test coverage. 
The doctor fix functionality is important for troubleshooting.\n\nCurrent coverage: 23.9%\nTarget coverage: 50%","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/bravo","created_at":"2025-12-13T20:43:05.67127-08:00","updated_at":"2025-12-23T22:29:35.403854-08:00"} -{"id":"bd-fy4q","title":"Phase 1.2 follow-up: Clarify format storage","description":"Phase 1.2 created the bdt executable structure but issues.toon is currently stored in JSONL format, not TOON format.\n\nThis is intentional for now:\n- Phase 1.2 (bd-jv4w): Just infrastructure - separate binary, separate directory\n- Phase 1.3 (bd-j0tr): Implement actual TOON encoding/writing\n\nFor now, keep as-is: filename '.toon' signals intent, content is JSONL (interim format). Phase 1.3 will switch to actual TOON.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-19T14:03:19.491040345-07:00","updated_at":"2025-12-19T14:03:19.491040345-07:00","dependencies":[{"issue_id":"bd-fy4q","depends_on_id":"bd-jv4w","type":"discovered-from","created_at":"2025-12-19T14:03:19.498933555-07:00","created_by":"daemon"}]} -{"id":"bd-g4b4","title":"bd close hooks: context check and notifications","description":"Add hook system to bd close for notifications and custom actions.\n\n## Scope (MVP)\n\nImplement **command hooks only** for bd close. Deferred: notify, webhook types.\n\n## Implementation\n\n### 1. Config Schema\n\nAdd to internal/configfile/config.go:\n\n```go\ntype HooksConfig struct {\n OnClose []HookEntry `yaml:\"on_close,omitempty\"`\n}\n\ntype HookEntry struct {\n Command string `yaml:\"command\"` // Shell command to run\n Name string `yaml:\"name,omitempty\"` // Optional display name\n}\n```\n\nAdd `Hooks HooksConfig` field to Config struct.\n\n### 2. 
Hook Execution\n\nCreate internal/hooks/close_hooks.go:\n\n```go\nfunc RunCloseHooks(ctx context.Context, cfg *configfile.Config, issue *types.Issue) error {\n for _, hook := range cfg.Hooks.OnClose {\n cmd := exec.CommandContext(ctx, \"sh\", \"-c\", hook.Command)\n cmd.Env = append(os.Environ(),\n \"BEAD_ID=\"+issue.ID,\n \"BEAD_TITLE=\"+issue.Title,\n \"BEAD_TYPE=\"+string(issue.IssueType),\n \"BEAD_PRIORITY=\"+strconv.Itoa(issue.Priority),\n \"BEAD_CLOSE_REASON=\"+issue.CloseReason,\n )\n cmd.Stdout = os.Stdout\n cmd.Stderr = os.Stderr\n if err := cmd.Run(); err \\!= nil {\n // Log warning but dont fail the close\n fmt.Fprintf(os.Stderr, \"Warning: close hook %q failed: %v\\n\", hook.Name, err)\n }\n }\n return nil\n}\n```\n\n### 3. Integration Point\n\nIn cmd/bd/close.go, after successful close:\n\n```go\n// Run close hooks\nif cfg := configfile.Load(); cfg \\!= nil {\n hooks.RunCloseHooks(ctx, cfg, closedIssue)\n}\n```\n\n### 4. Example Config\n\n```yaml\n# .beads/config.yaml\nhooks:\n on_close:\n - name: show-next\n command: bd ready --limit 1\n - name: context-check \n command: echo \"Issue $BEAD_ID closed. Check context if nearing limit.\"\n```\n\n## Environment Variables\n\n| Variable | Description |\n|----------|-------------|\n| BEAD_ID | Issue ID (e.g., bd-abc1) |\n| BEAD_TITLE | Issue title |\n| BEAD_TYPE | Issue type (task, bug, feature, etc.) |\n| BEAD_PRIORITY | Priority (0-4) |\n| BEAD_CLOSE_REASON | Close reason if provided |\n\n## Testing\n\nAdd test in internal/hooks/close_hooks_test.go:\n- Test hook execution with mock config\n- Test env vars are set correctly\n- Test hook failure doesnt block close\n\n## Files to Create/Modify\n\n1. **Create:** internal/hooks/close_hooks.go\n2. **Create:** internal/hooks/close_hooks_test.go \n3. **Modify:** internal/configfile/config.go (add HooksConfig)\n4. **Modify:** cmd/bd/close.go (call RunCloseHooks)\n5. 
**Modify:** docs/CONFIG.md (document hooks config)\n\n## Out of Scope (Future)\n\n- notify hook type (gt mail integration)\n- webhook type (HTTP POST)\n- on_create, on_update hooks\n- Hook timeout configuration\n- Parallel hook execution","status":"closed","priority":3,"issue_type":"feature","assignee":"beads/Hooker","created_at":"2025-12-22T17:03:56.183461-08:00","updated_at":"2025-12-23T13:38:15.898746-08:00","closed_at":"2025-12-23T13:38:15.898746-08:00","close_reason":"Implemented config-based close hooks","dependencies":[{"issue_id":"bd-g4b4","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.811793-08:00","created_by":"daemon"}]} -{"id":"bd-g9eu","title":"Investigate TestRoutingIntegration failure","description":"TestRoutingIntegration/maintainer_with_SSH_remote failed during pre-commit check with \"expected role maintainer, got contributor\".\nThis occurred while running `go test -short ./...` on darwin/arm64.\nThe failure appears unrelated to storage/sqlite changes.\nNeed to investigate if this is a flaky test or environmental issue.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-20T15:55:19.337094-08:00","updated_at":"2025-11-20T15:55:19.337094-08:00"} -{"id":"bd-gfo3","title":"Merge: bd-ykd9","description":"branch: polecat/Doctor\ntarget: main\nsource_issue: bd-ykd9\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:34:43.778808-08:00","updated_at":"2025-12-23T19:12:08.353427-08:00","closed_at":"2025-12-23T19:12:08.353427-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-gjla","title":"Test Thread","description":"Initial message for threading 
test","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:19:51.704324-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-gjla","depends_on_id":"bd-f5cc","type":"duplicates","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} -{"id":"bd-gocx","title":"Run bump-version.sh 0.32.1","description":"Execute ./scripts/bump-version.sh 0.32.1 to update all version references","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:18.470174-08:00","updated_at":"2025-12-20T21:54:54.500836-08:00","closed_at":"2025-12-20T21:54:54.500836-08:00","close_reason":"Version bumped to 0.32.1","dependencies":[{"issue_id":"bd-gocx","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:18.471793-08:00","created_by":"daemon"},{"issue_id":"bd-gocx","depends_on_id":"bd-x3j8","type":"blocks","created_at":"2025-12-20T21:53:29.688436-08:00","created_by":"daemon"}]} -{"id":"bd-gqxd","title":"Enrich MutationEvent with title and assignee","description":"Current MutationEvent only has IssueID, no context. Add Title and Assignee fields so activity feeds can display meaningful info without extra lookups. 
Emit these fields when creating mutation events in server_core.go.","status":"closed","priority":2,"issue_type":"feature","assignee":"beads/furiosa","created_at":"2025-12-23T16:26:34.907259-08:00","updated_at":"2025-12-23T16:39:39.229462-08:00","closed_at":"2025-12-23T16:39:39.229462-08:00","close_reason":"Added Title and Assignee fields to MutationEvent, updated all callers"} -{"id":"bd-gxq","title":"Simplify bd onboard to minimal AGENTS.md snippet pointing to bd prime","description":"## Context\nGH#604 raised concerns about bd onboard bloating AGENTS.md with ~100+ lines of static instructions that:\n- Load every session whether beads is being used or not\n- Get stale when bd upgrades\n- Waste tokens\n\n## Solution\nSimplify `bd onboard` to output a minimal snippet (~2 lines) that points to `bd prime`:\n\n```markdown\n## Issue Tracking\nThis project uses beads (`bd`) for issue tracking.\nRun `bd prime` for workflow context, or hooks auto-inject it.\n```\n\n## Rationale\n- `bd prime` is dynamic, concise (~80 lines), and always matches installed bd version\n- Hooks already auto-inject `bd prime` at session start when .beads/ detected\n- AGENTS.md only needs to mention beads exists, not contain full instructions\n\n## Implementation\n1. Update `cmd/bd/onboard.go` to output minimal snippet\n2. Keep `--output` flag for BD_GUIDE.md generation (may still be useful)\n3. 
Update help text to explain the new approach","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T11:42:38.604891-08:00","updated_at":"2025-12-18T11:47:28.020419-08:00","closed_at":"2025-12-18T11:47:28.020419-08:00","close_reason":"Implemented: bd onboard now outputs minimal snippet pointing to bd prime"} -{"id":"bd-h0we","title":"Review SQLite indexes and scaling bottlenecks","description":"Audit the beads SQLite schema for:\n\n## Index Review\n- Are all frequently-queried columns indexed?\n- Are compound indexes needed for common query patterns?\n- Any missing indexes on foreign keys or filter columns?\n\n## Scaling Bottlenecks\n- How does performance degrade with 10k, 100k, 1M issues?\n- Full table scans in hot paths?\n- JSONL export/import performance at scale\n- Transaction contention in multi-agent scenarios\n\n## Common Query Patterns to Optimize\n- bd ready (status + blocked_by resolution)\n- bd list with filters (status, type, priority, labels)\n- bd show with dependency graph traversal\n- bd sync import/export\n\n## Deliverables\n- Document current indexes\n- Identify missing indexes\n- Benchmark key operations at scale\n- Recommend schema improvements","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T23:41:06.481881-08:00","updated_at":"2025-12-22T22:59:25.178175-08:00","closed_at":"2025-12-22T22:59:25.178175-08:00","close_reason":"Completed comprehensive review. Created 7 follow-up issues:\n- bd-bha9: updated_at index (P2)\n- bd-a9y3: (status, priority) composite (P3) \n- bd-jke6: labels covering index (P3)\n- bd-8x3w: dependencies (issue_id, type) (P3)\n- bd-lk39: events composite (P4)\n- bd-zw72: cache scaling investigation (P3)\n- bd-m964: FTS5 consideration (P4)\n\nKey findings:\n1. Current schema has good coverage for hot paths\n2. blocked_issues_cache provides 25x speedup for GetReadyWork\n3. Main gaps are composite indexes for common filter combinations\n4. 
Scaling concerns start at 100K+ issues, primarily around text search and cache rebuild"} -{"id":"bd-h27p","title":"Merge: bd-g4b4","description":"branch: polecat/Hooker\ntarget: main\nsource_issue: bd-g4b4\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T13:38:50.707153-08:00","updated_at":"2025-12-23T19:12:08.357806-08:00","closed_at":"2025-12-23T19:12:08.357806-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-h807","title":"Cross-project dependency support","description":"Enable tracking dependencies across project boundaries.\n\n## Mechanism\n- Producer: `bd ship \u003ccapability\u003e` adds `provides:\u003ccapability\u003e` label\n- Consumer: `blocked_by: external:\u003cproject\u003e:\u003ccapability\u003e`\n- Resolution: `bd ready` checks external deps via config\n\n## Design Doc\nSee: gastown/docs/cross-project-deps.md\n\n## Children\n- bd-eijl: bd ship command\n- bd-om4a: external: prefix in blocked_by\n- bd-66w1: external_projects config\n- bd-zmmy: bd ready resolution","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-21T22:38:01.116241-08:00","updated_at":"2025-12-22T00:02:09.271076-08:00","closed_at":"2025-12-22T00:02:09.271076-08:00","close_reason":"All children completed: bd ship, external: prefix, config, and bd ready resolution"} -{"id":"bd-h8q","title":"Add tests for validation functions","description":"Validation functions like ParseIssueType have 0% coverage. 
These are critical for ensuring data quality and preventing invalid data from entering the system.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:01:02.843488344-07:00","updated_at":"2025-12-18T07:03:53.561016965-07:00","closed_at":"2025-12-18T07:03:53.561016965-07:00","dependencies":[{"issue_id":"bd-h8q","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:01:02.846419747-07:00","created_by":"matt"}]} -{"id":"bd-h8ym","title":"Wait for CI to pass","description":"Monitor GitHub Actions - all checks must pass before release artifacts are built","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066792-08:00","updated_at":"2025-12-21T13:53:49.454536-08:00","deleted_at":"2025-12-21T13:53:49.454536-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task","wisp":true} -{"id":"bd-haxi","title":"Restart running daemons","description":"Kill and restart any running bd daemons to pick up new version: pkill -f 'bd daemon' \u0026\u0026 bd daemon --start","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066262-08:00","updated_at":"2025-12-21T13:53:49.757078-08:00","deleted_at":"2025-12-21T13:53:49.757078-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task","wisp":true} -{"id":"bd-haze","title":"Fix beads-9yc: pinned column missing from schema. gt mail...","description":"Fix beads-9yc: pinned column missing from schema. gt mail send fails because some beads DBs lack the pinned column. Add migration to ensure it exists.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T15:05:33.394801-08:00","updated_at":"2025-12-21T15:26:35.171757-08:00","closed_at":"2025-12-21T15:26:35.171757-08:00","close_reason":"Already fixed - migration 023_pinned_column.go adds the column if missing. 
Verified working."} -{"id":"bd-hhv3","title":"Test and document molecular chemistry commands","description":"## Context\n\nImplemented the molecular chemistry UX commands per the design docs:\n- gastown/mayor/rig/docs/molecular-chemistry.md\n- gastown/mayor/rig/docs/chemistry-design-changes.md\n\nCommit: cadf798b\n\n## New Commands to Test\n\n| Command | Purpose |\n|---------|---------|\n| `bd pour \u003cproto\u003e` | Instantiate proto as persistent mol |\n| `bd wisp create \u003cproto\u003e` | Instantiate proto as ephemeral wisp |\n| `bd hook [--agent]` | Inspect what's on an agent's hook |\n\n## Enhanced Commands to Test\n\n| Command | Changes |\n|---------|---------|\n| `bd mol spawn --pour` | New flag, `--persistent` deprecated |\n| `bd mol bond --pour` | Force liquid phase on wisp target |\n| `bd pin --for \u003cagent\u003e --start` | Chemistry workflow support |\n\n## Test Scenarios\n\n1. **bd pour**: Create persistent mol from a proto\n - Verify creates in .beads/ (not .beads-wisp/)\n - Verify variable substitution works\n - Verify --dry-run works\n\n2. **bd wisp create**: Create ephemeral wisp from proto\n - Verify creates in .beads-wisp/\n - Verify bd wisp list shows it\n - Verify bd mol squash works\n - Verify bd mol burn works\n\n3. **bd hook**: Inspect pinned work\n - Pin something, verify bd hook shows it\n - Test --agent flag\n - Test --json output\n\n4. **bd pin --for**: Assign work to agent\n - Verify sets pinned=true\n - Verify sets assignee\n - Verify --start sets status=in_progress\n\n5. 
**bd mol bond --pour**: Force liquid on wisp target\n - Bond a proto to a wisp with --pour\n - Verify spawned issues are in .beads/\n\n## Documentation\n\n- Update CLAUDE.md with new commands\n- Add examples to --help output (already done)\n- Consider adding to docs/CLI_REFERENCE.md\n\n## Code Review\n\n- Check for edge cases\n- Verify error messages are helpful\n- Ensure --json output is consistent","status":"closed","priority":1,"issue_type":"task","assignee":"beads/dave","created_at":"2025-12-22T02:22:10.906646-08:00","updated_at":"2025-12-22T02:55:37.983703-08:00","closed_at":"2025-12-22T02:55:37.983703-08:00","close_reason":"All commands tested and documented"} -{"id":"bd-hkr6","title":"GH#518: Document bd setup command","description":"bd setup is undiscoverable. Add to README/docs. Currently only findable by grepping source. See GitHub issue #518.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T01:03:54.664668-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-hlsw","title":"Add sync resilience guardrails for forced pushes and prefix mismatches","description":"Beads can get into unrecoverable sync states when remote force pushes occur (e.g., rebases) combined with prefix mismatches from multi-worker scenarios. Add detection, prevention, and auto-recovery features to handle this gracefully.","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-14T10:40:14.872875259-07:00","updated_at":"2025-12-14T10:40:14.872875259-07:00"} -{"id":"bd-hlsw.3","title":"Auto-recovery mode (bd sync --auto-recover)","description":"Add bd sync --auto-recover flag that: detects problematic sync state, backs up .beads/issues.db with timestamp, rebuilds DB from JSONL atomically, verifies consistency, reports what was fixed. 
Provides safety valve when sync integrity fails.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-14T10:40:20.599836875-07:00","updated_at":"2025-12-14T10:40:20.599836875-07:00","dependencies":[{"issue_id":"bd-hlsw.3","depends_on_id":"bd-hlsw","type":"parent-child","created_at":"2025-12-14T10:40:20.600435888-07:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-hlsw.4","title":"Sync branch integrity guards","description":"Track sync branch parent commit. If sync branch was force-pushed, warn user and require confirmation before proceeding. Add option to reset to remote if user accepts rebase. Prevents silent corruption from forced pushes.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-14T10:40:20.645402352-07:00","updated_at":"2025-12-14T10:40:20.645402352-07:00","dependencies":[{"issue_id":"bd-hlsw.4","depends_on_id":"bd-hlsw","type":"parent-child","created_at":"2025-12-14T10:40:20.646425761-07:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-hlyr","title":"Merge: bd-m8ro","description":"branch: polecat/max\ntarget: main\nsource_issue: bd-m8ro\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:45:40.218445-08:00","updated_at":"2025-12-23T21:21:57.69886-08:00","closed_at":"2025-12-23T21:21:57.69886-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-hnkg","title":"GH#540: Add silent quick-capture mode (bd q)","description":"Add bd q alias for quick capture that outputs only issue ID. Useful for piping/scripting. 
See GitHub issue #540.","status":"tombstone","priority":2,"issue_type":"feature","created_at":"2025-12-16T01:03:38.260135-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"feature"} -{"id":"bd-hvng","title":"Merge: bd-w193","description":"branch: polecat/nux\ntarget: main\nsource_issue: bd-w193\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:23:47.496139-08:00","updated_at":"2025-12-20T23:17:26.996479-08:00","closed_at":"2025-12-20T23:17:26.996479-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-hw3w","title":"Update info.go versionChanges","description":"Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for {{version}}","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:01.016558-08:00","updated_at":"2025-12-20T17:59:26.262511-08:00","closed_at":"2025-12-20T01:23:50.3879-08:00","dependencies":[{"issue_id":"bd-hw3w","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:14.941855-08:00","created_by":"daemon"},{"issue_id":"bd-hw3w","depends_on_id":"bd-czss","type":"blocks","created_at":"2025-12-19T22:56:23.219257-08:00","created_by":"daemon"}]} -{"id":"bd-hy9p","title":"Add --body-file flag to bd create for reading descriptions from files","description":"## Problem\n\nCreating issues with long/complex descriptions via CLI requires shell escaping gymnastics:\n\n```bash\n# Current workaround - awkward heredoc quoting\nbd create --title=\"...\" --description=\"$(cat \u003c\u003c'EOF'\n...markdown...\nEOF\n)\"\n\n# Often fails with quote escaping errors in eval context\n# Agents resort to writing temp files then reading them\n```\n\n## Proposed Solution\n\nAdd `--body-file` and `--description-file` flags to read description from a file, matching `gh` CLI pattern.\n\n```bash\n# Natural pattern 
that aligns with training data\ncat \u003e /tmp/desc.md \u003c\u003c 'EOF'\n...markdown content...\nEOF\n\nbd create --title=\"...\" --body-file=/tmp/desc.md\n```\n\n## Implementation\n\n### 1. Add new flags to `bd create`\n\n```go\ncreateCmd.Flags().String(\"body-file\", \"\", \"Read description from file (use - for stdin)\")\ncreateCmd.Flags().String(\"description-file\", \"\", \"Alias for --body-file\")\n```\n\n### 2. Flag precedence\n\n- If `--body-file` or `--description-file` is provided, read from file\n- If value is `-`, read from stdin\n- Otherwise fall back to `--body` or `--description` flag\n- If neither provided, description is empty (current behavior)\n\n### 3. Error handling\n\n- File doesn't exist → clear error message\n- File not readable → clear error message\n- stdin specified but not available → clear error message\n\n## Benefits\n\n✅ **Matches training data**: `gh issue create --body-file file.txt` is a common pattern\n✅ **No shell escaping issues**: File content is read directly\n✅ **Works with any content**: Markdown, special characters, quotes, etc.\n✅ **Agent-friendly**: Agents already write complex content to temp files\n✅ **User-friendly**: Easier for humans too when pasting long descriptions\n\n## Related Commands\n\nConsider adding similar support to:\n- `bd update --body-file` (for updating descriptions)\n- `bd comment --body-file` (if/when we add comments)\n\n## Examples\n\n```bash\n# From file\nbd create --title=\"Add new feature\" --body-file=feature.md\n\n# From stdin\necho \"Quick description\" | bd create --title=\"Bug fix\" --body-file=-\n\n# With other flags\nbd create \\\n --title=\"Security issue\" \\\n --type=bug \\\n --priority=0 \\\n --body-file=security-report.md \\\n --label=security\n```\n\n## Testing\n\n- Test with normal files\n- Test with stdin (`-`)\n- Test with non-existent files (error handling)\n- Test with binary files (should handle gracefully)\n- Test with empty files (valid - empty 
description)\n- Test that `--description-file` and `--body-file` are equivalent aliases","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-11-22T00:02:08.762684-08:00","updated_at":"2025-12-17T23:13:40.536024-08:00","closed_at":"2025-12-17T17:28:52.505239-08:00"} -{"id":"bd-hyp6","title":"Gate: timer:1m","status":"open","priority":1,"issue_type":"gate","assignee":"deacon/","created_at":"2025-12-23T13:41:18.201653-08:00","updated_at":"2025-12-23T13:41:18.201653-08:00","wisp":true} -{"id":"bd-hzvz","title":"Update info.go versionChanges","description":"Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for 0.30.7","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:48.649359-08:00","updated_at":"2025-12-19T22:57:31.604229-08:00","closed_at":"2025-12-19T22:57:31.604229-08:00","dependencies":[{"issue_id":"bd-hzvz","depends_on_id":"bd-8pyn","type":"parent-child","created_at":"2025-12-19T22:56:48.652068-08:00","created_by":"stevey"},{"issue_id":"bd-hzvz","depends_on_id":"bd-2ep8","type":"blocks","created_at":"2025-12-19T22:56:48.652376-08:00","created_by":"stevey"}]} -{"id":"bd-i0rx","title":"Merge: bd-ao0s","description":"branch: polecat/rictus\ntarget: main\nsource_issue: bd-ao0s\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-20T01:13:42.716658-08:00","updated_at":"2025-12-20T23:17:26.993744-08:00","closed_at":"2025-12-20T23:17:26.993744-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-ia3g","title":"BondRef.ProtoID field name is misleading for mol+mol bonds","description":"In bondMolMol, the BondRef.ProtoID field is used to store molecule IDs:\n\n```go\nBondedFrom: append(molA.BondedFrom, types.BondRef{\n ProtoID: molB.ID, // This is a molecule, not a proto\n ...\n})\n```\n\nThis is semantically confusing since ProtoID suggests it should only hold proto references.\n\n**Options:**\n1. 
Rename ProtoID to SourceID (breaking change, needs migration)\n2. Add documentation clarifying ProtoID can hold molecule IDs in bond context\n3. Leave as-is, accept the naming is imprecise\n\nLow priority since it's just naming, not functionality.","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-21T10:23:00.755067-08:00","updated_at":"2025-12-21T10:23:00.755067-08:00"} -{"id":"bd-ibl9","title":"Merge: bd-4qfb","description":"branch: polecat/Polish\ntarget: main\nsource_issue: bd-4qfb\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:37:57.255125-08:00","updated_at":"2025-12-23T19:12:08.352249-08:00","closed_at":"2025-12-23T19:12:08.352249-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-icfe","title":"gt spawn/crew setup should create .beads/redirect for worktrees","description":"Crew clones and polecats need a .beads/redirect file pointing to the shared beads database (../../mayor/rig/.beads). Currently:\n\n- redirect files can get deleted by git clean\n- not auto-created during gt spawn or worktree setup\n- missing redirects cause 'no beads database found' errors\n\nFound missing in: gastown/joe, beads/zoey (after git clean)\n\nFix options:\n1. gt spawn creates redirect during worktree setup\n2. gt prime regenerates missing redirects\n3. bd commands auto-detect worktree and find shared beads\n\nThis should be standard Gas Town rig configuration.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T01:30:26.115872-08:00","updated_at":"2025-12-21T17:51:25.740811-08:00","closed_at":"2025-12-21T17:51:25.740811-08:00","close_reason":"Moved to gastown: gt-b6qm"} -{"id":"bd-icnf","title":"Add bd mol run command (bond + assign + pin)","description":"bd mol run = bond + assign root to caller + pin to startup mail. This is the Gas Town integration point. 
When agent restarts, check startup mail, find pinned molecule root, query bd ready for next step. Makes molecules immortal.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T23:52:17.462882-08:00","updated_at":"2025-12-21T00:07:25.803058-08:00","closed_at":"2025-12-21T00:07:25.803058-08:00","close_reason":"Implemented bd mol run command","dependencies":[{"issue_id":"bd-icnf","depends_on_id":"bd-ffjt","type":"blocks","created_at":"2025-12-20T23:52:25.871742-08:00","created_by":"daemon"}]} -{"id":"bd-ieyy","title":"bd close --continue: auto-advance to next molecule step","description":"Add --continue flag to bd close for seamless molecule step transitions.\n\n## Usage\n\nbd close \u003cstep-id\u003e --continue [--no-auto]\n\n## Behavior\n\n1. Closes the specified step\n2. Finds next ready step in same molecule (sibling/child)\n3. By default, marks it in_progress (--no-auto to skip)\n4. Outputs the transition\n\n## Output\n\n[done] Closed gt-abc.3: Implement feature\n\nNext ready in molecule:\n gt-abc.4: Write tests\n\n[arrow] Marked in_progress (use --no-auto to skip)\n\n## If no next step\n\n[done] Closed gt-abc.6: Exit decision\n\nMolecule gt-abc complete! 
All steps closed.\nConsider: bd mol squash gt-abc --summary '...'\n\n## Key behaviors\n- Detects parent molecule from closed step\n- Finds next unblocked sibling\n- Auto-claims by default (propulsion principle)\n- Graceful handling when molecule is complete\n\n## Gas Town integration\n- gt-lz13: Update templates with nav workflow\n- gt-um6q: Update docs with nav workflow","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-22T17:03:44.238243-08:00","updated_at":"2025-12-22T17:36:31.937727-08:00","closed_at":"2025-12-22T17:36:31.937727-08:00","close_reason":"Implemented mol current and close --continue"} -{"id":"bd-ifuw","title":"test hook pin fix","status":"tombstone","priority":2,"issue_type":"task","assignee":"dave","created_at":"2025-12-23T04:43:15.598698-08:00","updated_at":"2025-12-23T04:51:29.438139-08:00","deleted_at":"2025-12-23T04:51:29.438139-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-iic1","title":"Phase 2.2: Switch bdt storage to TOON format","description":"Currently bdt stores issues in JSONL format in issues.toon file. Phase 2.2 must implement actual TOON format storage - this is the fundamental goal of the bdtoon project.\n\n## Current State (Phase 2.1)\n- issues.toon stores JSONL (intermediate format)\n- --toon flag allows output in TOON format for LLM consumption\n- Problem: We're not actually using TOON as the fundamental storage format\n\n## Required Work (Phase 2.2)\n1. Switch issue file I/O to write TOON format instead of JSONL\n - Update cmd/bdt/storage.go to use EncodeTOON for writing\n - Update cmd/bdt/storage.go to decode TOON (currently decodes JSON)\n - Ensure round-trip: write TOON → read TOON → write TOON is byte-identical\n\n2. 
Update command implementations\n - cmd/bdt/create.go: Write newly created issues to TOON format\n - cmd/bdt/list.go: Read issues from TOON format\n - cmd/bdt/show.go: Read from TOON format\n - cmd/bdt/import.go: Convert imported JSONL to TOON\n - cmd/bdt/export.go: Export TOON to JSONL (for bd compatibility)\n\n3. Implement TOON parser that handles gotoon's encoder-only limitation\n - Since gotoon doesn't decode TOON, need custom TOON→JSON decoder\n - OR continue storing TOON but decoding via intermediate JSON conversion\n\n4. Git merge driver optimization\n - TOON is line-oriented, better for 3-way merges than binary formats\n - Configure git merge driver for .toon files\n\n5. Comprehensive testing\n - Round-trip tests: Issue → TOON → storage → read → Issue\n - Merge conflict resolution tests with TOON format\n - Large issue set performance tests\n\n## Success Criteria\n- issues.toon stores actual TOON format (not JSONL)\n- bdt list reads from TOON file\n- bdt create writes to TOON file\n- Round-trip: create issue → list → show returns identical data\n- All 65+ tests still passing\n- Performance comparable to JSONL storage","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:05:41.394964404-07:00","updated_at":"2025-12-19T14:37:17.879612634-07:00","closed_at":"2025-12-19T14:37:17.879612634-07:00"} -{"id":"bd-in7","title":"Test message","description":"Hello world","status":"closed","priority":2,"issue_type":"message","created_at":"2025-12-17T23:16:13.184946-08:00","updated_at":"2025-12-18T17:42:26.000073-08:00","closed_at":"2025-12-17T23:37:38.563369-08:00"} -{"id":"bd-indn","title":"bd template commands fail with daemon mode","description":"The `bd template show` and `bd template instantiate` commands fail with 'Error loading template: no database connection' when daemon is running.\n\n**Reproduction:**\n```bash\nbd daemon --start\nbd template show bd-qqc # Error: no database connection\nbd template show bd-qqc --no-daemon 
# Works\n```\n\n**Expected:** Template commands should work with daemon like other commands.\n\n**Workaround:** Use `--no-daemon` flag.\n\n**Location:** Likely in cmd/bd/template.go - daemon RPC path not implemented for template operations.","status":"in_progress","priority":2,"issue_type":"bug","assignee":"beads/india","created_at":"2025-12-18T22:57:35.16596-08:00","updated_at":"2025-12-23T22:29:35.689875-08:00"} -{"id":"bd-io8c","title":"Improve test coverage for internal/syncbranch (33.0% → 70%)","description":"Improve test coverage for internal/syncbranch package from 27% to 70%.\n\n## Current State\n- Coverage: 27.0%\n- Files: syncbranch.go, worktree.go\n- Tests: syncbranch_test.go (basic tests exist)\n\n## Functions Needing Tests\n\n### syncbranch.go (config management)\n- [x] ValidateBranchName - has tests\n- [ ] Get - needs store mock tests\n- [ ] GetFromYAML - needs YAML parsing tests\n- [ ] IsConfigured - needs file system tests\n- [ ] IsConfiguredWithDB - needs DB path tests\n- [ ] Set - needs store mock tests\n- [ ] Unset - needs store mock tests\n\n### worktree.go (git operations) - PRIORITY\n- [ ] CommitToSyncBranch - needs git repo fixture tests\n- [ ] PullFromSyncBranch - needs merge scenario tests\n- [ ] CheckDivergence - needs ahead/behind tests\n- [ ] ResetToRemote - needs reset scenario tests\n- [ ] performContentMerge - needs 3-way merge tests\n- [ ] extractJSONLFromCommit - needs git show tests\n- [ ] hasChangesInWorktree - needs dirty state tests\n- [ ] commitInWorktree - needs commit scenario tests\n\n## Implementation Guide\n\n1. **Use testutil fixtures:**\n ```go\n import \"github.com/steveyegge/beads/internal/testutil/fixtures\"\n \n func TestCommitToSyncBranch(t *testing.T) {\n repo := fixtures.NewGitRepo(t)\n defer repo.Cleanup()\n // ... test scenarios\n }\n ```\n\n2. 
**Test scenarios for worktree.go:**\n - Clean commit (no conflicts)\n - Non-fast-forward push (diverged)\n - Merge conflict resolution\n - Empty changes (nothing to commit)\n\n3. **Mock storage for syncbranch.go:**\n ```go\n store := memory.New()\n // Set up test config\n syncbranch.Set(ctx, store, \"beads-sync\")\n ```\n\n## Success Criteria\n- Coverage ≥ 70%\n- All public functions have at least one test\n- Edge cases covered for git operations\n- Tests pass with `go test -race ./internal/syncbranch`\n\n## Run Tests\n```bash\ngo test -v -cover ./internal/syncbranch\ngo test -race ./internal/syncbranch\n```","status":"closed","priority":1,"issue_type":"task","assignee":"beads/Syncer","created_at":"2025-12-13T20:43:02.079145-08:00","updated_at":"2025-12-23T13:46:10.191435-08:00","closed_at":"2025-12-23T13:46:10.191435-08:00","close_reason":"Improved coverage from 27% to 67% (close to 70% target)","dependencies":[{"issue_id":"bd-io8c","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.213092-08:00","created_by":"daemon"}]} -{"id":"bd-ipj7","title":"enhance 'bd status' to show recent activity","description":"It would be nice to be able to quickly view the last N changes in the database, to see what's recently been worked on. 
I'm imagining something like 'bd status activity'.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-21T11:08:50.996541974-07:00","updated_at":"2025-12-21T17:54:00.279039-08:00","closed_at":"2025-12-21T17:54:00.279039-08:00","close_reason":"Already implemented - bd status includes Recent Activity section"} -{"id":"bd-ipva","title":"Update go install bd to 0.33.2","description":"Rebuild and install bd to ~/go/bin:\n\n```bash\ngo install ./cmd/bd\n~/go/bin/bd version # Verify shows 0.33.2\n```\n\nNote: If ~/go/bin is in PATH before /opt/homebrew/bin, this is the version that runs by default.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760715-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Local dev build used instead of go install","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-iq19","title":"Distill: promote ad-hoc epic to proto","description":"Extract a reusable proto from an existing ad-hoc epic.\n\nCOMMAND: bd mol distill \u003cepic-id\u003e [--as \u003cproto-name\u003e]\n\nBEHAVIOR:\n- Clone the epic and all children as a new proto\n- Set is_template=true on all cloned issues\n- Replace concrete values with {{variable}} placeholders (interactive or --var flags)\n- Add to proto catalog\n\nFLAGS:\n- --as NAME: Custom proto ID (default: proto-\u003cepic-id\u003e)\n- --var field=placeholder: Replace value with variable placeholder\n- --interactive: Prompt for each field that looks parameterizable\n- --dry-run: Preview the proto structure\n\nEXAMPLE:\n bd mol distill bd-o5xe --as proto-feature-workflow \\\n --var title=feature_name \\\n --var assignee=worker\n\nUSE CASES:\n- Team develops good workflow organically, wants to reuse it\n- Capture tribal knowledge as executable templates\n- Create starting point for similar future work\n\nThe reverse of spawn: instead of 
proto → molecule, it's molecule → proto.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T01:05:07.953538-08:00","updated_at":"2025-12-21T10:31:56.814246-08:00","closed_at":"2025-12-21T10:31:56.814246-08:00","close_reason":"Implemented distill command in mol.go","dependencies":[{"issue_id":"bd-iq19","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T01:05:16.495774-08:00","created_by":"daemon"},{"issue_id":"bd-iq19","depends_on_id":"bd-rnnr","type":"blocks","created_at":"2025-12-21T01:05:16.560404-08:00","created_by":"daemon"}]} -{"id":"bd-iq7n","title":"Audit and fix JSONL filename mismatches across all repo clones","description":"## Problem\n\nMultiple clones of repos are configured with different JSONL filenames (issues.jsonl vs beads.jsonl), causing:\n1. JSONL files to be resurrected after deletion (one clone pushes issues.jsonl, another pushes beads.jsonl)\n2. Agents unable to see issues filed by other agents after sync\n3. Merge conflicts and data inconsistencies\n\n## Root Cause\n\nWhen repos were \"bd doctored\" or initialized at different times, some got issues.jsonl (old default) and others got beads.jsonl (Beads repo specific). 
These clones push their respective files, creating duplicates.\n\n## Task\n\nScan all repo clones under ~/src/ (1-2 levels deep) and standardize their JSONL configuration.\n\n### Step 1: Find all beads-enabled repos\n\n```bash\n# Find all directories named 'beads' at levels 1-2 under ~/src/\nfind ~/src -maxdepth 2 -type d -name beads\n```\n\n### Step 2: For each repo found, check configuration\n\nFor each directory from Step 1, check:\n- Does `.beads/metadata.json` exist?\n- What is the `jsonl_export` value?\n- What JSONL files actually exist in `.beads/`?\n- Are there multiple JSONL files (problem!)?\n\n### Step 3: Create audit report\n\nGenerate a report showing:\n```\nRepo Path | Config | Actual Files | Status\n----------------------------------- | ------------- | ---------------------- | --------\n~/src/beads | beads.jsonl | beads.jsonl | OK\n~/src/dave/beads | issues.jsonl | issues.jsonl | MISMATCH\n~/src/emma/beads | issues.jsonl | issues.jsonl, beads.jsonl | DUPLICATE!\n```\n\n### Step 4: Determine canonical name for each repo\n\nFor repos that are the SAME git repository (check `git remote -v`):\n- Group them together\n- Determine which JSONL filename should be canonical (majority wins, or beads.jsonl for the beads repo itself)\n- List which clones need to be updated\n\n### Step 5: Generate fix script\n\nCreate a script that for each mismatched clone:\n1. Updates `.beads/metadata.json` to use the canonical name\n2. If JSONL file needs renaming: `git mv .beads/old.jsonl .beads/new.jsonl`\n3. Removes any duplicate JSONL files: `git rm .beads/duplicate.jsonl`\n4. Commits the change\n5. Syncs: `bd sync`\n\n### Expected Output\n\n1. Audit report showing all repos and their config status\n2. List of repos grouped by git remote (same repository)\n3. Fix script or manual instructions for standardizing each repo\n4. 
Verification that after fixes, all clones of the same repo use the same JSONL filename\n\n### Edge Cases\n\n- Handle repos without metadata.json (use default discovery)\n- Handle repos with no git remote (standalone/local)\n- Handle repos that are not git repositories\n- Don't modify repos with uncommitted changes (warn instead)\n\n### Success Criteria\n\n- All clones of the same git repository use the same JSONL filename\n- No duplicate JSONL files in any repo\n- All configurations documented in metadata.json\n- bd doctor passes on all repos","status":"closed","priority":0,"issue_type":"task","created_at":"2025-11-21T23:58:35.044762-08:00","updated_at":"2025-12-17T23:13:40.531403-08:00","closed_at":"2025-12-17T16:50:59.510972-08:00"} -{"id":"bd-is6m","title":"Add gate checking to Deacon patrol loop","description":"Integrate gate checking into Deacon's patrol cycle.\n\n## Patrol Integration\n```go\nfunc (d *Deacon) checkGates(ctx context.Context) {\n gates, _ := d.store.ListOpenGates(ctx)\n \n for _, gate := range gates {\n // Check timeout\n if time.Since(gate.CreatedAt) \u003e gate.Timeout {\n d.notifyWaiters(gate, \"timeout\")\n d.closeGate(gate, \"timed out\")\n continue\n }\n \n // Check condition\n if d.checkCondition(gate.AwaitType, gate.AwaitID) {\n d.notifyWaiters(gate, \"cleared\")\n d.closeGate(gate, \"condition met\")\n }\n }\n}\n```\n\n## Note\nThis task is in Gas Town (gt), not beads. 
May need to be moved there.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:36.839709-08:00","updated_at":"2025-12-23T12:19:44.204647-08:00","closed_at":"2025-12-23T12:19:44.204647-08:00","close_reason":"Moved to gastown: gt-dh65","dependencies":[{"issue_id":"bd-is6m","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.909253-08:00","created_by":"daemon"},{"issue_id":"bd-is6m","depends_on_id":"bd-u66e","type":"blocks","created_at":"2025-12-23T11:44:56.428084-08:00","created_by":"daemon"}]} -{"id":"bd-iw4z","title":"Compound visualization in bd mol show","description":"Enhance bd mol show to display compound structure.\n\nENHANCEMENTS:\n- Show constituent protos and how they're bonded\n- Display bond type (sequential/parallel) between components\n- Indicate attachment points\n- Show combined variable requirements across all protos\n\nEXAMPLE OUTPUT:\n\n Compound: proto-feature-with-tests\n Bonded from:\n └─ proto-feature (root)\n └─ proto-testing (sequential, after completion)\n \n Variables: {{name}}, {{version}}, {{test_suite}}\n \n Structure:\n proto-feature-with-tests\n ├─ Design feature {{name}}\n ├─ Implement core\n ├─ Write unit tests ← from proto-testing\n └─ Run test suite {{test_suite}} ← from proto-testing","status":"deferred","priority":2,"issue_type":"task","created_at":"2025-12-21T00:59:26.71318-08:00","updated_at":"2025-12-21T11:12:44.012871-08:00","dependencies":[{"issue_id":"bd-iw4z","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.500865-08:00","created_by":"daemon"},{"issue_id":"bd-iw4z","depends_on_id":"bd-rnnr","type":"blocks","created_at":"2025-12-21T00:59:51.891643-08:00","created_by":"daemon"}]} -{"id":"bd-iz5t","title":"Swarm: 13 beads backlog issues for polecat execution","description":"## Swarm Overview\n\n13 issues prepared for parallel polecat execution. 
All issues have been enhanced with concrete implementation guidance, file lists, and success criteria.\n\n## Issue List\n\n### Bug (1) - HIGH PRIORITY\n| ID | Priority | Title |\n|----|----------|-------|\n| bd-phtv | P1 | Pinned field overwritten by subsequent commands |\n\n### Test Coverage (3)\n| ID | Package | Target |\n|----|---------|--------|\n| bd-io8c | internal/syncbranch | 27% → 70% |\n| bd-thgk | internal/compact | 17% → 70% |\n| bd-tvu3 | internal/beads | 48% → 70% |\n\n### Code Quality (3)\n| ID | Task |\n|----|------|\n| bd-qioh | FatalError pattern standardization |\n| bd-rgyd | Split queries.go (1704 lines → 5 files) |\n| bd-u2sc.3 | Split cmd/bd files (sync/init/show/compact) |\n\n### Features (4)\n| ID | Task |\n|----|------|\n| bd-au0.5 | Search date/priority filters |\n| bd-ykd9 | Doctor --fix auto-repair |\n| bd-g4b4 | Close hooks system |\n| bd-likt | Gate daemon RPC |\n\n### Polish (2)\n| ID | Task |\n|----|------|\n| bd-4qfb | Doctor output formatting |\n| bd-u2sc.4 | slog structured logging |\n\n## Issue Details\n\nAll issues have been enhanced with:\n- Concrete file lists to modify\n- Code snippets and patterns\n- Success criteria\n- Test commands\n\nRun `bd show \u003cid\u003e` for full details on any issue.\n\n## Execution Notes\n\n- All issues are independent (no blockers between them)\n- bd-phtv (P1 bug) should get priority - affects bd pin functionality\n- Test coverage tasks are straightforward but time-consuming\n- File split tasks (bd-rgyd, bd-u2sc.3) are mechanical but important\n\n## Completed During Prep\n\n- bd-ucgz (P2 bug) - Fixed inline: external deps orphan check (commit f2db0a1d)\n- Moved 5 gastown issues out of beads backlog (gt-dh65, gt-ng6g, gt-fqcz, gt-gswn, gt-rw2z)\n- Deferred 4 premature/post-1.0 issues\n- Closed bd-udsi epic (core implementation 
complete)","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-23T12:43:58.427835-08:00","updated_at":"2025-12-23T20:26:50.629471-08:00","closed_at":"2025-12-23T20:26:50.629471-08:00","close_reason":"All 13 swarm issues completed by polecats"} -{"id":"bd-j0tr","title":"Phase 1.3: Basic TOON read/write operations","description":"Add basic TOON read/write operations to bdt executable. Implement create, list, and show commands that use the internal/toon package for encoding/decoding to TOON format.\n\n## Subtasks\n1. Implement bdt create command - Create issues and serialize to TOON format\n2. Implement bdt list command - Read issues.toon and display all issues\n3. Implement bdt show command - Display single issue by ID\n4. Add file I/O operations for issues.toon\n5. Integrate internal/toon package (EncodeTOON, DecodeJSON)\n6. Write tests for create, list, show operations\n\n## Files to Create/Modify\n- cmd/bdt/create.go - Create command\n- cmd/bdt/list.go - List command \n- cmd/bdt/show.go - Show command\n- cmd/bdt/storage.go - File I/O helper\n\n## Success Criteria\n- bdt create \"Issue title\" creates and saves to issues.toon\n- bdt list displays all issues in human-readable format\n- bdt list --json shows JSON output\n- bdt show \u003cid\u003e displays single issue\n- Issues round-trip correctly: create → list → show\n- All tests passing with \u003e80% coverage","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T12:59:54.270296918-07:00","updated_at":"2025-12-19T13:09:00.196045685-07:00","closed_at":"2025-12-19T13:09:00.196045685-07:00"} -{"id":"bd-j3il","title":"Add bd reset command for clean slate restart","description":"Implement a command to reset beads to a clean starting state.\n\n**Context:** GitHub issue #479 - users sometimes get beads into an invalid state after updates, and there's no clean way to start fresh. 
The git backup/restore mechanism that protects against accidental deletion also makes it hard to intentionally reset.\n\n**Current workaround** (from maphew):\n```bash\nbd daemons killall\ngit rm .beads/*.jsonl\ngit commit -m 'remove old issues'\nrm .beads/*\nbd init\nbd onboard\n```\n\n**Desired:** A proper `bd reset` command that handles this cleanly and safely.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-13T08:41:34.956552+11:00","updated_at":"2025-12-13T08:43:49.970591+11:00","closed_at":"2025-12-13T08:43:49.970591+11:00"} -{"id":"bd-j6lr","title":"GH#402: Add --parent flag documentation to bd onboard","description":"bd onboard output is missing --parent flag for epic subtasks. Agents guess wrong syntax (--deps parent:). See GitHub issue #402.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T01:03:56.594829-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-jgxi","title":"Auto-migrate database on CLI version bump","description":"When CLI is upgraded (e.g., 0.24.0 → 0.24.1), database version becomes stale. Add auto-migration in PersistentPreRun or daemon startup. Check dbVersion != CLIVersion and run bd migrate automatically. 
Fixes recurring UX issue where bd doctor shows version mismatch after every CLI upgrade.","status":"closed","priority":0,"issue_type":"feature","created_at":"2025-11-21T23:16:09.004619-08:00","updated_at":"2025-12-17T23:13:40.535453-08:00","closed_at":"2025-12-17T17:15:43.605762-08:00","dependencies":[{"issue_id":"bd-jgxi","depends_on_id":"bd-tbz3","type":"parent-child","created_at":"2025-11-21T23:16:09.005513-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-jke6","title":"Add covering index (label, issue_id) for label queries","description":"GetIssuesByLabel joins labels table but requires table lookup after using idx_labels_label.\n\n**Query (labels.go:165):**\n```sql\nSELECT ... FROM issues i\nJOIN labels l ON i.id = l.issue_id\nWHERE l.label = ?\n```\n\n**Problem:** Current idx_labels_label index doesn't cover issue_id, requiring row lookup.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_labels_label_issue ON labels(label, issue_id);\n```\n\nThis is a covering index - query can be satisfied entirely from the index without touching the labels table rows.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T22:58:51.485354-08:00","updated_at":"2025-12-22T23:15:13.839904-08:00","closed_at":"2025-12-22T23:15:13.839904-08:00","close_reason":"Implemented in migration 026_additional_indexes.go","dependencies":[{"issue_id":"bd-jke6","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:51.485984-08:00","created_by":"daemon"}]} -{"id":"bd-jv4w","title":"Phase 1.2: Separate bdt executable - Initial structure","description":"Create minimal bdt command structure completely separate from bd. Must not share code, config, or database.\n\n## Subtasks\n1. Create cmd/bdt/ directory with main.go\n2. Implement bdt version, help, and status commands\n3. Configure separate database location: $HOME/.bdt/ (not $HOME/.beads/)\n4. Create separate issues file: issues.toon (not issues.jsonl)\n5. 
Update build system:\n - Makefile: Add bdt target\n - .goreleaser.yml: Add bdt binary config\n\n## Files to Create\n- cmd/bdt/main.go - Entry point\n- cmd/bdt/version.go - Version handling\n- cmd/bdt/help.go - Help text (separate from bd)\n\n## Success Criteria\n- `make build` produces both `bd` and `bdt` executables\n- `bdt version` shows distinct version output from bd\n- `bdt --help` shows distinct help text\n- bdt uses $HOME/.bdt/ directory (verify with `bdt info`)\n- bd and bdt completely isolated (no shared imports beyond stdlib)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T11:48:34.866877282-07:00","updated_at":"2025-12-19T12:59:11.389296015-07:00","closed_at":"2025-12-19T12:59:11.389296015-07:00"} -{"id":"bd-jvu","title":"Add bd update --parent flag to change issue parent","description":"Allow changing an issue's parent with bd update --parent \u003cnew-parent-id\u003e. Useful for reorganizing tasks under different epics or moving issues between hierarchies. 
Should update the parent-child dependency relationship.","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-12-17T22:24:07.274485-08:00","updated_at":"2025-12-17T22:34:07.318938-08:00","closed_at":"2025-12-17T22:34:07.318938-08:00"} -{"id":"bd-k88w","title":"Push version bump to GitHub","description":"git push origin main - triggers CI but no release yet.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.762574-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Version bump already pushed","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-kptp","title":"Merge: bd-qioh","description":"branch: polecat/Errata\ntarget: main\nsource_issue: bd-qioh\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:46:08.832073-08:00","updated_at":"2025-12-23T19:12:08.350136-08:00","closed_at":"2025-12-23T19:12:08.350136-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-kpy","title":"Sync race: rebase-based divergence recovery resurrects tombstones","description":"## Problem\nWhen two repos sync simultaneously, tombstones can be resurrected:\n\n1. Repo A deletes issue (creates tombstone), pushes to sync branch\n2. Repo B (with 'closed' status) exports and tries to push\n3. Push fails (non-fast-forward)\n4. fetchAndRebaseInWorktree does git rebase\n5. Git rebase applies B's 'closed' patch on top of A's 'tombstone'\n6. TEXT-level rebase doesn't invoke beads merge driver\n7. 'closed' overwrites 'tombstone' = resurrection\n\n## Root Cause\nCommitToSyncBranch uses git rebase for divergence recovery, but rebase is text-level, not content-level. 
The proper content-level merge in PullFromSyncBranch handles tombstones correctly, but it runs AFTER the problematic push.\n\n## Proposed Fix\nOption 1: Don't push in CommitToSyncBranch - let PullFromSyncBranch handle merge+push\nOption 2: Replace git rebase with content-level merge in fetchAndRebaseInWorktree\nOption 3: Reorder sync steps: Export → Pull/Merge → Commit → Push\n\n## Workaround Applied\nExcluded tombstones from orphan detection warnings (commit 1e97d9cc).\n\nSee also: bd-3852 (Add orphan detection migration)","status":"open","priority":2,"issue_type":"bug","created_at":"2025-12-17T23:29:33.049272-08:00","updated_at":"2025-12-17T23:29:33.049272-08:00"} -{"id":"bd-kqo1","title":"Show pin indicator in bd list output","description":"Add a visual indicator (e.g., pin emoji or [P] marker) for pinned issues in bd list output so users can easily identify them.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-18T23:33:47.402549-08:00","updated_at":"2025-12-21T11:30:27.272768-08:00","closed_at":"2025-12-21T11:30:27.272768-08:00","close_reason":"Already implemented - 📌 emoji shown for pinned issues in bd list output","dependencies":[{"issue_id":"bd-kqo1","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.771791-08:00","created_by":"daemon"},{"issue_id":"bd-kqo1","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.985271-08:00","created_by":"daemon"}]} -{"id":"bd-kqw0","title":"Update local installation","description":"Run install script or brew upgrade to get new version locally: curl -fsSL .../install.sh | bash","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066452-08:00","updated_at":"2025-12-21T13:53:49.656073-08:00","deleted_at":"2025-12-21T13:53:49.656073-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task","wisp":true} -{"id":"bd-kwjh","title":"Wisp storage: ephemeral molecule 
tracking","description":"Implement ephemeral molecule storage for patrol cycles.\n\n## Architecture\n\nWisps are ephemeral molecules stored in `.beads-wisps/` (gitignored).\nWhen squashed, they create digests in permanent `.beads/`.\n\n**Storage is per-rig, not per-role**: Witness and Refinery share mayor/rig's \n`.beads-wisps/` since they execute from that context.\n\n## Design Doc\nSee: gastown/mayor/rig/docs/wisp-architecture.md\n\n## Key Requirements\n\n1. **Ephemeral storage**: `.beads-wisps/` directory, gitignored\n2. **Bond with --wisp**: Creates in wisps instead of permanent\n3. **Squash**: Deletes wisp, creates digest in permanent beads\n4. **Burn**: Deletes wisp, no digest\n5. **Wisp commands**: `bd wisp list`, `bd wisp gc`\n\n## Storage Locations\n\n| Context | Location |\n|---------|----------|\n| Rig (Deacon, Witness, Refinery) | mayor/rig/.beads-wisps/ |\n| Polecat (if used) | polecats/\u003cname\u003e/.beads-wisps/ |\n\n## Children (to be created)\n- bd mol bond --wisp flag\n- .beads-wisps/ storage backend\n- bd mol squash handles wisp to permanent\n- bd wisp list command\n- bd wisp gc command (orphan cleanup)","status":"closed","priority":1,"issue_type":"epic","assignee":"beads/dave","created_at":"2025-12-21T23:34:47.188806-08:00","updated_at":"2025-12-22T01:12:53.965768-08:00","closed_at":"2025-12-22T01:12:53.965768-08:00","close_reason":"All 6 subtasks completed: wisp storage backend, mol bond --wisp flag, mol squash wispβ†’digest, wisp list, wisp gc, and mol burn commands implemented"} -{"id":"bd-kwjh.1","title":".beads-ephemeral/ storage backend","description":"Implement ephemeral storage layer for wisps.\n\n## Requirements\n- New storage location: .beads-ephemeral/issues.jsonl (sibling to .beads/)\n- Gitignored by default (add to .beads/.gitignore)\n- Same JSONL format as regular beads\n- Config option: ephemeral.directory (relative path)\n- ephemeral.enabled config flag\n\n## Storage Behavior\n- Ephemeral issues have `ephemeral: true` field\n- 
No sync to remote (local only)\n- No daemon tracking needed (transient)\n\n## Implementation\n- Add EphemeralStore in storage package\n- Initialize on demand when --ephemeral flag used\n- Share Issue struct, just different storage path","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-22T00:06:46.706026-08:00","updated_at":"2025-12-22T00:08:26.009875-08:00","dependencies":[{"issue_id":"bd-kwjh.1","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:06:46.706461-08:00","created_by":"daemon"}],"deleted_at":"2025-12-22T00:08:26.009875-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-kwjh.2","title":".beads-ephemeral/ storage backend","description":"Implement ephemeral storage layer for wisps.\n\n## Requirements\n- New storage location: .beads-ephemeral/issues.jsonl (sibling to .beads/)\n- Gitignored by default (add to .beads/.gitignore)\n- Same JSONL format as regular beads\n- Config option: ephemeral.directory (relative path)\n- ephemeral.enabled config flag\n\n## Storage Behavior\n- Ephemeral issues have ephemeral: true field\n- No sync to remote (local only)\n- No daemon tracking needed (transient)\n\n## Implementation\n- Add EphemeralStore in storage package\n- Initialize on demand when --ephemeral flag used\n- Share Issue struct, just different storage path","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T00:06:56.248345-08:00","updated_at":"2025-12-22T00:13:51.281427-08:00","closed_at":"2025-12-22T00:13:51.281427-08:00","close_reason":"Implemented ephemeral storage backend: FindEphemeralDir, FindEphemeralDatabasePath, NewEphemeralStorage, EnsureEphemeralGitignore, IsEphemeralDatabase with tests","dependencies":[{"issue_id":"bd-kwjh.2","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:06:56.248725-08:00","created_by":"daemon"}]} -{"id":"bd-kwjh.3","title":"bd mol bond --ephemeral flag","description":"Add --ephemeral flag 
to bd mol bond command.\n\n## Behavior\n- `bd mol bond \u003cproto\u003e --ephemeral` creates molecule in .beads-ephemeral/\n- Without flag, creates in .beads/ (current behavior)\n- Ephemeral molecules have `ephemeral: true` in their issue record\n\n## Implementation\n- Add --ephemeral bool flag to mol bond command\n- Route to EphemeralStore when flag set\n- Set ephemeral:true on created issue\n\n## Testing\n- Test mol bond creates in correct location\n- Test ephemeral flag is persisted\n- Test regular mol bond still works","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T00:07:26.591728-08:00","updated_at":"2025-12-22T00:17:42.50719-08:00","closed_at":"2025-12-22T00:17:42.50719-08:00","close_reason":"Implemented --ephemeral flag for mol bond: routes spawned molecules to .beads-ephemeral/, updates gitignore, updated help text","dependencies":[{"issue_id":"bd-kwjh.3","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:26.592102-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.3","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:26.592866-08:00","created_by":"daemon"}]} -{"id":"bd-kwjh.4","title":"bd mol squash handles wisp→digest","description":"Update bd mol squash to handle ephemeral molecules.\n\n## Behavior for Ephemeral Molecules\n1. Delete wisp from .beads-ephemeral/\n2. Create digest issue in .beads/ (permanent)\n3. 
Digest has type:digest and squashed_from field\n\n## Digest Format\n```json\n{\n \"id\": \"\u003cparent\u003e.digest-NNN\",\n \"type\": \"digest\",\n \"title\": \"\u003cproto\u003e cycle @ \u003ctimestamp\u003e\",\n \"description\": \"\u003csummary from --summary flag\u003e\",\n \"parent\": \"\u003cproto-id\u003e\",\n \"squashed_from\": \"\u003cwisp-id\u003e\"\n}\n```\n\n## Implementation\n- Detect if molecule is ephemeral (check storage location or flag)\n- Delete from ephemeral store\n- Create digest in permanent store\n- Return digest ID\n\n## Testing\n- Test squash of ephemeral mol creates digest\n- Test wisp is deleted after squash\n- Test digest is queryable","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T00:07:27.685116-08:00","updated_at":"2025-12-22T00:53:55.74082-08:00","closed_at":"2025-12-22T00:53:55.74082-08:00","close_reason":"Implemented cross-store wisp→digest squash with tests","dependencies":[{"issue_id":"bd-kwjh.4","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:27.686798-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.4","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:27.687773-08:00","created_by":"daemon"}]} -{"id":"bd-kwjh.5","title":"bd wisp list command","description":"Add bd wisp list command to show ephemeral molecules.\n\n## Usage\n```bash\nbd wisp list # List all wisps in current context\nbd wisp list --json # JSON output\nbd wisp list --all # Include orphaned wisps\n```\n\n## Output\n- Shows in-progress ephemeral molecules\n- Columns: ID, Title, Started, Last Update, Status\n- Warns about orphaned wisps (old updated_at)\n\n## Implementation\n- New 'wisp' command group\n- Read from .beads-ephemeral/issues.jsonl\n- Filter to ephemeral:true 
issues","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T00:07:29.514936-08:00","updated_at":"2025-12-22T01:09:03.514376-08:00","closed_at":"2025-12-22T01:09:03.514376-08:00","close_reason":"Implemented bd wisp list command with --all and --json flags, stale detection, and human-readable output","dependencies":[{"issue_id":"bd-kwjh.5","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:29.515301-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.5","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:29.516134-08:00","created_by":"daemon"}]} -{"id":"bd-kwjh.6","title":"bd wisp gc command","description":"Add bd wisp gc command to garbage collect orphaned wisps.\n\n## Usage\n```bash\nbd wisp gc # Clean orphaned wisps\nbd wisp gc --dry-run # Show what would be cleaned\nbd wisp gc --age 1h # Custom orphan threshold (default: 1h)\n```\n\n## Orphan Detection\nA wisp is orphaned if:\n- process_id field exists AND process is dead\n- OR updated_at older than threshold AND not complete\n- AND molecule status is not complete/abandoned\n\n## Behavior\n- Delete orphaned wisps (no digest created)\n- Report count of cleaned wisps\n- --dry-run shows candidates without deleting\n\n## Implementation\n- Add 'gc' subcommand to wisp group\n- Process detection via os.FindProcess or /proc\n- Configurable age threshold","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T00:07:30.861155-08:00","updated_at":"2025-12-22T01:12:37.283991-08:00","closed_at":"2025-12-22T01:12:37.283991-08:00","close_reason":"Implemented bd wisp gc command with --dry-run, --age, and --all flags for garbage collecting stale 
wisps","dependencies":[{"issue_id":"bd-kwjh.6","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:30.862681-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.6","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:30.863721-08:00","created_by":"daemon"}]} -{"id":"bd-kwjh.7","title":"bd mol burn deletes ephemeral without digest","description":"Update bd mol burn to handle ephemeral molecules.\n\n## Behavior for Ephemeral Molecules\n- Delete wisp from .beads-ephemeral/\n- NO digest created (unlike squash)\n- Used for abandoned/crashed cycles\n\n## Difference from Squash\n| Command | Ephemeral Behavior |\n|---------|-------------------|\n| squash | Delete wisp, create digest |\n| burn | Delete wisp, no trace |\n\n## Implementation\n- Detect if molecule is ephemeral\n- Delete from ephemeral store\n- Skip digest creation","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T00:07:32.020144-08:00","updated_at":"2025-12-22T01:11:05.487605-08:00","closed_at":"2025-12-22T01:11:05.487605-08:00","close_reason":"Implemented bd mol burn command with --dry-run and --force flags for deleting wisps without creating digests","dependencies":[{"issue_id":"bd-kwjh.7","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:32.022117-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.7","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:32.023217-08:00","created_by":"daemon"}]} -{"id":"bd-kwro","title":"Beads Messaging \u0026 Knowledge Graph (v0.30.2)","description":"Add messaging semantics and extended graph links to Beads, enabling it to serve as\nthe universal substrate for knowledge work - issues, messages, documents, and threads\nas nodes in a queryable graph.\n\n## Motivation\n\nGas Town (GGT) needs inter-agent communication. 
Rather than a separate mail system,\ncollapse messaging into Beads - one system, one sync, one query interface, all in git.\n\nThis also positions Beads as a foundation for:\n- Company-wide issue tracking (like Notion)\n- Threaded conversations (like Reddit/Slack)\n- Knowledge graphs with loose associations\n- Arbitrary workflow UIs built on top\n\n## New Issue Type\n\n**message** - ephemeral communication between workers\n- sender: who sent it\n- assignee: recipient\n- priority: P0 (urgent) to P4 (routine)\n- status: open (unread) -\u003e closed (read)\n- ephemeral: true = can be bulk-deleted after swarm\n\n## New Graph Links\n\n**replies_to** - conversation threading\n- Messages reply to messages\n- Enables Reddit-style nested threads\n- Different from parent_id (not hierarchy, it's conversation flow)\n\n**relates_to** - loose see also associations\n- Bidirectional knowledge graph edges\n- Not blocking, not hierarchical, just related\n- Enables discovery and traversal\n\n**duplicates** - deduplication at scale\n- Mark issue B as duplicate of canonical issue A\n- Close B, link to A\n- Essential for large issue databases\n\n**supersedes** - version chains\n- Design Doc v2 supersedes Design Doc v1\n- Track evolution of artifacts\n\n## New Fields (optional, any issue type)\n\n- sender (string) - who created this (for messages)\n- ephemeral (boolean) - can be bulk-deleted when closed\n\n## New Commands\n\nMessaging:\n- bd mail send \u003crecipient\u003e -s Subject -m Body\n- bd mail inbox (list open messages for me)\n- bd mail read \u003cid\u003e (show message content)\n- bd mail ack \u003cid\u003e (mark as read/close)\n- bd mail reply \u003cid\u003e -m Response (reply to thread)\n\nGraph links:\n- bd relate \u003cid1\u003e \u003cid2\u003e (create relates_to link)\n- bd duplicate \u003cid\u003e --of \u003ccanonical\u003e (mark as duplicate)\n- bd supersede \u003cid\u003e --with \u003cnew\u003e (mark superseded)\n\nCleanup:\n- bd cleanup --ephemeral (delete closed 
ephemeral issues)\n\n## Identity Configuration\n\nWorkers need identity for sender field:\n- BEADS_IDENTITY env var\n- Or .beads/config.json: identity field\n\n## Hooks (for GGT integration)\n\nBeads as platform - extensible without knowing about GGT.\nHook files in .beads/hooks/:\n- on_create (runs after bd create)\n- on_update (runs after bd update)\n- on_close (runs after bd close)\n- on_message (runs after bd mail send)\n\nGGT registers hooks to notify daemons of new messages.\n\n## Schema Changes (Migration Required)\n\nAdd to issue schema:\n- type: message (new valid type)\n- sender: string (optional)\n- ephemeral: boolean (optional)\n- replies_to: string (issue ID, optional)\n- relates_to: []string (issue IDs, optional)\n- duplicates: string (canonical issue ID, optional)\n- superseded_by: string (new issue ID, optional)\n\nMigration adds fields as optional - existing beads unchanged.\n\n## Success Criteria\n\n1. bd mail send/inbox/read/ack/reply work end-to-end\n2. replies_to creates proper thread structure\n3. relates_to, duplicates, supersedes links queryable\n4. Hooks fire on create/update/close/message\n5. Identity configurable via env or config\n6. Migration preserves all existing data\n7. 
All new features have tests","status":"tombstone","priority":0,"issue_type":"epic","created_at":"2025-12-16T03:00:53.912223-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"epic"} -{"id":"bd-kwro.1","title":"Schema: Add message type and new fields","description":"Add to internal/storage/sqlite/schema.go and models:\n\nNew issue_type value:\n- message\n\nNew optional fields on Issue struct:\n- Sender string (who sent this)\n- Ephemeral bool (can be bulk-deleted)\n- RepliesTo string (issue ID for threading)\n- RelatesTo []string (issue IDs for knowledge graph)\n- Duplicates string (canonical issue ID)\n- SupersededBy string (replacement issue ID)\n\nUpdate:\n- internal/storage/sqlite/schema.go - add columns\n- internal/models/issue.go - add fields to struct\n- internal/storage/sqlite/sqlite.go - CRUD operations\n- Create migration from v0.30.1\n\nEnsure backward compatibility - all new fields optional.","status":"tombstone","priority":0,"issue_type":"task","created_at":"2025-12-16T03:01:19.777604-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.10","title":"Tests for messaging and graph links","description":"Comprehensive test coverage for all new features.\n\nTest files:\n- cmd/bd/mail_test.go - mail command tests\n- internal/storage/sqlite/graph_links_test.go - graph link tests\n- internal/hooks/hooks_test.go - hook execution tests\n\nTest cases:\n- Mail send/inbox/read/ack lifecycle\n- Thread creation and traversal (replies_to)\n- Bidirectional relates_to\n- Duplicate marking and queries\n- Supersedes chains\n- Ephemeral cleanup\n- Identity resolution priority\n- Hook execution (mock hooks)\n- Schema migration preserves data\n\nTarget: \u003e80% coverage on new 
code","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T03:02:34.050136-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.11","title":"Documentation for messaging and graph links","description":"Document all new features.\n\nFiles to update:\n- README.md - brief mention of messaging capability\n- AGENTS.md - update for AI agents using bd mail\n- docs/messaging.md (new) - full messaging reference\n- docs/graph-links.md (new) - graph link reference\n- CHANGELOG.md - v0.30.2 release notes\n\nTopics to cover:\n- Mail commands with examples\n- Graph link types and use cases\n- Identity configuration\n- Hooks setup for notifications\n- Migration notes","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T03:02:39.548518-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.2","title":"Graph Link: replies_to for conversation threading","description":"Implement replies_to link type for message threading.\n\nNew command:\n- bd mail reply \u003cid\u003e -m 'Response' creates a message with replies_to set\n\nQuery support:\n- bd show \u003cid\u003e --thread shows full conversation thread\n- Thread traversal in storage layer\n\nStorage:\n- replies_to column in issues table\n- Index for efficient thread queries\n\nThis enables Reddit-style nested threads where messages reply to other messages.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:01:25.292728-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.3","title":"Graph Link: relates_to for 
knowledge graph","description":"Implement relates_to link type for loose associations.\n\nNew command:\n- bd relate \u003cid1\u003e \u003cid2\u003e - creates bidirectional relates_to link\n\nQuery support:\n- bd show \u003cid\u003e --related shows related issues\n- bd list --related-to \u003cid\u003e\n\nStorage:\n- relates_to stored as JSON array of issue IDs\n- Consider: separate junction table for efficiency at scale?\n\nThis enables 'see also' connections without blocking or hierarchy.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:01:30.793115-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.4","title":"Graph Link: duplicates for deduplication","description":"Implement duplicates link type for marking issues as duplicates.\n\nNew command:\n- bd duplicate \u003cid\u003e --of \u003ccanonical\u003e - marks id as duplicate of canonical\n- Auto-closes the duplicate issue\n\nQuery support:\n- bd show \u003cid\u003e shows 'Duplicate of: \u003ccanonical\u003e'\n- bd list --duplicates shows all duplicate pairs\n\nStorage:\n- duplicates column pointing to canonical issue ID\n\nEssential for large issue databases with many similar reports.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:01:36.257223-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.5","title":"Graph Link: supersedes for version chains","description":"Implement supersedes link type for version tracking.\n\nNew command:\n- bd supersede \u003cid\u003e --with \u003cnew\u003e - marks id as superseded by new\n- Auto-closes the superseded issue\n\nQuery support:\n- bd show \u003cid\u003e shows 'Superseded by: \u003cnew\u003e'\n- bd show 
\u003cnew\u003e shows 'Supersedes: \u003cid\u003e'\n- bd list --superseded shows version chains\n\nStorage:\n- superseded_by column pointing to replacement issue\n\nUseful for design docs, specs, and evolving artifacts.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:01:41.749294-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.6","title":"Mail Commands: bd mail send/inbox/read/ack","description":"Implement core mail commands in cmd/bd/mail.go\n\nCommands:\n- bd mail send \u003crecipient\u003e -s 'Subject' -m 'Body' [--urgent]\n - Creates issue with type=message, sender=identity, assignee=recipient\n - --urgent sets priority=0\n \n- bd mail inbox [--from \u003csender\u003e] [--priority \u003cn\u003e]\n - Lists open messages where assignee=my identity\n - Sorted by priority, then date\n \n- bd mail read \u003cid\u003e\n - Shows full message content (subject, body, sender, timestamp)\n - Does NOT close (separate from ack)\n \n- bd mail ack \u003cid\u003e\n - Marks message as read by closing it\n - Can ack multiple: bd mail ack \u003cid1\u003e \u003cid2\u003e ...\n\nRequires: Identity configuration (bd-kwro.7)","status":"tombstone","priority":0,"issue_type":"task","created_at":"2025-12-16T03:02:12.103755-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.7","title":"Identity Configuration","description":"Implement identity system for sender field.\n\nConfiguration sources (in priority order):\n1. --identity flag on commands\n2. BEADS_IDENTITY environment variable\n3. .beads/config.json: {\"identity\": \"worker-name\"}\n4. 
Default: git user.name or hostname\n\nNew config file support:\n- .beads/config.json for per-repo settings\n- identity field for messaging\n\nHelper function:\n- GetIdentity() string - resolves identity from sources\n\nUpdate bd mail send to use GetIdentity() for sender field.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:02:17.603608-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.8","title":"Hooks System","description":"Implement hook system for extensibility.\n\nHook directory: .beads/hooks/\nHook files (executable scripts):\n- on_create - runs after bd create\n- on_update - runs after bd update \n- on_close - runs after bd close\n- on_message - runs after bd mail send\n\nHook invocation:\n- Pass issue ID as first argument\n- Pass event type as second argument\n- Pass JSON issue data on stdin\n- Run asynchronously (dont block command)\n\nExample hook (GGT notification):\n #!/bin/bash\n gt notify --event=$2 --issue=$1\n\nThis allows GGT to register notification handlers without Beads knowing about GGT.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:02:23.086393-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kwro.9","title":"Cleanup: --ephemeral flag","description":"Update bd cleanup to handle ephemeral issues.\n\nNew flag:\n- bd cleanup --ephemeral - deletes all CLOSED issues with ephemeral=true\n\nBehavior:\n- Only deletes if status=closed AND ephemeral=true\n- Respects --dry-run flag\n- Reports count of deleted ephemeral issues\n\nThis allows swarm cleanup to remove transient messages without affecting permanent 
issues.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T03:02:28.563871-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-kyll","title":"Add daemon-side delete operation tests","description":"Follow-up epic for PR #626: Add comprehensive test coverage for delete operations at the daemon/RPC layer. PR #626 successfully added storage layer tests but identified gaps in daemon-side delete operations and RPC integration testing.\n\n## Scope\nTests needed for:\n1. deleteViaDaemon (cmd/bd/delete.go:21) - RPC client-side deletion command\n2. Daemon RPC delete handler - Server-side deletion via daemon\n3. createTombstone wrapper (cmd/bd/delete.go:335) - Tombstone creation wrapper\n4. deleteIssue wrapper (cmd/bd/delete.go:349) - Direct deletion wrapper\n\n## Coverage targets\n- Delete via RPC daemon (both success and error paths)\n- Cascade deletion through daemon\n- Force deletion through daemon\n- Dry-run mode validation\n- Tombstone creation and verification\n- Error handling and edge cases","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-18T13:08:26.039663309-07:00","updated_at":"2025-12-18T13:08:26.039663309-07:00"} -{"id":"bd-kyo","title":"Run tests and linting","description":"Run the full test suite and linter:\n\n```bash\nTMPDIR=/tmp go test -short ./...\ngolangci-lint run ./...\n```\n\nFix any failures. 
Linting warnings acceptable (see LINTING.md).","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:59.290588-08:00","updated_at":"2025-12-18T22:44:36.570262-08:00","closed_at":"2025-12-18T22:44:36.570262-08:00","dependencies":[{"issue_id":"bd-kyo","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.370234-08:00","created_by":"daemon"},{"issue_id":"bd-kyo","depends_on_id":"bd-8hy","type":"blocks","created_at":"2025-12-18T22:43:20.570742-08:00","created_by":"daemon"}]} -{"id":"bd-kzda","title":"Implement conditional bond type for mol bond","description":"The mol bond command accepts 'conditional' as a bond type but doesn't implement any conditional-specific behavior. It currently behaves identically to 'parallel'.\n\n**Expected behavior:**\nConditional bonds should mean 'B runs only if A fails' per the help text (mol.go:318).\n\n**Implementation needed:**\n- Add failure-condition dependency handling\n- Possibly new dependency type or status-based blocking\n- Update bondProtoProto, bondProtoMol, bondMolMol to handle conditional\n\n**Alternative:**\nRemove 'conditional' from valid bond types until implemented.\n\nThis is new functionality, not a regression.","status":"closed","priority":3,"issue_type":"feature","assignee":"beads/toast","created_at":"2025-12-21T10:23:01.966367-08:00","updated_at":"2025-12-23T01:33:25.734264-08:00","closed_at":"2025-12-23T01:33:25.734264-08:00","close_reason":"Merged to main"} -{"id":"bd-l13p","title":"Add GetWorkerStatus RPC endpoint","description":"New RPC endpoint to get all workers and their current molecule/step in one call. Returns: assignee, moleculeID, moleculeTitle, currentStep, totalSteps, stepTitle, lastActivity, status. 
Enables activity feed TUI to show worker state without multiple round trips.","status":"closed","priority":2,"issue_type":"feature","assignee":"beads/nux","created_at":"2025-12-23T16:26:36.248654-08:00","updated_at":"2025-12-23T16:40:59.772138-08:00","closed_at":"2025-12-23T16:40:59.772138-08:00","close_reason":"Implemented GetWorkerStatus RPC endpoint with tests"} -{"id":"bd-l7y3","title":"bd mol bond --pour should set Wisp=false","description":"In mol_bond.go bondProtoMol(), opts.Wisp is hardcoded to true (line 392). This ignores the --pour flag. When user specifies --pour to make an issue persistent, the Wisp field should be false so the issue is not marked for bulk deletion.\n\nCurrent behavior:\n- --pour flag correctly selects regular storage (not wisp storage)\n- But opts.Wisp=true means spawned issues are still marked for cleanup when closed\n\nExpected behavior:\n- --pour should set Wisp=false so persistent issues are not auto-cleaned\n\nComparison with mol_spawn.go (line 204):\n wisp := !pour // Correctly respects --pour flag\n result, err := spawnMolecule(ctx, store, subgraph, vars, assignee, actor, wisp)\n\nFix: Pass pour flag to bondProtoMol and set opts.Wisp = !pour","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-23T15:15:00.562346-08:00","updated_at":"2025-12-23T15:25:22.53144-08:00","closed_at":"2025-12-23T15:25:22.53144-08:00","close_reason":"Fixed - pour parameter now passed through bondProtoMol chain"} -{"id":"bd-ldb0","title":"Rename ephemeral β†’ wisp throughout codebase","description":"## The Change\n\nRename 'ephemeral' to 'wisp' throughout the beads codebase.\n\n## Why\n\n**Ephemeral** is:\n- 4 syllables (too long)\n- Greek/academic (doesn't match bond/burn/squash)\n- Overused in tech (K8s, networking, storage)\n- Passive/descriptive\n\n**Wisp** is:\n- 1 syllable (matches bond/burn/squash)\n- Evocative - you can SEE a wisp\n- Steam engine metaphor - Gas Town is engines, steam wisps rise and dissipate\n- 
Will-o'-the-wisp - transient spirits that guide then vanish\n- Unique - nobody else uses it\n\n## The Steam Engine Metaphor\n\n```\nEngine does work β†’ generates steam\nSteam wisps rise β†’ execution trace\nSteam condenses β†’ digest (distillate)\nSteam dissipates β†’ cleaned up (burned)\n```\n\n## Full Vocabulary\n\n| Term | Meaning |\n|------|---------|\n| bond | Attach proto to work (creates wisps) |\n| wisp | Temporary execution step |\n| squash | Condense wisps into digest |\n| burn | Destroy wisps without record |\n| digest | Permanent condensed record |\n\n## Changes Required\n\n### Code\n- `Ephemeral bool` β†’ `Wisp bool` in types/issue.go\n- `--ephemeral` flag β†’ remove (wisp is default)\n- `--persistent` flag β†’ keep as opt-out\n- `bd cleanup --ephemeral` β†’ `bd cleanup --wisps`\n- Update all references in mol_*.go files\n\n### Docs\n- Update all documentation\n- Update CLAUDE.md examples\n- Update CLI help text\n\n### Database Migration\n- Add migration to rename field (or keep internal name, just change API)\n\n## Example Usage After\n\n```bash\nbd mol bond mol-polecat-work # Creates wisps (default)\nbd mol bond mol-xxx --persistent # Creates permanent issues\nbd mol squash bd-xxx # Condenses wisps β†’ digest\nbd cleanup --wisps # Clean old wisps\nbd list --wisps # Show wisp issues\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T14:44:41.576068-08:00","updated_at":"2025-12-22T00:32:31.153738-08:00","closed_at":"2025-12-22T00:32:31.153738-08:00","close_reason":"Renamed ephemeral β†’ wisp throughout codebase"} -{"id":"bd-lfak","title":"bd preflight: PR readiness checks for contributors","description":"## Vision\n\nEncode project-specific institutional knowledge into executable checks. 
CONTRIBUTING.md is documentation that's read once and forgotten; `bd preflight` is documentation that runs at exactly the right moment.\n\n## Problem Statement\n\nContributors face a \"last mile\" problem - they do the work but stumble on project-specific gotchas at PR time:\n- Nix vendorHash gets stale when go.sum changes\n- Beads artifacts leak into PRs (see bd-umbf for namespace solution)\n- Version mismatches between version.go and default.nix\n- Tests/lint not run locally before pushing\n- Other project-specific checks that only surface when CI fails\n\nThese are too obscure to remember, exist in docs nobody reads end-to-end, and waste CI round-trips.\n\n## Why beads?\n\nBeads already has a foothold in the contributor workflow. It knows:\n- Git state (staged files, branch, dirty status)\n- Project structure\n- The specific issue being worked on\n- Project-specific configuration\n\n## Proposed Interface\n\n### Tier 1: Checklist Mode (v1)\n\n $ bd preflight\n PR Readiness Checklist:\n\n [ ] Tests pass: go test -short ./...\n [ ] Lint passes: golangci-lint run ./...\n [ ] No beads pollution: check .beads/issues.jsonl diff\n [ ] Nix hash current: go.sum unchanged or vendorHash updated\n [ ] Version sync: version.go matches default.nix\n\n Run 'bd preflight --check' to validate automatically.\n\n### Tier 2: Check Mode (v2)\n\n $ bd preflight --check\n βœ“ Tests pass\n βœ“ Lint passes\n ⚠ Beads pollution: 3 issues in diff - are these project issues or personal?\n βœ— Nix hash stale: go.sum changed, vendorHash needs update\n Fix: sha256-KRR6dXzsSw8OmEHGBEVDBOoIgfoZ2p0541T9ayjGHlI=\n βœ“ Version sync\n\n 1 error, 1 warning. Run 'bd preflight --fix' to auto-fix where possible.\n\n### Tier 3: Fix Mode (v3)\n\n $ bd preflight --fix\n βœ“ Updated vendorHash in default.nix\n ⚠ Cannot auto-fix beads pollution - manual review needed\n\n## Checks to Implement\n\n| Check | Description | Auto-fixable |\n|-------|-------------|--------------|\n| tests | Run go test -short ./... 
| No |\n| lint | Run golangci-lint | Partial (gofmt) |\n| beads-pollution | Detect personal issues in diff | No (see bd-umbf) |\n| nix-hash | Detect stale vendorHash | Yes (if nix available) |\n| version-sync | version.go matches default.nix | Yes |\n| no-debug | No TODO/FIXME/console.log | Warn only |\n| clean-stage | No unintended files staged | Warn only |\n\n## Future: Configuration\n\nMake checks configurable per-project via .beads/preflight.yaml:\n\n preflight:\n checks:\n - name: tests\n run: go test -short ./...\n required: true\n - name: no-secrets\n pattern: \"**/*.env\"\n staged: deny\n - name: custom-check\n run: ./scripts/validate.sh\n\nThis lets any project using beads define their own preflight checks.\n\n## Implementation Phases\n\n### Phase 1: Static Checklist\n- Implement bd preflight with hardcoded checklist for beads\n- No execution, just prints what to check\n- Update CONTRIBUTING.md to reference it\n\n### Phase 2: Automated Checks\n- Implement bd preflight --check\n- Run tests, lint, detect stale hashes\n- Clear pass/fail/warn output\n\n### Phase 3: Auto-fix\n- Implement bd preflight --fix\n- Fix vendorHash, version sync\n- Integrate with bd-umbf solution for pollution\n\n### Phase 4: Configuration\n- .beads/preflight.yaml support\n- Make it useful for other projects using beads\n- Plugin/hook system for custom checks\n\n## Dependencies\n\n- bd-umbf: Namespace isolation for beads pollution (blocking for full solution)\n\n## Success Metrics\n\n- Fewer CI failures on first PR push\n- Reduced \"fix nix hash\" commits\n- Contributors report preflight caught issues before CI","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-13T18:01:39.587078-08:00","updated_at":"2025-12-13T18:01:39.587078-08:00","dependencies":[{"issue_id":"bd-lfak","depends_on_id":"bd-umbf","type":"blocks","created_at":"2025-12-13T18:01:46.059901-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-likt","title":"Add daemon RPC support for gate 
commands","description":"Add daemon RPC support for gate commands.\n\n## Current State\nGate commands require --no-daemon flag because they use direct SQLite access:\n- Gate create needs to write await_type, await_id, timeout_ns, waiters fields\n- Gate wait needs to update waiters JSON array\n- Daemon RPC doesnt have methods for these operations\n\n## Implementation\n\n### 1. Add RPC methods to internal/rpc/protocol.go\n\n```go\n// Gate operations\ntype GateCreateArgs struct {\n Title string \\`json:\"title\"\\`\n AwaitType string \\`json:\"await_type\"\\`\n AwaitID string \\`json:\"await_id\"\\`\n Timeout time.Duration \\`json:\"timeout\"\\`\n Waiters []string \\`json:\"waiters\"\\`\n}\n\ntype GateCreateResult struct {\n Issue *types.Issue \\`json:\"issue\"\\`\n}\n\ntype GateListArgs struct {\n All bool \\`json:\"all\"\\` // Include closed gates\n}\n\ntype GateListResult struct {\n Gates []*types.Issue \\`json:\"gates\"\\`\n}\n\ntype GateWaitArgs struct {\n GateID string \\`json:\"gate_id\"\\`\n Waiters []string \\`json:\"waiters\"\\` // Additional waiters to add\n}\n\ntype GateWaitResult struct {\n Gate *types.Issue \\`json:\"gate\"\\`\n AddedCount int \\`json:\"added_count\"\\`\n}\n```\n\n### 2. 
Add handler methods to internal/daemon/rpc_handler.go\n\n```go\nfunc (h *RPCHandler) GateCreate(ctx context.Context, args *rpc.GateCreateArgs) (*rpc.GateCreateResult, error) {\n now := time.Now()\n gate := \u0026types.Issue{\n Title: args.Title,\n IssueType: types.TypeGate,\n Status: types.StatusOpen,\n Priority: 1,\n Assignee: \"deacon/\",\n Wisp: true,\n AwaitType: args.AwaitType,\n AwaitID: args.AwaitID,\n Timeout: args.Timeout,\n Waiters: args.Waiters,\n CreatedAt: now,\n UpdatedAt: now,\n }\n gate.ContentHash = gate.ComputeContentHash()\n \n if err := h.store.CreateIssue(ctx, gate, h.actor); err != nil {\n return nil, err\n }\n \n return \u0026rpc.GateCreateResult{Issue: gate}, nil\n}\n\nfunc (h *RPCHandler) GateList(ctx context.Context, args *rpc.GateListArgs) (*rpc.GateListResult, error) {\n gateType := types.TypeGate\n filter := types.IssueFilter{IssueType: \u0026gateType}\n if !args.All {\n openStatus := types.StatusOpen\n filter.Status = \u0026openStatus\n }\n \n gates, err := h.store.SearchIssues(ctx, \"\", filter)\n if err != nil {\n return nil, err\n }\n \n return \u0026rpc.GateListResult{Gates: gates}, nil\n}\n\nfunc (h *RPCHandler) GateWait(ctx context.Context, args *rpc.GateWaitArgs) (*rpc.GateWaitResult, error) {\n gate, err := h.store.GetIssue(ctx, args.GateID)\n if err != nil {\n return nil, err\n }\n if gate.IssueType != types.TypeGate {\n return nil, fmt.Errorf(\"%s is not a gate\", args.GateID)\n }\n \n // Merge waiters (dedupe)\n waiterSet := make(map[string]bool)\n for _, w := range gate.Waiters {\n waiterSet[w] = true\n }\n added := 0\n for _, w := range args.Waiters {\n if !waiterSet[w] {\n gate.Waiters = append(gate.Waiters, w)\n waiterSet[w] = true\n added++\n }\n }\n \n if added \u003e 0 {\n // Update via store\n updates := map[string]interface{}{\n \"waiters\": gate.Waiters,\n }\n if err := h.store.UpdateIssue(ctx, args.GateID, updates, h.actor); err != nil {\n return nil, err\n }\n }\n \n return \u0026rpc.GateWaitResult{Gate: gate, 
AddedCount: added}, nil\n}\n```\n\n### 3. Register methods in daemon\n\nIn internal/daemon/server.go, register the new methods:\n```go\nrpc.RegisterMethod(\"gate.create\", h.GateCreate)\nrpc.RegisterMethod(\"gate.list\", h.GateList)\nrpc.RegisterMethod(\"gate.wait\", h.GateWait)\n```\n\n### 4. Add client methods to internal/rpc/client.go\n\n```go\nfunc (c *Client) GateCreate(ctx context.Context, args *GateCreateArgs) (*GateCreateResult, error) {\n var result GateCreateResult\n err := c.Call(ctx, \"gate.create\", args, \u0026result)\n return \u0026result, err\n}\n\nfunc (c *Client) GateList(ctx context.Context, args *GateListArgs) (*GateListResult, error) {\n var result GateListResult\n err := c.Call(ctx, \"gate.list\", args, \u0026result)\n return \u0026result, err\n}\n\nfunc (c *Client) GateWait(ctx context.Context, args *GateWaitArgs) (*GateWaitResult, error) {\n var result GateWaitResult\n err := c.Call(ctx, \"gate.wait\", args, \u0026result)\n return \u0026result, err\n}\n```\n\n### 5. Update cmd/bd/gate.go to use daemon\n\n```go\n// In gateCreateCmd Run:\nif daemonClient != nil {\n result, err := daemonClient.GateCreate(ctx, \u0026rpc.GateCreateArgs{\n Title: title,\n AwaitType: awaitType,\n AwaitID: awaitID,\n Timeout: timeout,\n Waiters: notifyAddrs,\n })\n if err != nil {\n FatalError(\"gate create: %v\", err)\n }\n gate = result.Issue\n} else {\n // Existing direct store code\n}\n```\n\n## Files to Modify\n\n1. **internal/rpc/protocol.go** - Add Gate*Args/Result types\n2. **internal/daemon/rpc_handler.go** - Add handler methods\n3. **internal/daemon/server.go** - Register methods\n4. **internal/rpc/client.go** - Add client methods\n5. 
**cmd/bd/gate.go** - Use daemon client when available\n\n## Testing\n\n```bash\n# Start daemon\nbd daemon start\n\n# Test via daemon (should work without --no-daemon)\nbd gate create --await timer:5m --notify beads/dave\nbd gate list\nbd gate wait \u003cid\u003e --notify beads/alice\n\n# Verify daemon handled it\nbd daemons logs . | grep gate\n```\n\n## Success Criteria\n- All gate commands work without --no-daemon\n- Same behavior in daemon vs direct mode\n- Waiters array updates correctly via RPC\n- Tests pass for RPC gate operations","status":"closed","priority":3,"issue_type":"task","assignee":"beads/Gater","created_at":"2025-12-23T12:13:25.778412-08:00","updated_at":"2025-12-23T13:45:58.398604-08:00","closed_at":"2025-12-23T13:45:58.398604-08:00","close_reason":"Implemented daemon RPC support for all gate commands","dependencies":[{"issue_id":"bd-likt","depends_on_id":"bd-udsi","type":"discovered-from","created_at":"2025-12-23T12:13:36.174822-08:00","created_by":"daemon"},{"issue_id":"bd-likt","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.891992-08:00","created_by":"daemon"}]} -{"id":"bd-lk39","title":"Add composite index (issue_id, event_type) on events table","description":"GetCloseReason and GetCloseReasonsForIssues filter by both issue_id and event_type.\n\n**Query (queries.go:355-358):**\n```sql\nSELECT comment FROM events\nWHERE issue_id = ? 
AND event_type = ?\nORDER BY created_at DESC LIMIT 1\n```\n\n**Problem:** Currently uses idx_events_issue but must filter event_type in memory.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_events_issue_type ON events(issue_id, event_type);\n```\n\n**Priority:** Low - events table is typically small relative to issues.","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-22T22:58:54.070587-08:00","updated_at":"2025-12-22T23:15:13.841988-08:00","closed_at":"2025-12-22T23:15:13.841988-08:00","close_reason":"Implemented in migration 026_additional_indexes.go","dependencies":[{"issue_id":"bd-lk39","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:54.071286-08:00","created_by":"daemon"}]} -{"id":"bd-llfl","title":"Improve test coverage for cmd/bd CLI (26.2% β†’ 50%)","description":"The main CLI package (cmd/bd) has only 26.2% test coverage. CLI commands should have at least 50% coverage to ensure reliability.\n\nKey areas with low/no coverage:\n- daemon_autostart.go (multiple 0% functions)\n- compact.go (several 0% functions)\n- Various command handlers\n\nCurrent coverage: 26.2%\nTarget coverage: 50%","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/charlie","created_at":"2025-12-13T20:43:03.123341-08:00","updated_at":"2025-12-23T22:29:35.540687-08:00"} -{"id":"bd-lo4","title":"Test pinned issue","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-18T21:44:49.031385-08:00","updated_at":"2025-12-18T21:47:25.055109-08:00","deleted_at":"2025-12-18T21:47:25.055109-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-lq2o","title":"Rebuild local binary","description":"Build and verify: go build -o bd ./cmd/bd \u0026\u0026 ./bd version","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.759506-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Binary builds 
and runs correctly","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-lsv4","title":"GH#444: Fix inconsistent status naming in_progress vs in-progress","description":"Documentation uses in-progress (hyphen) but code expects in_progress (underscore). Update all docs to use canonical in_progress. See GitHub issue #444.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:14.349425-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-lw0x","title":"Fix bd sync race condition with daemon causing dirty working directory","description":"After bd sync completes with sync.branch mode, subsequent bd commands or daemon file watcher would see a hash mismatch and trigger auto-import, which then schedules re-export, dirtying the working directory.\n\n**Root cause:**\n1. bd sync exports JSONL with NEW content (hash H1)\n2. bd sync updates jsonl_content_hash = H1 in DB\n3. bd sync restores JSONL from HEAD (OLD content, hash H0)\n4. Now: file hash = H0, DB hash = H1 (MISMATCH)\n5. Daemon or next CLI command sees mismatch, imports from OLD JSONL\n6. Import triggers re-export β†’ file is dirty\n\n**Fix:**\nAfter restoreBeadsDirFromBranch(), update jsonl_content_hash to match the restored file's hash. 
This ensures daemon and CLI see file hash = DB hash β†’ no spurious import/export cycle.\n\nRelated: bd-c83r (multiple daemon prevention)","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-13T06:42:17.130839-08:00","updated_at":"2025-12-13T06:43:33.329042-08:00","closed_at":"2025-12-13T06:43:33.329042-08:00"} -{"id":"bd-lxzx","title":"Add close_reason to JSONL export format documentation","description":"PR #551 now persists close_reason to the database, but there's a question about whether this field should be exported to JSONL format.\n\n## Current State\n- close_reason is stored in issues.close_reason column\n- close_reason is also stored in events table (audit trail)\n- The JSONL export format may or may not include close_reason\n\n## Questions\n1. Should close_reason be exported to JSONL format?\n2. If yes, where should it go (root level or nested in events)?\n3. Should there be any special handling to avoid duplication?\n4. How should close_reason be handled during JSONL import?\n\n## Why This Matters\n- JSONL is the git-friendly sync format\n- Other beads instances import from JSONL\n- close_reason is meaningful data that should be preserved across clones\n\n## Suggested Action\n- Check if close_reason is currently exported in JSONL\n- If not, add it to the export schema\n- Document the field in JSONL format spec\n- Add tests for round-trip (export -\u003e import -\u003e verify close_reason)","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:25:17.414916-08:00","updated_at":"2025-12-14T14:25:17.414916-08:00","dependencies":[{"issue_id":"bd-lxzx","depends_on_id":"bd-z86n","type":"discovered-from","created_at":"2025-12-14T14:25:17.416131-08:00","created_by":"stevey","metadata":"{}"}]} -{"id":"bd-lz49","title":"Add gate fields: await_type, await_id, timeout, waiters","description":"Add gate-specific fields to the Issue type.\n\n## New Fields\n- await_type: string - Type of condition (gh:run, gh:pr, timer, human, 
mail)\n- await_id: string - Identifier for the condition\n- timeout: duration - Max time to wait before escalation\n- waiters: []string - Mail addresses to notify when gate clears\n\n## Implementation\n- Add fields to Issue struct in internal/types/types.go\n- Update SQLite schema for new columns\n- Add JSONL serialization/deserialization\n- Update import/export logic","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T11:44:32.720196-08:00","updated_at":"2025-12-23T12:00:03.837691-08:00","closed_at":"2025-12-23T12:00:03.837691-08:00","close_reason":"Gate fields added to Issue struct, ComputeContentHash, SQLite migration created, all storage queries updated","dependencies":[{"issue_id":"bd-lz49","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.738823-08:00","created_by":"daemon"},{"issue_id":"bd-lz49","depends_on_id":"bd-2v0f","type":"blocks","created_at":"2025-12-23T11:44:56.269351-08:00","created_by":"daemon"}]} -{"id":"bd-m0tl","title":"bd create -f crashes with nil pointer dereference","description":"GitHub issue #674. The markdown import feature crashes at markdown.go:338 because global variables (store, ctx, actor) aren't initialized when createIssuesFromMarkdown is called. The function uses globals set by cobra command framework but is being called before they're ready. 
Need to either initialize globals at start of function or pass them as parameters.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T14:35:14.813012-08:00","updated_at":"2025-12-21T15:41:14.600953-08:00","closed_at":"2025-12-21T15:41:14.600953-08:00","close_reason":"Fixed: added nil check for store in createIssuesFromMarkdown"} -{"id":"bd-m164","title":"Add 0.33.2 to versionChanges in info.go","description":"Add new entry at the TOP of versionChanges array in cmd/bd/info.go:\n\n```go\n{\n Version: \"0.33.2\",\n Date: \"2025-12-21\",\n Changes: []string{\n // Add notable changes here\n },\n},\n```\n\nCopy changes from CHANGELOG.md [Unreleased] section.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761218-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-m7ib","title":"Add creator field to Issue struct","description":"Add Creator *EntityRef field to Issue. Tracks who created the issue. Optional, omitted if nil in JSONL. This enables CV chain tracking - every piece of work is attributed to its creator.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T17:53:31.599447-08:00","updated_at":"2025-12-22T20:03:24.264672-08:00","closed_at":"2025-12-22T20:03:24.264672-08:00","close_reason":"Added Creator *EntityRef field to Issue struct. Included in content hash. Test coverage added.","dependencies":[{"issue_id":"bd-m7ib","depends_on_id":"bd-7pwh","type":"parent-child","created_at":"2025-12-22T17:53:43.39957-08:00","created_by":"daemon"},{"issue_id":"bd-m7ib","depends_on_id":"bd-nmch","type":"blocks","created_at":"2025-12-22T17:53:47.826309-08:00","created_by":"daemon"}]} -{"id":"bd-m8ro","title":"Improve test coverage for internal/rpc (47.5% β†’ 60%)","description":"The RPC package has only 47.5% test coverage. 
RPC is the communication layer for daemon operations.\n\nCurrent coverage: 47.5%\nTarget coverage: 60%","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/delta","created_at":"2025-12-13T20:43:09.515299-08:00","updated_at":"2025-12-23T22:29:35.837758-08:00"} -{"id":"bd-m964","title":"Consider FTS5 for text search at scale","description":"SearchIssues uses LIKE patterns for text search which can't use indexes.\n\n**Current query (queries.go:1475-1477):**\n```sql\n(title LIKE ? OR description LIKE ? OR id LIKE ?)\n```\n\n**Problem:** Full table scan on every text search. At 100K+ issues, this becomes slow.\n\n**SQLite FTS5 solution:**\n```sql\nCREATE VIRTUAL TABLE issues_fts USING fts5(\n id, title, description, design, notes,\n content='issues',\n content_rowid='rowid'\n);\n\n-- Triggers to keep FTS in sync\nCREATE TRIGGER issues_ai AFTER INSERT ON issues BEGIN\n INSERT INTO issues_fts(rowid, id, title, description, design, notes)\n VALUES (new.rowid, new.id, new.title, new.description, new.design, new.notes);\nEND;\n-- (similar for UPDATE, DELETE)\n```\n\n**Trade-offs:**\n- Database size increase (~30-50% for text content)\n- Additional write overhead (trigger execution)\n- Better search capabilities (ranking, phrase search)\n\n**Decision needed:** Is full-text search a priority feature? 
Current LIKE search may be acceptable for most use cases.\n\n**Benchmark first:** Measure SearchIssues at 100K scale before implementing.","status":"open","priority":4,"issue_type":"feature","created_at":"2025-12-22T22:58:56.466121-08:00","updated_at":"2025-12-22T22:58:56.466121-08:00","dependencies":[{"issue_id":"bd-m964","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:56.466764-08:00","created_by":"daemon"}]} -{"id":"bd-mh4w","title":"Rename 'bond' to 'spawn' for instantiation","description":"Rename the bd mol bond command to bd mol spawn for instantiating protos.\n \n- Rename molBondCmd to molSpawnCmd\n- Update command Use/Short/Long descriptions \n- Keep 'bond' available for the new bonding feature\n- Update all documentation references\n- Add 'protomolecule' as easter egg alias for 'proto'","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T00:58:44.529026-08:00","updated_at":"2025-12-21T01:19:42.942819-08:00","closed_at":"2025-12-21T01:19:42.942819-08:00","close_reason":"Renamed 'bond' to 'spawn' in mol.go, updated all user-facing messages and help text","dependencies":[{"issue_id":"bd-mh4w","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.167902-08:00","created_by":"daemon"}]} -{"id":"bd-mql4","title":"getLocalSyncBranch silently ignores YAML parse errors","description":"In autoimport.go:170-172, YAML parsing errors are silently ignored. 
If a user has malformed YAML in config.yaml, sync-branch will just silently be empty with no feedback.\n\nRecommendation: Add debug logging since this function is only called during auto-import, and debugging silent failures is painful.\n\nAdd: debug.Logf(\"Warning: failed to parse config.yaml: %v\", err)","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-07T02:03:44.217728-08:00","updated_at":"2025-12-07T02:03:44.217728-08:00"} -{"id":"bd-mrpw","title":"Run tests and verify build","description":"Run the test suite to verify nothing is broken:\n\n```bash\n./scripts/test.sh\n```\n\nOr manually:\n```bash\ngo build ./cmd/bd/...\ngo test ./...\n```\n\nFix any failures before proceeding.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761563-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Tests passed for release","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-muw","title":"Add empty tasks validation in workflow create","description":"workflow.go:321 will panic if wf.Tasks is empty. Add validation that len(wf.Tasks) \u003e 0 before accessing wf.Tasks[0].","status":"closed","priority":3,"issue_type":"bug","created_at":"2025-12-17T22:23:00.75707-08:00","updated_at":"2025-12-17T22:34:07.281133-08:00","closed_at":"2025-12-17T22:34:07.281133-08:00"} -{"id":"bd-mv6h","title":"Add test coverage for external dep edge cases","description":"During code review of bd-zmmy, identified missing test coverage:\n\n1. RemoveDependency with external ref target (will fail - see bd-a3sj)\n2. GetBlockedIssues with mix of local and external blockers\n3. GetDependencyTree with external deps\n4. AddDependency cycle detection with external refs (should be skipped?)\n5. External dep resolution with WAL mode database\n6. External dep resolution when target project has no .beads directory\n7. 
External dep resolution with invalid external: format variations\n\nPriority 2 because bd-a3sj is a real bug that tests would catch.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T23:45:37.50093-08:00","updated_at":"2025-12-22T22:32:09.515096-08:00","closed_at":"2025-12-22T22:32:09.515096-08:00","close_reason":"Added test coverage: TestGetDependencyTreeExternalDeps (dep tree shows external deps), TestCycleDetectionWithExternalRefs (cycle detection ignores external refs), TestCheckExternalDepNoBeadsDirectory (handles missing .beads dir), TestCheckExternalDepInvalidFormats (handles various invalid formats). All edge cases from bd-mv6h description now covered.","dependencies":[{"issue_id":"bd-mv6h","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:37.501495-08:00","created_by":"daemon"}]} -{"id":"bd-n386","title":"Improve test coverage for internal/daemon (27.3% β†’ 60%)","description":"The daemon package has only 27.3% test coverage. 
The daemon is critical for background operations and reliability.\n\nKey areas needing tests:\n- Daemon autostart logic\n- Socket handling\n- PID file management\n- Health checks\n\nCurrent coverage: 27.3%\nTarget coverage: 60%","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/echo","created_at":"2025-12-13T20:43:00.895238-08:00","updated_at":"2025-12-23T22:29:35.375236-08:00"} -{"id":"bd-n3v","title":"Error committing to sync branch: failed to create worktree","description":"\u003e bd sync --no-daemon\nβ†’ Exporting pending changes to JSONL...\nβ†’ Committing changes to sync branch 'beads-sync'...\nError committing to sync branch: failed to create worktree: failed to create worktree parent directory: mkdir /var/home/matt/dev/beads/fix-ci/.git: not a directory","notes":"**Problem Diagnosed**: The `bd sync` command was failing with \"mkdir /var/home/matt/dev/beads/fix-ci/.git: not a directory\" because it was being executed from the wrong directory.\n\n**Root Cause**: The command was run from `/var/home/matt/dev/beads` (where the `fix-ci` worktree exists) instead of the main repository directory `/var/home/matt/dev/beads/main`. 
Since `fix-ci` is a git worktree with a `.git` file (not directory), the worktree creation logic failed when trying to create `\u003ccurrent_dir\u003e/.git/beads-worktrees/\u003cbranch\u003e`.\n\n**Solution Verified**: Execute `bd sync` from the main repository directory:\n```bash\ncd main \u0026\u0026 bd sync --dry-run\n```\n","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-05T15:25:24.514998248-07:00","updated_at":"2025-12-05T15:42:32.910166956-07:00"} -{"id":"bd-n4td","title":"Add warning when staleness check errors","description":"## Problem\n\nWhen ensureDatabaseFresh() calls CheckStaleness() and it errors (corrupted metadata, permission issues, etc.), we silently proceed with potentially stale data.\n\n**Location:** cmd/bd/staleness.go:27-32\n\n**Scenarios:**\n- Corrupted metadata table\n- Database locked by another process \n- Permission issues reading JSONL file\n- Invalid last_import_time format in DB\n\n## Current Code\n\n```go\nisStale, err := autoimport.CheckStaleness(ctx, store, dbPath)\nif err \\!= nil {\n // If we can't determine staleness, allow operation to proceed\n // (better to show potentially stale data than block user)\n return nil\n}\n```\n\n## Fix\n\n```go\nisStale, err := autoimport.CheckStaleness(ctx, store, dbPath)\nif err \\!= nil {\n fmt.Fprintf(os.Stderr, \"Warning: Could not verify database freshness: %v\\n\", err)\n fmt.Fprintf(os.Stderr, \"Proceeding anyway. 
Data may be stale.\\n\\n\")\n return nil\n}\n```\n\n## Impact\nMedium - users should know when staleness check fails\n\n## Effort\nEasy - 5 minutes","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-11-20T20:16:34.889997-05:00","updated_at":"2025-12-17T23:13:40.531031-08:00","closed_at":"2025-12-17T19:11:12.950618-08:00","dependencies":[{"issue_id":"bd-n4td","depends_on_id":"bd-2q6d","type":"blocks","created_at":"2025-11-20T20:18:20.154723-05:00","created_by":"stevey","metadata":"{}"}]} -{"id":"bd-n5ug","title":"Merge: bd-au0.7","description":"branch: polecat/dementus\ntarget: main\nsource_issue: bd-au0.7\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:43:36.024341-08:00","updated_at":"2025-12-23T21:21:57.692158-08:00","closed_at":"2025-12-23T21:21:57.692158-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-n6fm","title":"witness Handoff","description":"attached_molecule: bd-ndye\nattached_at: 2025-12-23T12:35:02Z","status":"pinned","priority":2,"issue_type":"task","created_at":"2025-12-23T04:35:02.675024-08:00","updated_at":"2025-12-23T04:35:02.99197-08:00"} -{"id":"bd-n777","title":"Timer beads for scheduled agent callbacks","description":"## Problem\n\nAgents frequently need to wait for external events (CI completion, PR reviews, artifact builds) but have no good mechanism:\n- `sleep N` blocks and is unreliable (often times out at 8+ minutes)\n- Polling wastes context and is easy to forget\n- No way to survive session restarts\n\n## Proposal: Timer Beads\n\nA new bead type or field that represents a scheduled callback:\n\n### Creating timers\n```bash\nbd timer create --in 30s --callback \"Check CI run 12345\" --issue bd-xyz\nbd timer create --at \"2025-12-20T08:00:00\" --callback \"Morning standup\"\nbd timer create --in 5m --on-expire \"tmux send-keys -t dave 'bd show bd-xyz'\"\n```\n\n### Timer storage\n- Store in beads (survives restarts)\n- Fields: `expires_at`, 
`callback_description`, `on_expire_command`, `linked_issue`\n- Status: pending, fired, cancelled\n\n### Deacon integration\nThe Deacon daemon monitors timer beads:\n1. Wakes on next timer expiry\n2. Executes `on_expire` command (e.g., tmux send-keys to interrupt agent)\n3. Marks timer as fired\n4. Optionally updates linked issue\n\n### Use cases\n- CI monitoring: \"ping me when build completes\"\n- PR reviews: \"check back in 1 hour\"\n- Scheduled tasks: \"remind me at EOD to sync\"\n- Blocking waits: agent registers callback instead of sleeping\n\n## Acceptance criteria\n- [ ] Timer bead type or field design\n- [ ] `bd timer create/list/cancel` commands\n- [ ] Deacon timer monitoring loop\n- [ ] tmux integration for agent interrupts\n- [ ] Survives daemon restarts","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-19T23:05:33.051861-08:00","updated_at":"2025-12-21T17:19:48.087482-08:00","closed_at":"2025-12-21T17:19:48.087482-08:00","close_reason":"Will not implement - Gas Town uses a different approach for timed events"} -{"id":"bd-ncwo","title":"Ghost resurrection: remote status:closed wins during git merge","description":"During bd sync, the 3-way git merge sometimes keeps remote's status:closed instead of local's status:tombstone. This causes ghost issues to resurrect even when tombstones exist.\n\nRoot cause: Git 3-way merge doesn't understand tombstone semantics. If base had closed, local changed to tombstone, and remote has closed, git might keep remote's version.\n\nThe early tombstone check in importer.go only prevents CREATION when tombstones exist in DB. But if applyDeletionsFromMerge hard-deletes the tombstones before import runs (because they're not in the merged result), the check doesn't help.\n\nPotential fixes:\n1. Make tombstones 'win' in the beads merge driver (internal/merge/merge.go)\n2. Don't hard-delete tombstones in applyDeletionsFromMerge if they're in the DB\n3. 
Export tombstones to a separate file that's not subject to merge\n\nGhost issues: bd-cb64c226.*, bd-cbed9619.*","status":"tombstone","priority":1,"issue_type":"bug","created_at":"2025-12-16T22:01:03.56423-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-ndye","title":"mergeDependencies uses union instead of 3-way merge","description":"## Critical Bug\n\nThe `mergeDependencies` function in internal/merge/merge.go performs a UNION of left and right dependencies instead of a proper 3-way merge. This causes removed dependencies to be resurrected.\n\n### Root Cause\n\n```go\n// Current code (lines 795-816):\nfunc mergeDependencies(left, right []Dependency) []Dependency {\n // Just unions left + right\n // NEVER REMOVES anything\n // Doesn't even look at base!\n}\n```\n\nAnd `mergeIssue` (line 579) doesn't pass `base`:\n```go\nresult.Dependencies = mergeDependencies(left.Dependencies, right.Dependencies)\n```\n\n### Impact\n\nIf:\n- Base has dependency D\n- Left removes D (intentional)\n- Right still has D (stale)\n\nCurrent: D is in result (resurrection!)\nCorrect: Left removed it, D should NOT be in result\n\nThis breaks Gas Town's workflow and data integrity. Closed means closed.\n\n### Fix\n\nChange `mergeDependencies` to take `base` and do proper 3-way merge:\n- If dep was in base and removed by left β†’ exclude (left wins)\n- If dep was in base and removed by right β†’ exclude (right wins)\n- If dep wasn't in base and added by either β†’ include\n- If dep was in base and both still have it β†’ include\n\nKey principle: **REMOVALS ARE AUTHORITATIVE**\n\n### Files to Change\n\n1. 
internal/merge/merge.go:\n - `mergeDependencies(left, right)` β†’ `mergeDependencies(base, left, right)`\n - `mergeIssue` line 579: pass `base.Dependencies`\n\n### Related\n\nThis also explains why `ProtectLocalExportIDs` in importer is defined but never used - the protection was never actually implemented.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-18T23:15:54.475872-08:00","updated_at":"2025-12-18T23:21:10.709571-08:00","closed_at":"2025-12-18T23:21:10.709571-08:00"} -{"id":"bd-nl2","title":"No logging/debugging for tombstone resurrection events","description":"Per the design document bd-zvg Open Question 1: Should resurrection log a warning? Recommendation was Yes. Currently, when an expired tombstone loses to a live issue (resurrection), there is no logging or debugging output. This makes it hard to understand why an issue reappeared. Recommendation: Add optional debug logging when resurrection occurs, e.g., Issue bd-abc resurrected (tombstone expired). Files: internal/merge/merge.go:359-366, 371-378, 400-405, 410-415","status":"open","priority":4,"issue_type":"feature","created_at":"2025-12-05T16:36:52.27525-08:00","updated_at":"2025-12-05T16:36:52.27525-08:00"} -{"id":"bd-nmch","title":"Add EntityRef type for structured entity references","description":"Create EntityRef struct with Name, Platform, Org, ID fields. This is the foundation for HOP entity tracking. Can render as entity://hop/\u003cplatform\u003e/\u003corg\u003e/\u003cid\u003e when needed. Add to internal/types/types.go.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T17:53:25.104328-08:00","updated_at":"2025-12-22T17:58:00.014103-08:00","closed_at":"2025-12-22T17:58:00.014103-08:00","close_reason":"Implemented EntityRef type with Name, Platform, Org, ID fields. Added URI(), IsEmpty(), String() methods and ParseEntityURI() function. 
Full test coverage.","dependencies":[{"issue_id":"bd-nmch","depends_on_id":"bd-7pwh","type":"parent-child","created_at":"2025-12-22T17:53:43.325405-08:00","created_by":"daemon"}]} -{"id":"bd-nqyp","title":"mol-beads-release","description":"Release checklist for beads version {{version}}.\n\nThis molecule ensures all release steps are completed properly.\nVariable: {{version}} - target version (e.g., 0.35.0)\n\n## Step: update-release-notes\nUpdate cmd/bd/info.go with release notes for {{version}}.\n\nAdd a new VersionChange entry at the top of versionChanges slice:\n```go\n{\n Version: \"{{version}}\",\n Date: \"YYYY-MM-DD\",\n Changes: []string{\n \"NEW: Feature description\",\n \"FIX: Bug fix description\",\n \"IMPROVED: Enhancement description\",\n },\n},\n```\n\nRun `git log --oneline v\u003cprevious\u003e..HEAD` to see what changed.\n\n## Step: update-changelog\nUpdate CHANGELOG.md with detailed release notes.\n\nAdd a new section after [Unreleased]:\n```markdown\n## [{{version}}] - YYYY-MM-DD\n\n### Added\n- **Feature name** (issue-id) - Description\n\n### Changed\n- **Change description** (issue-id)\n\n### Fixed\n- **Bug fix** (issue-id) - Description\n```\n\nSort by importance, not chronologically.\nNeeds: update-release-notes\n\n## Step: bump-version\nRun the version bump script.\n\n```bash\n./scripts/bump-version.sh {{version}}\n```\n\nThis updates version in all files:\n- cmd/bd/version.go\n- .claude-plugin/*.json\n- integrations/beads-mcp/pyproject.toml\n- npm-package/package.json\n- Hook templates\n\nNeeds: update-changelog\n\n## Step: run-tests\nRun tests and verify lint passes.\n\n```bash\ngo test -short ./...\n```\n\nCI will run full lint, but fix any obvious issues first.\nNeeds: bump-version\n\n## Step: commit-release\nCommit the release changes.\n\n```bash\ngit add -A\ngit commit -m \"chore: bump version to v{{version}}\"\n```\n\nNeeds: run-tests\n\n## Step: push-and-tag\nPush commit and create release tag.\n\n```bash\ngit push origin main\ngit 
tag v{{version}}\ngit push origin v{{version}}\n```\n\nThis triggers GitHub Actions release workflow.\nNeeds: commit-release\n\n## Step: wait-for-ci\nWait for GitHub Actions to complete.\n\nMonitor: https://github.com/steveyegge/beads/actions\n\nCI will:\n- Build binaries via GoReleaser\n- Create GitHub Release with assets\n- Publish to npm (@beads/bd)\n- Publish to PyPI (beads-mcp)\n- Update Homebrew tap\n\nWait until all jobs succeed (~5-10 min).\nNeeds: push-and-tag\n\n## Step: verify-release\nVerify the release is complete.\n\n```bash\n# Check GitHub release\ngh release view v{{version}}\n\n# Check Homebrew\nbrew update \u0026\u0026 brew info steveyegge/beads/bd\n\n# Check npm\nnpm view @beads/bd version\n\n# Check PyPI\npip index versions beads-mcp\n```\n\nNeeds: wait-for-ci\n\n## Step: update-local\nUpdate local installations.\n\n```bash\n# Upgrade Homebrew\nbrew upgrade steveyegge/beads/bd\n\n# Or install from source\n./scripts/bump-version.sh {{version}} --install\n\n# Install MCP locally\npip install -e integrations/beads-mcp\n\n# Restart daemons\npkill -f \"bd daemon\" || true\n```\n\nVerify: `bd --version` shows {{version}}\nNeeds: verify-release\n\n## Step: manual-publish\n(Optional) Manual publish if CI failed.\n\n```bash\n# npm (requires npm login)\n./scripts/bump-version.sh {{version}} --publish-npm\n\n# PyPI (requires TWINE credentials)\n./scripts/bump-version.sh {{version}} --publish-pypi\n\n# Or both\n./scripts/bump-version.sh {{version}} --publish-all\n```\n\nOnly needed if CI publishing failed.\nNeeds: wait-for-ci","status":"open","priority":2,"issue_type":"molecule","created_at":"2025-12-23T11:29:39.087936-08:00","updated_at":"2025-12-23T11:29:39.087936-08:00"} -{"id":"bd-nuh1","title":"GH#403: bd doctor --fix circular error message","description":"bd doctor --fix suggests running bd doctor --fix for deletions manifest issue. Fix to provide actual resolution. 
See GitHub issue #403.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:16.290018-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-nurq","title":"Implement bd mol current command","description":"Show what molecule the agent should currently be working on. Referenced by gt-um6q, gt-lz13. Needed for molecule navigation workflow in templates.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-23T00:17:54.069983-08:00","updated_at":"2025-12-23T01:23:59.523404-08:00","closed_at":"2025-12-23T01:23:59.523404-08:00","close_reason":"Implementation already existed, added tests (TestGetMoleculeProgress, TestFindParentMolecule, TestAdvanceToNextStep*), rebuilt and installed binary"} -{"id":"bd-o4qy","title":"Improve CheckStaleness error handling","description":"## Problem\n\nCheckStaleness returns 'false' (not stale) for multiple error conditions instead of returning errors. This masks problems.\n\n**Location:** internal/autoimport/autoimport.go:253-285\n\n## Edge Cases That Return False\n\n1. **Invalid last_import_time format** (line 259-262)\n2. **No JSONL file found** (line 267-277) \n3. 
**JSONL stat fails** (line 279-282)\n\n## Fix\n\nReturn errors for abnormal conditions:\n\n```go\nlastImportTime, err := time.Parse(time.RFC3339, lastImportStr)\nif err != nil {\n return false, fmt.Errorf(\"corrupted last_import_time: %w\", err)\n}\n\nif jsonlPath == \"\" {\n return false, fmt.Errorf(\"no JSONL file found\")\n}\n\nstat, err := os.Stat(jsonlPath)\nif err != nil {\n return false, fmt.Errorf(\"cannot stat JSONL: %w\", err)\n}\n```\n\n## Impact\nMedium - edge cases are rare but should be handled\n\n## Effort \n30 minutes - requires updating callers in RPC server","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-11-20T20:17:27.606219-05:00","updated_at":"2025-12-17T23:13:40.536905-08:00","closed_at":"2025-12-17T19:11:12.965289-08:00","dependencies":[{"issue_id":"bd-o4qy","depends_on_id":"bd-2q6d","type":"blocks","created_at":"2025-11-20T20:18:26.81065-05:00","created_by":"stevey","metadata":"{}"}]} -{"id":"bd-o55a","title":"GH#509: bd doesn't find .beads when running from nested worktrees","description":"When worktrees are nested under main repo (.worktrees/feature/), bd stops at worktree git root instead of continuing to find .beads in parent. See GitHub issue #509 for detailed fix suggestion.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:20.281591-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-o5xe","title":"Molecule bonding: composable workflow templates","description":"Vision: Molecules should be composable like LEGO bricks or Mad Max war rig sections. Bonding lets you attach molecules together to create compound workflows.\n\nTHREE BONDING CONTEXTS:\n1. Template-time: bd mol bond A B β†’ Create reusable compound proto\n2. Spawn-time: bd mol spawn A --attach B β†’ Attach modules when instantiating \n3. 
Runtime: bd mol attach epic B β†’ Add to running workflow\n\nBOND TYPES:\n- Sequential: B after A completes (feature β†’ deploy)\n- Parallel: B runs alongside A (feature + docs)\n- Conditional: B only if A fails (feature β†’ hotfix)\n\nBOND POINTS (Attachment Sites):\n- Default: B depends on A root epic completion\n- Explicit: --after issue-id for specific attachment\n- Future: Named bond points in proto definitions\n\nVARIABLE FLOW:\n- Shared namespace between bonded molecules\n- Warn on variable name conflicts\n- Future: explicit mapping with --map\n\nDATA MODEL: Issues track bonded_from to preserve compound lineage.\n\nSUCCESS CRITERIA:\n- Can bond two protos into a compound proto\n- Can spawn with --attach for on-the-fly composition\n- Can attach molecules to running workflows\n- Compound structure visible in bd mol show\n- Variables flow correctly between bonded molecules","design":"SIMPLIFIED API (per design review):\n\nCORE COMMANDS:\n- bd mol spawn \u003cproto\u003e - Instantiate proto β†’ molecule\n- bd mol bond \u003cA\u003e \u003cB\u003e - Polymorphic bonding (any combination)\n- bd mol distill \u003cmol\u003e - Extract molecule β†’ proto\n- bd mol show/catalog - Inspect\n\nBOND IS POLYMORPHIC:\n| bond A B | Result |\n|---------------|-------------------------------------|\n| proto + proto | compound proto |\n| proto + mol | spawn proto, attach to mol |\n| mol + proto | spawn proto, attach to mol |\n| mol + mol | join into compound molecule |\n\nBOND TYPES: sequential (default), parallel, conditional\n\nTERMINOLOGY:\n- Proto: Uninstantiated template (easter egg: 'protomolecule')\n- Molecule: Spawned instance of a proto\n- Compound: Result of bonding (proto or molecule)\n- Distill: Extract proto from ad-hoc epic","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-21T00:58:35.479009-08:00","updated_at":"2025-12-21T17:19:45.871164-08:00","closed_at":"2025-12-21T17:19:45.871164-08:00","close_reason":"All core bonding features 
implemented (6/7 children closed, 1 deferred for future polish)"} -{"id":"bd-o7ik","title":"Priority: refactor mol.go then bd squash","description":"Two tasks:\n\n1. bd-cnwx - Refactor mol.go (1200+ lines, split by subcommand)\n2. bd-2vh3 - Ephemeral cleanup (bd cleanup --ephemeral)\n\nRefactor first - smaller, unblocks easier review of future mol work.\n\n- Mayor","status":"closed","priority":2,"issue_type":"message","assignee":"beads-dave","created_at":"2025-12-21T11:31:38.287244-08:00","updated_at":"2025-12-21T12:59:32.937472-08:00","closed_at":"2025-12-21T12:59:32.937472-08:00","close_reason":"mol.go refactor (bd-cnwx) done. Epic bd-2vh3 now has 5 tiered implementation tasks with dependencies."} -{"id":"bd-o91r","title":"Polymorphic bond command: bd mol bond A B","description":"Implement proto-to-proto bonding to create compound protos.\n\nCOMMAND: bd mol bond proto-feature proto-testing [--as proto-feature-tested] [--type sequential]\n\nBEHAVIOR:\n- Load both proto subgraphs\n- Create new compound proto with combined structure\n- B's root becomes child of A's root (sequential) or sibling (parallel)\n- Wire dependencies: B depends on A's leaf nodes (sequential) or runs parallel\n- Store bonded_from metadata for lineage tracking\n\nFLAGS:\n- --as NAME: Custom ID for compound proto (default: generates hash)\n- --type: sequential (default) or parallel\n- --dry-run: Preview compound structure\n\nOUTPUT:\n- New compound proto in catalog\n- Shows combined variable requirements","notes":"UPDATE: bond is now polymorphic - handles proto+proto, proto+mol, and mol+mol based on operand types. 
Separate 'attach' command eliminated.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T00:58:55.604705-08:00","updated_at":"2025-12-21T10:10:25.385995-08:00","closed_at":"2025-12-21T10:10:25.385995-08:00","close_reason":"Implemented in commit f0df4070","dependencies":[{"issue_id":"bd-o91r","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.30026-08:00","created_by":"daemon"},{"issue_id":"bd-o91r","depends_on_id":"bd-mh4w","type":"blocks","created_at":"2025-12-21T00:59:51.569391-08:00","created_by":"daemon"},{"issue_id":"bd-o91r","depends_on_id":"bd-rnnr","type":"blocks","created_at":"2025-12-21T00:59:51.652397-08:00","created_by":"daemon"}]} -{"id":"bd-o9o","title":"Exclude pinned issues from bd ready","description":"Update bd ready to exclude pinned issues. Pinned issues are context markers, not work items, and should never appear in the ready-to-work list.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:41.979073-08:00","updated_at":"2025-12-21T11:29:41.190567-08:00","closed_at":"2025-12-21T11:29:41.190567-08:00","close_reason":"Already implemented in SQLite (line 18). 
Added memory storage exclusion for completeness.","dependencies":[{"issue_id":"bd-o9o","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.392931-08:00","created_by":"daemon"},{"issue_id":"bd-o9o","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.612655-08:00","created_by":"daemon"}]} -{"id":"bd-obep","title":"Spawn-time bonding: --attach flag","description":"Add --attach flag to bd mol spawn for on-the-fly composition.\n\nCOMMAND: bd mol spawn proto-feature --attach proto-docs --attach proto-testing\n\nBEHAVIOR:\n- Spawn the primary proto as normal\n- For each --attach: spawn that proto and wire to primary\n- Attachments become children of primary's root epic\n- Dependencies wired based on bond type (default: sequential)\n\nFLAGS:\n- --attach PROTO: Attach a proto (can repeat)\n- --attach-type TYPE: sequential (default) or parallel for all attachments\n- --after ISSUE: Attachment point for attached protos\n\nVARIABLE HANDLING:\n- All attached protos share variable namespace\n- Warn on variable name conflicts\n- All --var flags apply to all protos","notes":"DESIGN NOTE: This is syntactic sugar. Equivalent to:\n bd mol spawn proto-A\n bd mol bond $new_epic_id proto-B\n \nKeeping as separate task because it's a common UX pattern worth optimizing.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T00:59:06.178092-08:00","updated_at":"2025-12-21T10:42:50.554816-08:00","closed_at":"2025-12-21T10:42:50.554816-08:00","close_reason":"Implemented --attach and --attach-type flags for bd mol spawn. Fixed pre-existing bug where bondProtoMol and bondMolMol tried to add duplicate dependencies (UNIQUE constraint). 
Sequential bonds now use blocks type, parallel/conditional use parent-child.","dependencies":[{"issue_id":"bd-obep","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.368491-08:00","created_by":"daemon"},{"issue_id":"bd-obep","depends_on_id":"bd-o91r","type":"blocks","created_at":"2025-12-21T00:59:51.733369-08:00","created_by":"daemon"}]} -{"id":"bd-of2p","title":"Improve version bump molecule with missing steps","description":"During v0.32.1 release, discovered missing steps in the release molecule:\n\n**Missing from molecule:**\n1. Rebuild ~/go/bin/bd (only did ~/.local/bin/bd)\n2. Install beads-mcp from local source: `uv tool install --reinstall ./integrations/beads-mcp`\n3. Restart daemons with `bd daemons killall`\n4. (Optional) Publish beads-mcp to PyPI\n\n**Current molecule steps (bd-6s61):**\n1. Update CHANGELOG.md\n2. Update info.go versionChanges \n3. Run bump-version.sh\n4. Run tests and linting\n5. Update local installation\n6. Commit and push release\n7. Wait for CI\n8. Verify release artifacts\n\n**Proposed additions:**\n- After \"Update local installation\": rebuild BOTH ~/.local/bin/bd AND ~/go/bin/bd\n- Add: \"Install beads-mcp from source\" step\n- Add: \"Restart daemons\" step\n- Add: \"Verify all versions match\" step that checks all artifacts\n\n**Also learned:**\n- Must run from mayor/rig to avoid git conflicts with bd sync (already documented in bump-version.sh)","notes":"CORRECTION: npm publishing IS automated and working!\n\n**Package naming:**\n- OLD: `beads` (npm) - deprecated, stuck at 0.2.1\n- CURRENT: `@beads/bd` (npm) - scoped package, auto-published by CI\n\n**How it works:**\n- CI uses OIDC trusted publishing (no token needed)\n- Workflow: .github/workflows/release.yml β†’ publish-npm job\n- Permissions: `id-token: write` enables GitHub OIDC\n- To install: `npm install -g @beads/bd` (not `npm install beads`)\n\n**All publishing is automated on tag push:**\n1. GitHub Release - goreleaser βœ“\n2. 
PyPI - publish-pypi job βœ“\n3. Homebrew - update-homebrew job βœ“\n4. npm (@beads/bd) - publish-npm job βœ“\n\n**Remaining molecule improvements (local steps only):**\n- Rebuild BOTH ~/.local/bin/bd AND ~/go/bin/bd\n- Install beads-mcp from source: `uv tool install --reinstall ./integrations/beads-mcp`\n- Restart daemons: `bd daemons killall`\n- Run from mayor/rig to avoid git conflicts with bd sync\n- Final verification step to check all local versions match","status":"closed","priority":2,"issue_type":"task","assignee":"beads/dave","created_at":"2025-12-20T22:09:11.845787-08:00","updated_at":"2025-12-22T16:01:18.199132-08:00","closed_at":"2025-12-22T16:01:18.199132-08:00","close_reason":"Implemented: Added --install (dual location), --mcp-local, --restart-daemons, and --all flags to bump-version.sh. Updated RELEASING.md with new flag documentation."} -{"id":"bd-ohil","title":"refinery Handoff","description":"attached_molecule: bd-ndye\nattached_at: 2025-12-23T12:35:07Z","status":"pinned","priority":2,"issue_type":"task","created_at":"2025-12-23T04:35:07.488226-08:00","updated_at":"2025-12-23T04:35:07.785858-08:00"} -{"id":"bd-ola6","title":"Implement transaction retry logic for SQLITE_BUSY","description":"BEGIN IMMEDIATE fails immediately on SQLITE_BUSY instead of retrying with exponential backoff.\n\nLocation: internal/storage/sqlite/sqlite.go:223-225\n\nProblem:\n- Under concurrent write load, BEGIN IMMEDIATE can fail with SQLITE_BUSY\n- Current implementation fails immediately instead of retrying\n- Results in spurious failures under normal concurrent usage\n\nSolution: Implement exponential backoff retry:\n- Retry up to N times (e.g., 5)\n- Backoff: 10ms, 20ms, 40ms, 80ms, 160ms\n- Check for context cancellation between retries\n- Only retry on SQLITE_BUSY/database locked errors\n\nImpact: Spurious failures under concurrent write load\n\nEffort: 3 
hours","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-11-16T14:51:31.247147-08:00","updated_at":"2025-12-21T21:39:23.071036-08:00","closed_at":"2025-12-21T21:39:23.071036-08:00","close_reason":"Already implemented: beginImmediateWithRetry in util.go provides exponential backoff (5 retries, 10msβ†’160ms) for SQLITE_BUSY errors, used by RunInTransaction. Tests in util_test.go verify behavior."} -{"id":"bd-om4a","title":"Support external: prefix in blocked_by field","description":"Allow blocked_by to include external project references:\n\n```bash\nbd update gt-xyz --blocked-by=\"external:beads:mol-run-assignee\"\n```\n\nSyntax: `external:\u003cproject\u003e:\u003ccapability\u003e`\n- project: name from external_projects config\n- capability: matches provides:\u003ccapability\u003e label in target project\n\nStorage: Store as-is in blocked_by array. Resolution happens at query time.\n\nPart of cross-project dependency system.\nSee: gastown/docs/cross-project-deps.md","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-21T22:37:29.725196-08:00","updated_at":"2025-12-21T23:07:48.127045-08:00","closed_at":"2025-12-21T23:07:48.127045-08:00","close_reason":"Implemented: bd dep add accepts external:project:capability syntax, stores as-is, shows in blocked output, updates blocked cache"} -{"id":"bd-ork0","title":"Add comments to 30+ silently ignored errors or fix them","description":"Code health review found 30+ instances of error suppression using blank identifier without explanation:\n\nGood examples (with comments):\n- merge.go: _ = gitRmCmd.Run() // Ignore errors\n- daemon_watcher.go: _ = watcher.Add(...) 
// Ignore error\n\nBad examples (no context):\n- create.go:213: dbPrefix, _ = store.GetConfig(ctx, \"issue_prefix\")\n- daemon_sync_branch.go: _ = daemonClient.Close()\n- migrate_hash_ids.go, version_tracking.go: _ = store.Close()\n\nFix: Add comments explaining WHY errors are ignored, or handle them properly.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-16T18:17:25.899372-08:00","updated_at":"2025-12-22T21:28:32.898258-08:00","closed_at":"2025-12-22T21:28:32.898258-08:00","close_reason":"Added explanatory comments to 24 production file locations with ignored errors. Categories: Cobra flags (only fail if missing), best-effort cleanup/close operations, process signaling."} -{"id":"bd-oryk","title":"Fix update-homebrew.sh awk script corrupts formula","description":"The awk script in scripts/update-homebrew.sh incorrectly removes platform conditionals (on_macos do, on_linux do, if Hardware::CPU.arm?, etc.) when updating SHA256 hashes. This corrupts the Homebrew formula.\n\nThe issue is the awk script uses 'next' to skip lines containing platform conditionals but never reconstructs them, resulting in a syntax-invalid formula.\n\nFound during v0.34.0 release - had to manually fix the formula.\n\nFix options:\n1. Rewrite awk script to properly preserve structure while updating sha256 lines only\n2. Use sed instead with targeted sha256 replacements\n3. Template approach - store formula template and fill in version/hashes","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-22T12:17:17.748792-08:00","updated_at":"2025-12-22T13:13:31.947353-08:00","closed_at":"2025-12-22T13:13:31.947353-08:00","close_reason":"Fixed awk script - removed 'next' statements that skipped structural lines, now uses sub() to replace sha256 values in-place"} -{"id":"bd-ot0w","title":"Work on beads-tip: Fix broken Claude integration link in ...","description":"Work on beads-tip: Fix broken Claude integration link in bd doctor (GH#623). 
Update URL that doesn't exist. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","assignee":"beads/dementus","created_at":"2025-12-19T22:56:08.429157-08:00","updated_at":"2025-12-19T23:20:39.790305-08:00","closed_at":"2025-12-19T23:20:39.790305-08:00","close_reason":"Fixed broken Claude plugin URL in bd doctor"} -{"id":"bd-otf4","title":"Code Review: PR #481 - Context Engineering Optimizations","description":"Comprehensive code review of the merged context engineering PR (PR #481) that reduces MCP context usage by 80-90%.\n\n## Summary\nThe PR successfully implements lazy tool schema loading and minimal issue models to reduce context window usage. Overall implementation is solid and well-tested.\n\n## Positive Findings\nβœ… Well-designed models (IssueMinimal, CompactedResult)\nβœ… Comprehensive test coverage (28 tests, all passing)\nβœ… Clear documentation and comments\nβœ… Backward compatibility preserved (show() still returns full Issue)\nβœ… Sensible defaults (COMPACTION_THRESHOLD=20, PREVIEW_COUNT=5)\nβœ… Tool catalog complete with all 15 tools documented\n\n## Issues Identified\nSee linked issues for specific followup tasks.\n\n## Context Engineering Architecture\n- discover_tools(): List tool names only (~500 bytes vs ~15KB)\n- get_tool_info(name): Get specific tool details on-demand\n- IssueMinimal: Lightweight model for list views (~80 bytes vs ~400 bytes)\n- CompactedResult: Auto-compacts results with \u003e20 issues\n- _to_minimal(): Conversion function (efficient, no N+1 issues)","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:24:13.523532-08:00","updated_at":"2025-12-14T14:24:13.523532-08:00"} -{"id":"bd-otli","title":"Wait for CI to pass","description":"Monitor GitHub Actions - all checks must pass before release artifacts are 
built","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:03.022281-08:00","updated_at":"2025-12-20T00:49:51.928591-08:00","closed_at":"2025-12-20T00:25:52.635223-08:00","dependencies":[{"issue_id":"bd-otli","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:15.097564-08:00","created_by":"daemon"},{"issue_id":"bd-otli","depends_on_id":"bd-7tuu","type":"blocks","created_at":"2025-12-19T22:56:23.360436-08:00","created_by":"daemon"}]} -{"id":"bd-oy6c","title":"Bump version in all files","description":"Run ./scripts/bump-version.sh 0.33.2 to update 10 version files. Then run with --commit after info.go is updated.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.759706-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-p5za","title":"mol-christmas-launch: 3-day execution plan","description":"Christmas Launch Molecule - execute phases in order, survive restarts.\n\nPIN THIS BEAD. Check progress each session start.\n\n## Step: phase0-beads-foundation\nFix blocking issues before swarming:\n1. Verify gastown beads schema works: bd list --status=open\n2. Ensure bd mol bond exists (check bd-usro)\n3. Verify bd-2vh3 (squash) is filed\n\n## Step: phase1-polecat-loop\nSerial work on polecat execution:\n1. gt-9nf: Fresh polecats only\n2. gt-975: Molecule execution support\n3. gt-8v8: Refuse uncommitted work\nThen swarm: gt-e1y, gt-f8v, gt-eu9\nNeeds: phase0-beads-foundation\n\n## Step: phase2-refinery\nSerial work on refinery autonomy:\n1. gt-5gkd: Refinery CLAUDE.md\n2. gt-bj6f: Refinery context in gt prime\n3. gt-0qki: Refinery-Witness protocol\nNeeds: phase1-polecat-loop\n\n## Step: phase3-deacon\nHealth monitoring infrastructure:\n1. gt-5af.4: Simplify daemon\n2. gt-5af.7: Crew session patterns\n3. 
gt-976: Crew lifecycle\nNeeds: phase2-refinery\n\n## Step: phase4-code-review\nSelf-improvement flywheel:\n1. Define mol-code-review (gt-fjvo)\n2. Test on open MRs\n3. Integrate with Refinery\nNeeds: phase3-deacon\n\n## Step: phase5-polish\nDemo readiness:\n1. gt-b2hj: Find orphaned work\n2. Doctor checks\n3. Clean up open MRs\nNeeds: phase4-code-review\n\n## Step: verify-flywheel\nSuccess criteria:\n- gt spawn works with molecules\n- Refinery processes MRs autonomously\n- mol-code-review runs on a PR\n- bd cleanup --ephemeral works\nNeeds: phase5-polish","status":"closed","priority":0,"issue_type":"epic","created_at":"2025-12-20T21:20:02.462889-08:00","updated_at":"2025-12-21T17:23:25.471749-08:00","closed_at":"2025-12-21T17:23:25.471749-08:00","close_reason":"Beads portion complete (phase0-beads-foundation done). Gastown tracks remainder."} -{"id":"bd-pbh","title":"Release v0.30.4","description":"## Version Bump Workflow\n\nCoordinating release from 0.30.3 to 0.30.4.\n\n### Components Updated\n- Go CLI (cmd/bd/version.go)\n- Claude Plugin (.claude-plugin/*.json)\n- MCP Server (integrations/beads-mcp/)\n- npm Package (npm-package/package.json)\n- Git hooks (cmd/bd/templates/hooks/)\n\n### Release Channels\n- GitHub Releases (GoReleaser)\n- PyPI (beads-mcp)\n- npm (@beads/cli)\n- Homebrew (homebrew-beads tap)\n","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-17T21:19:10.926133-08:00","updated_at":"2025-12-17T21:46:46.192948-08:00","closed_at":"2025-12-17T21:46:46.192948-08:00","labels":["release","v0.30.4","workflow"]} -{"id":"bd-pbh.1","title":"Update cmd/bd/version.go to 0.30.4","description":"Update the Version constant in cmd/bd/version.go:\n```go\nVersion = \"0.30.4\"\n```\n\n\n```verify\ngrep -q 'Version = \"0.30.4\"' 
cmd/bd/version.go\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.9462-08:00","updated_at":"2025-12-17T21:46:46.20387-08:00","closed_at":"2025-12-17T21:46:46.20387-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.1","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.946633-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.10","title":"Run check-versions.sh - all must pass","description":"Run the version consistency check:\n```bash\n./scripts/check-versions.sh\n```\n\nAll versions must match 0.30.4.\n\n\n```verify\n./scripts/check-versions.sh\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.047311-08:00","updated_at":"2025-12-17T21:46:46.28316-08:00","closed_at":"2025-12-17T21:46:46.28316-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.047888-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.1","type":"blocks","created_at":"2025-12-17T21:19:11.159084-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.4","type":"blocks","created_at":"2025-12-17T21:19:11.168248-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.5","type":"blocks","created_at":"2025-12-17T21:19:11.177869-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.6","type":"blocks","created_at":"2025-12-17T21:19:11.187629-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.7","type":"blocks","created_at":"2025-12-17T21:19:11.199955-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.8","type":"blocks","created_at":"2025-12-17T21:19:11.211479-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.10","depends
_on_id":"bd-pbh.9","type":"blocks","created_at":"2025-12-17T21:19:11.224059-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.11","title":"Commit changes and create v0.30.4 tag","description":"```bash\ngit add -A\ngit commit -m \"chore: Bump version to 0.30.4\"\ngit tag -a v0.30.4 -m \"Release v0.30.4\"\n```\n\n\n```verify\ngit describe --tags --exact-match HEAD 2\u003e/dev/null | grep -q 'v0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.056575-08:00","updated_at":"2025-12-17T21:46:46.292166-08:00","closed_at":"2025-12-17T21:46:46.292166-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.11","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.056934-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.11","depends_on_id":"bd-pbh.10","type":"blocks","created_at":"2025-12-17T21:19:11.234175-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.11","depends_on_id":"bd-pbh.2","type":"blocks","created_at":"2025-12-17T21:19:11.245316-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.11","depends_on_id":"bd-pbh.3","type":"blocks","created_at":"2025-12-17T21:19:11.255362-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.12","title":"Push commit and tag to origin","description":"```bash\ngit push origin main\ngit push origin v0.30.4\n```\n\nThis triggers GitHub Actions:\n- GoReleaser build\n- PyPI publish\n- npm publish\n\n\n```verify\ngit ls-remote origin refs/tags/v0.30.4 | grep -q 
'v0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.066074-08:00","updated_at":"2025-12-17T21:46:46.301948-08:00","closed_at":"2025-12-17T21:46:46.301948-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.12","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.066442-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.12","depends_on_id":"bd-pbh.11","type":"blocks","created_at":"2025-12-17T21:19:11.265986-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.13","title":"Monitor GoReleaser CI job","description":"Watch the GoReleaser action:\nhttps://github.com/steveyegge/beads/actions/workflows/release.yml\n\nShould complete in ~10 minutes and create:\n- GitHub Release with binaries for all platforms\n- Checksums and signatures\n\nCheck status:\n```bash\ngh run list --workflow=release.yml -L 1\ngh run watch # to monitor live\n```\n\nVerify release exists:\n```bash\ngh release view v0.30.4\n```\n\n\n```verify\ngh release view v0.30.4 --json tagName -q .tagName | grep -q 'v0.30.4'\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T21:19:11.074476-08:00","updated_at":"2025-12-17T21:46:46.311506-08:00","closed_at":"2025-12-17T21:46:46.311506-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.13","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.074833-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.13","depends_on_id":"bd-pbh.12","type":"blocks","created_at":"2025-12-17T21:19:11.279092-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.14","title":"Monitor PyPI publish","description":"Watch the PyPI publish action:\nhttps://github.com/steveyegge/beads/actions/workflows/pypi-publish.yml\n\nVerify at: https://pypi.org/project/beads-mcp/0.30.4/\n\nCheck:\n```bash\npip index versions beads-mcp 2\u003e/dev/null | grep -q '0.30.4'\n```\n\n\n```verify\npip 
index versions beads-mcp 2\u003e/dev/null | grep -q '0.30.4' || curl -s https://pypi.org/pypi/beads-mcp/json | jq -e '.releases[\"0.30.4\"]'\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T21:19:11.083809-08:00","updated_at":"2025-12-17T21:46:46.320922-08:00","closed_at":"2025-12-17T21:46:46.320922-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.14","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.084126-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.14","depends_on_id":"bd-pbh.12","type":"blocks","created_at":"2025-12-17T21:19:11.289698-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.15","title":"Monitor npm publish","description":"Watch the npm publish action:\nhttps://github.com/steveyegge/beads/actions/workflows/npm-publish.yml\n\nVerify at: https://www.npmjs.com/package/@anthropics/claude-code-beads-plugin/v/0.30.4\n\nCheck:\n```bash\nnpm view @anthropics/claude-code-beads-plugin@0.30.4 version\n```\n\n\n```verify\nnpm view @anthropics/claude-code-beads-plugin@0.30.4 version 2\u003e/dev/null | grep -q '0.30.4'\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T21:19:11.091806-08:00","updated_at":"2025-12-17T21:46:46.333213-08:00","closed_at":"2025-12-17T21:46:46.333213-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.15","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.092205-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.15","depends_on_id":"bd-pbh.12","type":"blocks","created_at":"2025-12-17T21:19:11.301843-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.16","title":"Update Homebrew formula","description":"After GoReleaser completes, the Homebrew tap should be auto-updated.\n\nIf manual update needed:\n```bash\n./scripts/update-homebrew.sh v0.30.4\n```\n\nOr manually update steveyegge/homebrew-beads with new 
SHA256.\n\nVerify:\n```bash\nbrew update\nbrew info beads\n```\n","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T21:19:11.100213-08:00","updated_at":"2025-12-17T21:46:46.341942-08:00","closed_at":"2025-12-17T21:46:46.341942-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.16","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.100541-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.16","depends_on_id":"bd-pbh.13","type":"blocks","created_at":"2025-12-17T21:19:11.312625-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.17","title":"Install 0.30.4 Go binary locally","description":"Rebuild and install the Go binary:\n```bash\ngo install ./cmd/bd\n# OR\nmake install\n```\n\nVerify:\n```bash\nbd --version\n```\n\n\n```verify\nbd --version 2\u003e\u00261 | grep -q '0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.108597-08:00","updated_at":"2025-12-17T21:46:46.352702-08:00","closed_at":"2025-12-17T21:46:46.352702-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.17","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.108917-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.17","depends_on_id":"bd-pbh.13","type":"blocks","created_at":"2025-12-17T21:19:11.322091-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.18","title":"Restart beads daemon","description":"Kill any running daemons so they pick up the new version:\n```bash\nbd daemons killall\n```\n\nStart fresh daemon:\n```bash\nbd list # triggers daemon start\n```\n\nVerify daemon version:\n```bash\nbd version --daemon\n```\n\n\n```verify\nbd version --daemon 2\u003e\u00261 | grep -q 
'0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.11636-08:00","updated_at":"2025-12-17T21:46:46.364842-08:00","closed_at":"2025-12-17T21:46:46.364842-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.18","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.116706-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.18","depends_on_id":"bd-pbh.17","type":"blocks","created_at":"2025-12-17T21:19:11.330411-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.19","title":"Install 0.30.4 MCP server locally","description":"Upgrade the MCP server (after PyPI publish):\n```bash\npip install --upgrade beads-mcp\n# OR if using uv:\nuv tool upgrade beads-mcp\n```\n\nVerify:\n```bash\npip show beads-mcp | grep Version\n```\n\n\n```verify\npip show beads-mcp 2\u003e/dev/null | grep -q 'Version: 0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.124496-08:00","updated_at":"2025-12-17T21:46:46.372989-08:00","closed_at":"2025-12-17T21:46:46.372989-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.19","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.124829-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.19","depends_on_id":"bd-pbh.14","type":"blocks","created_at":"2025-12-17T21:19:11.343558-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.2","title":"Update CHANGELOG.md for 0.30.4","description":"1. Change `## [Unreleased]` to `## [0.30.4] - 2025-12-17`\n2. Add new empty `## [Unreleased]` section at top\n3. 
Ensure all changes since 0.30.3 are documented\n","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.956332-08:00","updated_at":"2025-12-17T21:46:46.214512-08:00","closed_at":"2025-12-17T21:46:46.214512-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.2","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.95683-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.20","title":"Update git hooks","description":"Install the updated hooks:\n```bash\nbd hooks install\n```\n\nVerify hook version:\n```bash\ngrep 'bd-hooks-version' .git/hooks/pre-commit\n```\n\n\n```verify\ngrep -q 'bd-hooks-version: 0.30.4' .git/hooks/pre-commit 2\u003e/dev/null || echo 'Hooks may not be installed - verify manually'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.13198-08:00","updated_at":"2025-12-17T21:46:46.381519-08:00","closed_at":"2025-12-17T21:46:46.381519-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.20","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.132306-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.20","depends_on_id":"bd-pbh.17","type":"blocks","created_at":"2025-12-17T21:19:11.352288-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.21","title":"Final release verification","description":"Verify all release artifacts are accessible:\n\n- [ ] `bd --version` shows 0.30.4\n- [ ] `bd version --daemon` shows 0.30.4\n- [ ] GitHub release exists: https://github.com/steveyegge/beads/releases/tag/v0.30.4\n- [ ] `brew upgrade beads \u0026\u0026 bd --version` shows 0.30.4 (if using Homebrew)\n- [ ] `pip show beads-mcp` shows 0.30.4\n- [ ] npm package available at 0.30.4\n- [ ] `bd info --whats-new` shows 0.30.4 notes\n\nRun final checks:\n```bash\nbd --version\nbd version --daemon\npip show beads-mcp | grep Version\nbd info 
--whats-new\n```\n","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.141249-08:00","updated_at":"2025-12-17T21:46:46.390985-08:00","closed_at":"2025-12-17T21:46:46.390985-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.141549-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.18","type":"blocks","created_at":"2025-12-17T21:19:11.364839-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.19","type":"blocks","created_at":"2025-12-17T21:19:11.373656-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.20","type":"blocks","created_at":"2025-12-17T21:19:11.382-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.15","type":"blocks","created_at":"2025-12-17T21:19:11.389733-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.16","type":"blocks","created_at":"2025-12-17T21:19:11.398347-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.3","title":"Add 0.30.4 to info.go release notes","description":"Update cmd/bd/info.go versionChanges map with release notes for 0.30.4.\nInclude any workflow-impacting changes for --whats-new output.\n","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.966781-08:00","updated_at":"2025-12-17T21:46:46.222445-08:00","closed_at":"2025-12-17T21:46:46.222445-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.3","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.967287-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-pbh.3","depends_on_id":"bd-pbh.2","type":"blocks","created_at":"2025-12-17T21:19:11.149584-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.4","title":"Update 
.claude-plugin/plugin.json to 0.30.4","description":"Update version field in .claude-plugin/plugin.json:\n```json\n\"version\": \"0.30.4\"\n```\n\n\n```verify\njq -e '.version == \"0.30.4\"' .claude-plugin/plugin.json\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.976866-08:00","updated_at":"2025-12-17T21:46:46.23159-08:00","closed_at":"2025-12-17T21:46:46.23159-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.4","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.97729-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.5","title":"Update .claude-plugin/marketplace.json to 0.30.4","description":"Update version field in .claude-plugin/marketplace.json:\n```json\n\"version\": \"0.30.4\"\n```\n\n\n```verify\njq -e '.plugins[0].version == \"0.30.4\"' .claude-plugin/marketplace.json\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.985619-08:00","updated_at":"2025-12-17T21:46:46.239122-08:00","closed_at":"2025-12-17T21:46:46.239122-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.5","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.985942-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.6","title":"Update integrations/beads-mcp/pyproject.toml to 0.30.4","description":"Update version in pyproject.toml:\n```toml\nversion = \"0.30.4\"\n```\n\n\n```verify\ngrep -q 'version = \"0.30.4\"' integrations/beads-mcp/pyproject.toml\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.994004-08:00","updated_at":"2025-12-17T21:46:46.246574-08:00","closed_at":"2025-12-17T21:46:46.246574-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.6","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.994376-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.7","title":"Update beads_mcp/__init__.py to 
0.30.4","description":"Update __version__ in integrations/beads-mcp/src/beads_mcp/__init__.py:\n```python\n__version__ = \"0.30.4\"\n```\n\n\n```verify\ngrep -q '__version__ = \"0.30.4\"' integrations/beads-mcp/src/beads_mcp/__init__.py\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.005334-08:00","updated_at":"2025-12-17T21:46:46.254885-08:00","closed_at":"2025-12-17T21:46:46.254885-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.7","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.005699-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.8","title":"Update npm-package/package.json to 0.30.4","description":"Update version field in npm-package/package.json:\n```json\n\"version\": \"0.30.4\"\n```\n\n\n```verify\njq -e '.version == \"0.30.4\"' npm-package/package.json\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.014905-08:00","updated_at":"2025-12-17T21:46:46.268821-08:00","closed_at":"2025-12-17T21:46:46.268821-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.8","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.01529-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pbh.9","title":"Update hook templates to 0.30.4","description":"Update bd-hooks-version comment in all 4 hook templates:\n- cmd/bd/templates/hooks/pre-commit\n- cmd/bd/templates/hooks/post-merge\n- cmd/bd/templates/hooks/pre-push\n- cmd/bd/templates/hooks/post-checkout\n\nEach should have:\n```bash\n# bd-hooks-version: 0.30.4\n```\n\n\n```verify\ngrep -l 'bd-hooks-version: 0.30.4' cmd/bd/templates/hooks/* | wc -l | grep -q 
'4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.0248-08:00","updated_at":"2025-12-17T21:46:46.27561-08:00","closed_at":"2025-12-17T21:46:46.27561-08:00","labels":["workflow"],"dependencies":[{"issue_id":"bd-pbh.9","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.025124-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-pdr2","title":"Consider backwards compatibility for ready() and list() return type change","description":"PR #481 changed the return types of `ready()` and `list()` from `list[Issue]` to `list[IssueMinimal] | CompactedResult`. This is a breaking change for MCP clients.\n\n## Impact Assessment\nBreaking change affects:\n- Any MCP client expecting `list[Issue]` from ready()\n- Any MCP client expecting `list[Issue]` from list()\n- Client code that accesses full Issue fields (description, design, acceptance_criteria, timestamps, dependencies, dependents)\n\n## Current Behavior\n- ready() returns `list[IssueMinimal] | CompactedResult`\n- list() returns `list[IssueMinimal] | CompactedResult`\n- show() still returns full `Issue` (good)\n\n## Considerations\n**Pros of current approach:**\n- Forces clients to use show() for full details (good for context efficiency)\n- Simple mental model (always use show for full data)\n- Documentation warns about this\n\n**Cons:**\n- Clients expecting list[Issue] will break\n- No graceful degradation option\n- No migration period\n\n## Potential Solutions\n1. Add optional parameter `full_details=false` to ready/list (would increase payload)\n2. Create separate tools: ready_minimal/list_minimal + ready_full/list_full\n3. Accept breaking change and document upgrade path (current approach)\n4. 
Version the MCP server and document migration guide\n\n## Recommendation\nCurrent approach (solution 3) is reasonable if:\n- Changelog clearly documents the breaking change\n- Migration guide provided to clients\n- Error handling is graceful for clients expecting specific fields","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:24:56.460465-08:00","updated_at":"2025-12-14T14:24:56.460465-08:00","dependencies":[{"issue_id":"bd-pdr2","depends_on_id":"bd-otf4","type":"discovered-from","created_at":"2025-12-14T14:24:56.461959-08:00","created_by":"stevey","metadata":"{}"}]} -{"id":"bd-pe4s","title":"JSON test issue","description":"Line 1\nLine 2\nLine 3","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T16:14:36.969074-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-pgcs","title":"Clean up orphaned child issues (bd-cb64c226.*, bd-cbed9619.*)","description":"## Problem\n\nEvery bd command shows warnings about 12 orphaned child issues:\n- bd-cb64c226.1, .6, .8, .9, .10, .12, .13\n- bd-cbed9619.1, .2, .3, .4, .5\n\nThese are hierarchical IDs (parent.child format) where the parent issues no longer exist.\n\n## Impact\n\n- Clutters output of every bd command\n- Confusing for users\n- Indicates incomplete cleanup of deleted parent issues\n\n## Proposed Solution\n\n1. Delete the orphaned issues since their parents no longer exist:\n ```bash\n bd delete bd-cb64c226.1 bd-cb64c226.6 bd-cb64c226.8 ...\n ```\n\n2. 
Or convert them to top-level issues if they contain useful content\n\n## Investigation Needed\n\n- What were the parent issues bd-cb64c226 and bd-cbed9619?\n- Why were they deleted without their children?\n- Should bd delete cascade to children automatically?","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T23:06:17.240571-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-phtv","title":"bd pin: pinned field overwritten by subsequent bd commands","description":"## Summary\n\nThe `bd pin` command correctly sets `pinned=1` in SQLite, but any subsequent `bd` command (including read-only commands like `bd show`) resets `pinned` to 0.\n\n## Reproduction Steps\n\n```bash\nbd --no-daemon pin \u003cissue-id\u003e --for=max\nsqlite3 .beads/beads.db \"SELECT id, pinned FROM issues WHERE id=\\\"\u003cissue-id\u003e\\\"\"\n# Shows pinned=1 βœ“\n\nbd --no-daemon show \u003cissue-id\u003e --json\nsqlite3 .beads/beads.db \"SELECT id, pinned FROM issues WHERE id=\\\"\u003cissue-id\u003e\\\"\"\n# Shows pinned=0 βœ— WRONG\n```\n\n## Root Cause Investigation\n\n### Prime Suspects\n\n1. **JSONL import overwrites DB** - The `pinned` field has `omitempty` so false values arent in JSONL. When JSONL is imported, it overwrites the DB pinned=1 with default pinned=0.\n\n2. 
**Files to check:**\n - `internal/importer/importer.go` - ImportIssue() may unconditionally set all fields\n - `internal/storage/sqlite/issues.go` - UpsertIssue() may not preserve pinned\n - `cmd/bd/main.go` - ensureStoreActive() may trigger import\n\n### Debug Steps\n\n```bash\n# Add debug logging to track what is writing pinned=0\ngrep -rn \"pinned\" internal/storage/sqlite/*.go\ngrep -rn \"Pinned\" internal/importer/*.go\n```\n\n## Likely Fix\n\nIn `internal/importer/importer.go` or `internal/storage/sqlite/issues.go`:\n\n```go\n// When upserting from JSONL, preserve pinned field if already set\nfunc (s *SQLiteStorage) UpsertIssue(ctx context.Context, issue *types.Issue) error {\n // Check if issue exists and is pinned\n existing, _ := s.GetIssue(ctx, issue.ID)\n if existing != nil \u0026\u0026 existing.Pinned \u0026\u0026 !issue.Pinned {\n // Preserve existing pinned status\n issue.Pinned = existing.Pinned\n }\n // ... rest of upsert\n}\n```\n\nOR the import should skip fields that are omitempty and not present in JSONL:\n\n```go\n// In importer, only update fields that are explicitly set in JSONL\n// Pinned with omitempty means absent = dont change, not absent = false\n```\n\n## Testing\n\n```bash\n# After fix:\nbd --no-daemon pin \u003cissue-id\u003e --for=max\nbd --no-daemon show \u003cissue-id\u003e --json # Should not reset pinned\nbd list --pinned # Should show the pinned issue\nbd hook --agent max # Should show pinned work\n```\n\n## Files to Modify\n\n1. **internal/importer/importer.go** - Preserve pinned on import\n2. **internal/storage/sqlite/issues.go** - UpsertIssue preserve pinned\n3. 
**Add test** in internal/importer/importer_test.go\n\n## Success Criteria\n- `bd pin` survives subsequent bd commands\n- `bd list --pinned` shows pinned issues\n- `bd hook --agent X` shows pinned work\n- Existing tests still pass","status":"closed","priority":1,"issue_type":"bug","assignee":"beads/Pinner","created_at":"2025-12-23T12:32:20.046988-08:00","updated_at":"2025-12-23T13:47:49.936021-08:00","closed_at":"2025-12-23T13:47:49.936021-08:00","close_reason":"Fixed two code paths in importer.go and multirepo.go that overwrote pinned field. Tests pass. May need follow-up if bug persists.","labels":["export:pinned-field-fix"],"dependencies":[{"issue_id":"bd-phtv","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.140151-08:00","created_by":"daemon"}]} -{"id":"bd-phwd","title":"Add timeout message for long-running git push operations","description":"When git push hangs waiting for credential/browser auth, show a periodic message to the user instead of appearing frozen. Add timeout messaging after N seconds of inactivity during git operations.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T11:44:57.318984535-07:00","updated_at":"2025-12-21T11:46:05.218023559-07:00","closed_at":"2025-12-21T11:46:05.218023559-07:00"} -{"id":"bd-pn0t","title":"Add 0.33.2 to versionChanges in info.go","description":"Add new entry at TOP of versionChanges in cmd/bd/info.go with release notes from CHANGELOG.md. Must do before bump-version.sh --commit.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760056-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-psg","title":"Add tests for dependency management","description":"Key dependency functions like mergeBidirectionalTrees, GetDependencyTree, and DetectCycles have low or no coverage. 
These are essential for maintaining data integrity in the dependency graph.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:43.458548462-07:00","updated_at":"2025-12-19T09:54:57.018745301-07:00","closed_at":"2025-12-18T10:24:56.271508339-07:00","dependencies":[{"issue_id":"bd-psg","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:43.463910911-07:00","created_by":"matt"}]} -{"id":"bd-pvu0","title":"Merge: bd-4opy","description":"branch: polecat/angharad\ntarget: main\nsource_issue: bd-4opy\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T00:24:44.057267-08:00","updated_at":"2025-12-23T01:33:25.730271-08:00","closed_at":"2025-12-23T01:33:25.730271-08:00","close_reason":"Merged to main"} -{"id":"bd-pzw7","title":"gt handoff deadlock at handoff.go:125","notes":"When running 'gt handoff -m \"message\"' after successful MR submit, go panics with 'fatal error: all goroutines are asleep - deadlock\\!' at handoff.go:125. The shutdown request still appears to be sent successfully but the command crashes. Stack trace shows issue is in runHandoff select statement.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-19T23:22:12.46315-08:00","updated_at":"2025-12-21T17:51:25.817355-08:00","closed_at":"2025-12-21T17:51:25.817355-08:00","close_reason":"Moved to gastown: gt-dich"} -{"id":"bd-qioh","title":"Standardize error handling: replace direct fmt.Fprintf+os.Exit with FatalError","description":"Standardize error handling in cmd/bd/ using FatalError pattern.\n\n## Current State\n~200+ instances of direct `fmt.Fprintf(os.Stderr, ...) 
+ os.Exit(1)` pattern scattered across cmd/bd/*.go files.\n\n## Target Pattern\n\nUse existing FatalError helper (or create if missing):\n\n```go\n// In cmd/bd/helpers.go or similar\nfunc FatalError(format string, args ...interface{}) {\n fmt.Fprintf(os.Stderr, \"Error: \"+format+\"\\n\", args...)\n os.Exit(1)\n}\n\nfunc FatalErrorf(err error, context string) {\n fmt.Fprintf(os.Stderr, \"Error: %s: %v\\n\", context, err)\n os.Exit(1)\n}\n```\n\n## Transformation\n\n```go\n// Before\nfmt.Fprintf(os.Stderr, \"Error: %v\\n\", err)\nos.Exit(1)\n\n// After\nFatalError(\"%v\", err)\n\n// Before\nfmt.Fprintf(os.Stderr, \"Error: invalid --since duration: %v\\n\", err)\nos.Exit(1)\n\n// After\nFatalError(\"invalid --since duration: %v\", err)\n```\n\n## Files to Update (by occurrence count)\nRun: `grep -c \"os.Exit(1)\" cmd/bd/*.go | sort -t: -k2 -rn | head -20`\n\nPriority files (highest occurrence):\n- close.go\n- show.go\n- sync.go\n- init.go\n- list.go\n- create.go\n- update.go\n\n## Implementation Steps\n\n1. **Check if FatalError exists** in cmd/bd/helpers.go or create it\n2. **Create migration script** or use sed:\n ```bash\n # Find patterns to replace\n grep -rn \"fmt.Fprintf(os.Stderr.*Error.*\\n.*os.Exit(1)\" cmd/bd/\n ```\n3. **Replace systematically** file by file\n4. **Run tests** after each file to verify behavior unchanged\n5. 
**Run linter** to catch any missed patterns\n\n## Verification\n```bash\n# Count remaining direct exits (should be near zero)\ngrep -c \"os.Exit(1)\" cmd/bd/*.go | awk -F: \"{sum+=\\$2} END {print sum}\"\n\n# Run tests\ngo test -short ./cmd/bd/...\n```\n\n## Success Criteria\n- All error exits use FatalError/FatalErrorf\n- Consistent \"Error: \" prefix on all error messages\n- Tests pass\n- No behavior changes (exit codes remain 1)","notes":"## Progress (Dec 2025)\n\nStandardized error handling in 3 major files:\n- compact.go: All 48 os.Exit(1) calls converted to FatalError\n- sync.go: All error patterns converted (kept 1 valid summary exit)\n- migrate.go: 4 patterns converted\n\n## Remaining Work\n~326 fmt.Fprintf(os.Stderr, \"Error:\") patterns remain across ~30 files.\nHigh-count files remaining: show.go (16), dep.go (14), gate.go (28), init.go (11).\n\n## Verification\n- Build compiles successfully\n- Tests pass","status":"closed","priority":2,"issue_type":"task","assignee":"beads/Errata","created_at":"2025-12-16T18:17:19.309394-08:00","updated_at":"2025-12-23T14:14:37.939802-08:00","closed_at":"2025-12-23T14:14:37.939802-08:00","close_reason":"Error handling standardization - polecat completed, MR submitted","dependencies":[{"issue_id":"bd-qioh","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.43514-08:00","created_by":"daemon"}]} -{"id":"bd-qkw9","title":"Run bump-version.sh {{version}}","description":"Run ./scripts/bump-version.sh {{version}} to update version in all files","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:55:58.841424-08:00","updated_at":"2025-12-20T17:59:26.262877-08:00","closed_at":"2025-12-20T01:18:48.99813-08:00","dependencies":[{"issue_id":"bd-qkw9","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:14.786027-08:00","created_by":"daemon"}]} -{"id":"bd-qqc","title":"Release v{{version}}","description":"Version bump workflow for beads release 
{{version}}.\n\n## Variables\n- `{{version}}` - The new version number (e.g., 0.31.0)\n- `{{date}}` - Release date (YYYY-MM-DD format)\n\n## Workflow Steps\n1. Kill running daemons\n2. Run tests and linting\n3. Bump version in all files (10 files total)\n4. Update cmd/bd/info.go with release notes\n5. Commit and push version bump\n6. Create and push git tag\n7. Update Homebrew formula\n8. Upgrade local Homebrew installation\n9. Verify installation\n\n## Files Updated by bump-version.sh\n- cmd/bd/version.go\n- .claude-plugin/plugin.json\n- .claude-plugin/marketplace.json\n- integrations/beads-mcp/pyproject.toml\n- integrations/beads-mcp/src/beads_mcp/__init__.py\n- README.md\n- npm-package/package.json\n- cmd/bd/templates/hooks/* (4 files)\n- CHANGELOG.md\n\n## Manual Step Required\n- cmd/bd/info.go - Add versionChanges entry with release notes","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-18T12:59:00.610371-08:00","updated_at":"2025-12-20T17:59:26.263219-08:00","closed_at":"2025-12-20T01:18:46.71424-08:00","labels":["template"]} -{"id":"bd-qqc.1","title":"Update version to {{version}} in version.go","description":"Edit cmd/bd/version.go line 17:\n\n```go\nVersion = \"{{version}}\"\n```\n\nVerify with: `grep 'Version =' cmd/bd/version.go`","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:59:13.887087-08:00","updated_at":"2025-12-18T23:34:18.630067-08:00","closed_at":"2025-12-18T22:41:41.82664-08:00","dependencies":[{"issue_id":"bd-qqc.1","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T12:59:13.887655-08:00","created_by":"stevey"}]} -{"id":"bd-qqc.10","title":"Upgrade local Homebrew installation","description":"Upgrade bd via Homebrew:\n\n```bash\nbrew update\nbrew upgrade bd\n/opt/homebrew/bin/bd version # Verify shows {{version}}\n```\n\n**Note**: If `brew upgrade` fails with CLT (Command Line Tools) errors on bleeding-edge macOS versions (e.g., Tahoe 26.x):\n\n```bash\n# Reinstall 
CLT\nsudo rm -rf /Library/Developer/CommandLineTools\nxcode-select --install\n# Wait for GUI installer to complete, then retry brew upgrade\n```\n\nAlternative: Skip Homebrew and use go install:\n```bash\ngo install ./cmd/bd\n~/go/bin/bd version # Verify shows {{version}}\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:37.60241-08:00","updated_at":"2025-12-22T12:32:23.678806-08:00","closed_at":"2025-12-18T22:52:00.331429-08:00","dependencies":[{"issue_id":"bd-qqc.10","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:42:37.603893-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.10","depends_on_id":"bd-qqc.9","type":"blocks","created_at":"2025-12-18T22:43:21.458817-08:00","created_by":"daemon"}]} -{"id":"bd-qqc.11","title":"Update go install bd to {{version}}","description":"Rebuild and install bd to ~/go/bin:\n\n```bash\ngo install ./cmd/bd\n~/go/bin/bd version # Verify shows {{version}}\n```\n\nNote: If ~/go/bin is in PATH before /opt/homebrew/bin, this is the version that runs by default.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:07:55.838013-08:00","updated_at":"2025-12-18T23:09:05.775582-08:00","closed_at":"2025-12-18T23:09:05.775582-08:00","dependencies":[{"issue_id":"bd-qqc.11","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T23:07:55.838432-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.11","depends_on_id":"bd-qqc.10","type":"blocks","created_at":"2025-12-18T23:08:19.629947-08:00","created_by":"daemon"}]} -{"id":"bd-qqc.12","title":"Restart daemon with {{version}}","description":"Restart the bd daemon to pick up new version:\n\n```bash\nbd daemon --stop\nbd daemon --start\nbd daemon --health # Verify Version: 
{{version}}\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:08:04.155448-08:00","updated_at":"2025-12-18T23:09:05.777375-08:00","closed_at":"2025-12-18T23:09:05.777375-08:00","dependencies":[{"issue_id":"bd-qqc.12","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T23:08:04.155832-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.12","depends_on_id":"bd-qqc.11","type":"blocks","created_at":"2025-12-18T23:08:19.779897-08:00","created_by":"daemon"}]} -{"id":"bd-qqc.13","title":"Upgrade beads-mcp to {{version}}","description":"Upgrade the MCP server via pip:\n\n```bash\npip install --upgrade beads-mcp\npip show beads-mcp | grep Version # Verify {{version}}\n```\n\nNote: Restart Claude Code or MCP session to use new version.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:08:04.318233-08:00","updated_at":"2025-12-18T23:09:05.77824-08:00","closed_at":"2025-12-18T23:09:05.77824-08:00","dependencies":[{"issue_id":"bd-qqc.13","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T23:08:04.318709-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.13","depends_on_id":"bd-qqc.11","type":"blocks","created_at":"2025-12-18T23:08:19.927825-08:00","created_by":"daemon"}]} -{"id":"bd-qqc.2","title":"Add {{version}} to versionChanges in info.go","description":"Add new entry at the TOP of versionChanges array in cmd/bd/info.go:\n\n```go\n{\n Version: \"{{version}}\",\n Date: \"{{date}}\",\n Changes: []string{\n // Add notable changes here\n },\n},\n```\n\nCopy changes from CHANGELOG.md [Unreleased] section.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:59:27.032117-08:00","updated_at":"2025-12-18T23:34:18.631996-08:00","closed_at":"2025-12-18T22:41:41.836137-08:00","dependencies":[{"issue_id":"bd-qqc.2","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T12:59:27.032746-08:00","created_by":"stevey"}]} 
-{"id":"bd-qqc.3","title":"Update CHANGELOG.md for {{version}}","description":"In CHANGELOG.md:\n\n1. Change `## [Unreleased]` section header to `## [{{version}}] - {{date}}`\n2. Add new empty `## [Unreleased]` section above it\n3. Review and clean up the changes list\n\nFormat:\n```markdown\n## [Unreleased]\n\n## [{{version}}] - {{date}}\n\n### Added\n- ...\n\n### Changed\n- ...\n\n### Fixed\n- ...\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:59:39.738561-08:00","updated_at":"2025-12-18T23:34:18.629213-08:00","closed_at":"2025-12-18T22:41:41.846609-08:00","dependencies":[{"issue_id":"bd-qqc.3","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T12:59:39.739138-08:00","created_by":"stevey"}]} -{"id":"bd-qqc.4","title":"Run tests and verify build","description":"Run the test suite to verify nothing is broken:\n\n```bash\n./scripts/test.sh\n```\n\nOr manually:\n```bash\ngo build ./cmd/bd/...\ngo test ./...\n```\n\nFix any failures before proceeding.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:59:52.308314-08:00","updated_at":"2025-12-18T23:34:18.631671-08:00","closed_at":"2025-12-18T22:41:41.856318-08:00","dependencies":[{"issue_id":"bd-qqc.4","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T12:59:52.308943-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.4","depends_on_id":"bd-qqc.1","type":"blocks","created_at":"2025-12-18T13:00:40.62142-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.4","depends_on_id":"bd-qqc.2","type":"blocks","created_at":"2025-12-18T13:00:45.820132-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.4","depends_on_id":"bd-qqc.3","type":"blocks","created_at":"2025-12-18T13:00:51.014568-08:00","created_by":"stevey"}]} -{"id":"bd-qqc.5","title":"Commit release v{{version}}","description":"Stage and commit the version bump:\n\n```bash\ngit add cmd/bd/version.go cmd/bd/info.go CHANGELOG.md\ngit commit -m \"release: 
v{{version}}\"\n```\n\nDo NOT push yet - tag first.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T13:00:04.097628-08:00","updated_at":"2025-12-18T23:34:18.630946-08:00","closed_at":"2025-12-18T22:41:41.864839-08:00","dependencies":[{"issue_id":"bd-qqc.5","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T13:00:04.098265-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.5","depends_on_id":"bd-qqc.4","type":"blocks","created_at":"2025-12-18T13:01:02.275008-08:00","created_by":"stevey"}]} -{"id":"bd-qqc.6","title":"Create git tag v{{version}}","description":"Create the release tag:\n\n```bash\ngit tag v{{version}}\n```\n\nVerify: `git tag | grep {{version}}`","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T13:00:15.495086-08:00","updated_at":"2025-12-18T23:34:18.632308-08:00","closed_at":"2025-12-18T22:41:41.874099-08:00","dependencies":[{"issue_id":"bd-qqc.6","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T13:00:15.496036-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.6","depends_on_id":"bd-qqc.5","type":"blocks","created_at":"2025-12-18T13:01:07.478315-08:00","created_by":"stevey"}]} -{"id":"bd-qqc.7","title":"Push release v{{version}} to remote","description":"Push the commit and tag:\n\n```bash\ngit push \u0026\u0026 git push --tags\n```\n\nVerify on GitHub that the tag appears in releases.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T13:00:26.933082-08:00","updated_at":"2025-12-18T23:34:18.630538-08:00","closed_at":"2025-12-18T22:41:41.882956-08:00","dependencies":[{"issue_id":"bd-qqc.7","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T13:00:26.933687-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.7","depends_on_id":"bd-qqc.6","type":"blocks","created_at":"2025-12-18T13:01:12.711161-08:00","created_by":"stevey"}]} -{"id":"bd-qqc.8","title":"Create and push git tag v{{version}}","description":"Create 
the release tag and push it:\n\n```bash\ngit tag v{{version}}\ngit push origin v{{version}}\n```\n\nThis triggers the GoReleaser GitHub Action to build release binaries.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:34.659927-08:00","updated_at":"2025-12-18T22:47:14.054232-08:00","closed_at":"2025-12-18T22:47:14.054232-08:00","dependencies":[{"issue_id":"bd-qqc.8","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:42:34.660248-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.8","depends_on_id":"bd-vgi5","type":"blocks","created_at":"2025-12-18T22:43:21.209529-08:00","created_by":"daemon"}]} {"id":"bd-qqc.9","title":"Update Homebrew formula","description":"Update the Homebrew tap with new version:\n\n```bash\n./scripts/update-homebrew.sh {{version}}\n```\n\nThis script waits for GitHub Actions to complete (~5 min), then updates the formula with new SHA256 hashes.\n\nAfter running, verify the formula with:\n\n```bash\nbrew info steveyegge/beads/bd\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:35.815096-08:00","updated_at":"2025-12-22T13:13:42.442952-08:00","closed_at":"2025-12-18T22:51:07.863862-08:00","dependencies":[{"issue_id":"bd-qqc.9","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:42:35.816752-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.9","depends_on_id":"bd-qqc.8","type":"blocks","created_at":"2025-12-18T22:43:21.332955-08:00","created_by":"daemon"}]} -{"id":"bd-r06v","title":"Merge: bd-phtv","description":"branch: polecat/Pinner\ntarget: main\nsource_issue: bd-phtv\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:48:16.853715-08:00","updated_at":"2025-12-23T19:12:08.342414-08:00","closed_at":"2025-12-23T19:12:08.342414-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-r2n1","title":"Add integration tests for 
RPC server and event loops","description":"After adding basic unit tests for daemon utilities, the complex daemon functions still need integration tests:\n\nCore daemon lifecycle:\n- startRPCServer: Initializes and starts RPC server with proper error handling\n- runEventLoop: Polling-based sync loop with parent monitoring and signal handling\n- runDaemonLoop: Main daemon initialization and setup\n\nHealth checking:\n- isDaemonHealthy: Checks daemon responsiveness and health metrics\n- checkDaemonHealth: Periodic health verification\n\nThese require more complex test infrastructure (mock RPC, test contexts, signal handling) and should be tackled after the unit test foundation is in place.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:28:56.022996362-07:00","updated_at":"2025-12-18T12:44:32.167862713-07:00","closed_at":"2025-12-18T12:44:32.167862713-07:00","dependencies":[{"issue_id":"bd-r2n1","depends_on_id":"bd-4or","type":"discovered-from","created_at":"2025-12-18T12:28:56.045893852-07:00","created_by":"mhwilkie"}]} -{"id":"bd-r36u","title":"gt mq list shows empty when MRs exist","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-20T01:13:07.561256-08:00","updated_at":"2025-12-21T17:51:25.891037-08:00","closed_at":"2025-12-21T17:51:25.891037-08:00","close_reason":"Moved to gastown: gt-uhc3"} -{"id":"bd-r46","title":"Support --reason flag in daemon mode for reopen command","description":"The reopen.go command has a TODO at line 61 to add reason as a comment once RPC supports AddComment. 
Currently --reason flag is ignored in daemon mode with a warning.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-21T18:55:10.773626-05:00","updated_at":"2025-12-21T21:47:15.43375-08:00","closed_at":"2025-12-21T21:47:15.43375-08:00","close_reason":"Already implemented in commit 78847147: reopen command now adds reason as comment in daemon mode via RPC AddComment"} -{"id":"bd-r4sn","title":"Phase 2.5: TOON-based daemon sync","description":"Implement TOON-native daemon sync (replaces JSONL sync machinery).\n\n## Overview\nDaemon sync is the final integration point. Replace export/import/merge machinery with TOON-native sync, building on deletion tracking (2.3) and merge optimization (2.4).\n\n## Required Work\n\n### 2.5.1 TOON-based Daemon Sync\n- [ ] Understand current JSONL sync machinery (export.go, import.go, merge.go)\n- [ ] Replace export step with TOON encoding (EncodeTOON)\n- [ ] Replace import step with TOON decoding (DecodeTOON)\n- [ ] Replace merge step with TOON-aware 3-way merge\n- [ ] Update daemon auto-sync to read/write TOON\n- [ ] Verify 5-second debounce still works\n\n### 2.5.2 Deletion Sync Integration\n- [ ] Load deletions.toon during import phase\n- [ ] Apply deletions after merging issues\n- [ ] Ensure deletion TTL respects daemon schedule\n\n### 2.5.3 Testing\n- [ ] Unit tests for daemon sync with TOON\n- [ ] Integration tests with actual daemon operations\n- [ ] Multi-clone sync scenarios with concurrent edits\n- [ ] Performance comparison with JSONL sync\n- [ ] Long-running daemon stability tests\n\n## Success Criteria\n- Daemon reads/writes TOON format (not JSONL)\n- Sync latency comparable to JSONL (\u003c100ms)\n- All 70+ tests passing\n- bdt commands work seamlessly with daemon\n- Multi-clone sync scenarios work 
correctly","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:43:20.33132177-07:00","updated_at":"2025-12-21T14:42:25.274362-08:00","closed_at":"2025-12-21T14:42:25.274362-08:00","close_reason":"TOON approach declined","dependencies":[{"issue_id":"bd-r4sn","depends_on_id":"bd-uz8r","type":"blocks","created_at":"2025-12-19T14:43:20.347724699-07:00","created_by":"daemon"},{"issue_id":"bd-r4sn","depends_on_id":"bd-uwkp","type":"blocks","created_at":"2025-12-19T14:43:20.355379309-07:00","created_by":"daemon"}]} -{"id":"bd-r6a","title":"Redesign workflow system: templates as native Beads","description":"## Problem\n\nThe current workflow system (YAML templates in cmd/bd/templates/workflows/) is architecturally flawed:\n\n1. **Out-of-band data plane** - YAML files are a parallel system outside Beads itself\n2. **Heavyweight DSL** - YAML is gross; even TOML would have been better, but neither is ideal\n3. **Not graph-native** - Beads IS already a dependency graph with priorities, so why reinvent it?\n4. **Can't use bd commands on templates** - They're opaque YAML, not viewable/editable Beads\n\n## The Right Design\n\n**Templates should be Beads themselves.**\n\nA \"workflow template\" should be:\n- An epic marked as a template (via label, type, or prefix like `tpl-`)\n- Child issues with dependencies between them (using normal bd dep)\n- Titles and descriptions containing `{{variable}}` placeholders\n- Normal priorities that control serialization order\n\n\"Instantiation\" becomes:\n1. Clone the template subgraph (epic + children + dependencies)\n2. Substitute variables in titles/descriptions\n3. Generate new IDs for all cloned issues\n4. 
Return the new epic ID\n\n## Benefits\n\n- **No YAML** - Templates are just Beads\n- **Use existing tools** - `bd show`, `bd edit`, `bd dep` work on templates\n- **Graph-native** - Dependencies are real Beads dependencies\n- **Simpler codebase** - Remove all the YAML parsing/workflow code\n- **Composable** - Templates can reference other templates\n\n## Tasks\n\n1. Delete the YAML workflow system code (revert recent push + remove existing workflow code)\n2. Design template marking convention (label? type? id prefix?)\n3. Implement `bd template create` or `bd clone --as-template`\n4. Implement `bd template instantiate \u003ctemplate-id\u003e --var key=value`\n5. Migrate version-bump workflow to native Beads template\n6. Update documentation","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-17T22:41:57.359643-08:00","updated_at":"2025-12-18T17:42:26.000769-08:00","closed_at":"2025-12-18T13:47:04.632525-08:00"} -{"id":"bd-r6a.1","title":"Revert/remove YAML workflow system","description":"Revert the recent commit and remove all YAML workflow code:\n\n1. `git revert aae8407a` (the commit we just pushed with workflow fixes)\n2. Remove `cmd/bd/templates/workflows/` directory\n3. Remove workflow.go or gut it to minimal stub\n4. Remove WorkflowTemplate types from internal/types/workflow.go\n5. Remove any workflow-related RPC handlers\n\nKeep only minimal scaffolding if needed for the new template system.","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-17T22:42:07.339684-08:00","updated_at":"2025-12-17T22:46:08.606088-08:00","closed_at":"2025-12-17T22:46:08.606088-08:00","dependencies":[{"issue_id":"bd-r6a.1","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:42:07.340117-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-r6a.2","title":"Implement subgraph cloning with variable substitution","description":"Core implementation of template instantiation:\n\n1. 
Add `bd template instantiate \u003ctemplate-id\u003e [--var key=value]...` command\n2. Implement subgraph loading:\n - Load template epic\n - Recursively load all children (and their children)\n - Load all dependencies between issues in the subgraph\n3. Implement variable substitution:\n - Scan titles and descriptions for `{{name}}` patterns\n - Replace with provided values\n - Error on missing required variables (or prompt interactively)\n4. Implement cloning:\n - Generate new IDs for all issues\n - Create cloned issues with substituted text\n - Remap and create dependencies\n5. Return the new epic ID\n\nConsider adding `--dry-run` flag to preview what would be created.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T22:43:25.179848-08:00","updated_at":"2025-12-17T23:02:29.034444-08:00","closed_at":"2025-12-17T23:02:29.034444-08:00","dependencies":[{"issue_id":"bd-r6a.2","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:43:25.180286-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-r6a.2","depends_on_id":"bd-r6a.1","type":"blocks","created_at":"2025-12-17T22:44:03.15413-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-r6a.3","title":"Create version-bump template as native Beads","description":"Migrate the version-bump workflow from YAML to a native Beads template:\n\n1. Create epic with template label: Release {{version}}\n2. Create child tasks for each step (update version files, changelog, commit, push, publish)\n3. Set up dependencies between tasks\n4. 
Add verification commands in task descriptions\n\nThis serves as both migration and validation of the new system.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T22:43:40.694931-08:00","updated_at":"2025-12-18T17:42:26.001149-08:00","closed_at":"2025-12-18T13:02:09.039457-08:00","dependencies":[{"issue_id":"bd-r6a.3","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:43:40.695392-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-r6a.3","depends_on_id":"bd-r6a.2","type":"blocks","created_at":"2025-12-17T22:44:03.311902-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-r6a.4","title":"Add bd template list command","description":"Add a convenience command to list available templates:\n\nbd template list\n\nThis is equivalent to 'bd list --label=template' but more discoverable.\nCould also show variable placeholders found in each template.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T22:43:47.525316-08:00","updated_at":"2025-12-17T23:02:45.700582-08:00","closed_at":"2025-12-17T23:02:45.700582-08:00","dependencies":[{"issue_id":"bd-r6a.4","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:43:47.525743-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-r6a.4","depends_on_id":"bd-r6a.2","type":"blocks","created_at":"2025-12-17T22:44:03.474353-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-r6a.5","title":"Update documentation for template system","description":"Update AGENTS.md and help text to document the new template system:\n\n- How to create a template (epic + template label + child issues)\n- How to define variables (just use {{name}} placeholders)\n- How to instantiate (bd template instantiate)\n- Migration from YAML workflows (if any users had custom 
ones)","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-17T22:43:55.461345-08:00","updated_at":"2025-12-18T17:42:26.001474-08:00","closed_at":"2025-12-18T13:46:53.446262-08:00","dependencies":[{"issue_id":"bd-r6a.5","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:43:55.461763-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-r6a.5","depends_on_id":"bd-r6a.3","type":"blocks","created_at":"2025-12-17T22:44:03.632404-08:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-r6a.5","depends_on_id":"bd-r6a.4","type":"blocks","created_at":"2025-12-17T22:44:03.788517-08:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-rdzk","title":"Merge: bd-rgyd","description":"branch: polecat/Splitter\ntarget: main\nsource_issue: bd-rgyd\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:41:15.051877-08:00","updated_at":"2025-12-23T19:12:08.351145-08:00","closed_at":"2025-12-23T19:12:08.351145-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-rece","title":"Phase 1.1: TOON Library Integration - Add gotoon dependency","description":"Add gotoon (github.com/alpkeskin/gotoon) to go.mod and create internal/toon wrapper package for TOON encoding/decoding. This enables bdtoon to encode Issue structs to TOON format and decode TOON back to issues.\n\n## Subtasks\n1. Add gotoon dependency: go get github.com/alpkeskin/gotoon\n2. Create internal/toon package with wrapper functions\n3. Write encode tests for Issue struct round-trip conversion\n4. Write decode tests for TOON to Issue conversion\n5. 
Add gotoon API options to wrapper (indent, delimiter, length markers)\n\n## Success Criteria\n- go.mod includes gotoon dependency\n- internal/toon/encode.go exports EncodeTOON(issues) ([]byte, error)\n- internal/toon/decode.go exports DecodeTOON(data []byte) ([]Issue, error)\n- Round-trip tests verify Issue β†’ TOON β†’ Issue produces identical data\n- Tests pass with: go test ./internal/toon -v","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T11:48:30.018161133-07:00","updated_at":"2025-12-19T12:53:56.808833405-07:00","closed_at":"2025-12-19T12:53:56.808833405-07:00"} -{"id":"bd-rgd7","title":"Update CHANGELOG.md with release notes","description":"Add release notes for 0.32.1: MCP output control params (#667), pin field fix","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:16.031879-08:00","updated_at":"2025-12-20T21:54:07.982164-08:00","closed_at":"2025-12-20T21:54:07.982164-08:00","close_reason":"Added 0.32.1 release notes","dependencies":[{"issue_id":"bd-rgd7","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:16.034926-08:00","created_by":"daemon"}]} -{"id":"bd-rgyd","title":"Split internal/storage/sqlite/queries.go (1586 lines)","description":"Split internal/storage/sqlite/queries.go (1704 lines) into logical modules.\n\n## Current State\nqueries.go is 1704 lines with mixed responsibilities:\n- Issue CRUD operations\n- Search/filter operations\n- Delete operations (complex cascade logic)\n- Helper functions (parsing, formatting)\n\n## Proposed Split\n\n### 1. queries.go (keep ~400 lines) - Core CRUD\n```go\n// Core issue operations\nfunc (s *SQLiteStorage) CreateIssue(...)\nfunc (s *SQLiteStorage) GetIssue(...)\nfunc (s *SQLiteStorage) UpdateIssue(...)\nfunc (s *SQLiteStorage) CloseIssue(...)\n```\n\n### 2. 
queries_search.go (~300 lines) - Search/Filter\n```go\n// Search and filtering\nfunc (s *SQLiteStorage) SearchIssues(...)\nfunc (s *SQLiteStorage) GetIssueByExternalRef(...)\nfunc (s *SQLiteStorage) GetCloseReason(...)\nfunc (s *SQLiteStorage) GetCloseReasonsForIssues(...)\n```\n\n### 3. queries_delete.go (~400 lines) - Delete Operations\n```go\n// Delete operations with cascade logic\nfunc (s *SQLiteStorage) CreateTombstone(...)\nfunc (s *SQLiteStorage) DeleteIssue(...)\nfunc (s *SQLiteStorage) DeleteIssues(...)\nfunc (s *SQLiteStorage) resolveDeleteSet(...)\nfunc (s *SQLiteStorage) expandWithDependents(...)\nfunc (s *SQLiteStorage) validateNoDependents(...)\nfunc (s *SQLiteStorage) checkSingleIssueValidation(...)\nfunc (s *SQLiteStorage) trackOrphanedIssues(...)\nfunc (s *SQLiteStorage) collectOrphansForID(...)\nfunc (s *SQLiteStorage) populateDeleteStats(...)\nfunc (s *SQLiteStorage) executeDelete(...)\nfunc (s *SQLiteStorage) findAllDependentsRecursive(...)\n```\n\n### 4. queries_helpers.go (~100 lines) - Utilities\n```go\n// Helper functions (already at top of file)\nfunc parseNullableTimeString(...)\nfunc parseJSONStringArray(...)\nfunc formatJSONStringArray(...)\n```\n\n### 5. queries_rename.go (~100 lines) - ID/Prefix Operations\n```go\n// ID and prefix management\nfunc (s *SQLiteStorage) UpdateIssueID(...)\nfunc (s *SQLiteStorage) RenameDependencyPrefix(...)\nfunc (s *SQLiteStorage) RenameCounterPrefix(...)\nfunc (s *SQLiteStorage) ResetCounter(...)\n```\n\n## Implementation Steps\n\n1. **Create new files** with package declaration:\n ```go\n // queries_delete.go\n package sqlite\n \n import (...)\n ```\n\n2. **Move functions** - cut/paste, maintaining order within each file\n\n3. **Update imports** - each file needs its own imports\n\n4. **Run tests** after each file split:\n ```bash\n go test ./internal/storage/sqlite/...\n ```\n\n5. 
**Run linter** to catch any issues:\n ```bash\n golangci-lint run ./internal/storage/sqlite/...\n ```\n\n## File Organization\n```\ninternal/storage/sqlite/\nβ”œβ”€β”€ queries.go # Core CRUD (~400 lines)\nβ”œβ”€β”€ queries_search.go # Search/filter (~300 lines)\nβ”œβ”€β”€ queries_delete.go # Delete cascade (~400 lines)\nβ”œβ”€β”€ queries_helpers.go # Utilities (~100 lines)\n└── queries_rename.go # ID operations (~100 lines)\n```\n\n## Success Criteria\n- No file \u003e 500 lines\n- All tests pass\n- No functionality changes\n- Clear separation of concerns","status":"closed","priority":2,"issue_type":"task","assignee":"beads/Splitter","created_at":"2025-12-16T18:17:23.85869-08:00","updated_at":"2025-12-23T13:40:51.62551-08:00","closed_at":"2025-12-23T13:40:51.62551-08:00","close_reason":"Split queries.go into 5 focused modules - all tests pass","dependencies":[{"issue_id":"bd-rgyd","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.50733-08:00","created_by":"daemon"}]} -{"id":"bd-rl5t","title":"Integration test: agent waits for CI via gate","description":"End-to-end test of the gate workflow.\n\n## Test Scenario\n1. Agent creates gate: bd gate create --await gh:run:123 --timeout 5m --notify beads/dave\n2. Agent writes handoff and exits\n3. Deacon patrol checks gate condition\n4. (Mock) GitHub run completes\n5. Deacon notifies waiter and closes gate\n6. 
New agent session reads mail and resumes\n\n## Test Requirements\n- Mock GitHub API responses\n- Test timeout path\n- Test multiple waiters\n- Verify mail notifications sent","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:41.725752-08:00","updated_at":"2025-12-23T12:24:08.346347-08:00","closed_at":"2025-12-23T12:24:08.346347-08:00","close_reason":"Moved to gastown: gt-gswn","dependencies":[{"issue_id":"bd-rl5t","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:53.157037-08:00","created_by":"daemon"},{"issue_id":"bd-rl5t","depends_on_id":"bd-2l03","type":"blocks","created_at":"2025-12-23T11:44:56.674866-08:00","created_by":"daemon"},{"issue_id":"bd-rl5t","depends_on_id":"bd-ykqu","type":"blocks","created_at":"2025-12-23T11:44:56.753264-08:00","created_by":"daemon"}]} -{"id":"bd-rnnr","title":"BondRef data model for compound lineage","description":"Add data model support for tracking compound molecule lineage.\n\nNEW FIELDS on Issue:\n bonded_from: []BondRef // For compounds: constituent protos\n\nNEW TYPE:\n type BondRef struct {\n ProtoID string // Source proto ID\n BondType string // sequential, parallel, conditional\n BondPoint string // Attachment site (issue ID or empty for root)\n }\n\nJSONL SERIALIZATION:\n {\n \"id\": \"proto-feature-tested\",\n \"title\": \"Feature with tests\",\n \"bonded_from\": [\n {\"proto_id\": \"proto-feature\", \"bond_type\": \"root\"},\n {\"proto_id\": \"proto-testing\", \"bond_type\": \"sequential\"}\n ],\n ...\n }\n\nQUERIES:\n- GetCompoundConstituents(id) β†’ []BondRef\n- IsCompound(id) β†’ bool\n- GetCompoundsUsing(protoID) β†’ []Issue // Reverse lookup","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T00:59:38.582509-08:00","updated_at":"2025-12-21T01:19:43.922416-08:00","closed_at":"2025-12-21T01:19:43.922416-08:00","close_reason":"Added BondRef type to types.go with ProtoID, BondType, BondPoint fields; added BondedFrom field to Issue; 
added IsCompound() and GetConstituents() helpers; added BondType constants","dependencies":[{"issue_id":"bd-rnnr","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.234246-08:00","created_by":"daemon"}]} -{"id":"bd-rp4o","title":"Deleted issues resurrect during bd sync (tombstones not propagating)","description":"## Problem\n\nWhen issues are deleted with bd delete --force, they get deleted from the local DB but resurrect during the next bd sync.\n\n## Reproduction\n\n1. Observe orphan warnings (bd-cb64c226.*, bd-cbed9619.*)\n2. Delete them: bd delete bd-cb64c226.1 ... --force\n3. Run bd sync\n4. Orphan warnings reappear - issues were resurrected!\n\n## Root Cause Hypothesis\n\nThe sync branch workflow (beads-sync) has the old state before deletions. When bd sync pulls from beads-sync and copies JSONL to main, the deleted issues are re-imported.\n\nTombstones may not be properly:\n1. Written to beads-sync during export\n2. Propagated during pull/merge\n3. Honored during import\n\n## Related\n\n- bd-7b7h: chicken-and-egg sync.branch bug (same workflow)\n- bd-ncwo: ID-based fallback matching to prevent ghost resurrection\n\n## Files to Investigate\n\n- cmd/bd/sync.go (export/import flow)\n- internal/syncbranch/worktree.go (PullFromSyncBranch, copyJSONLToMainRepo)\n- internal/importer/ (tombstone handling)","status":"tombstone","priority":1,"issue_type":"bug","created_at":"2025-12-16T23:09:43.072696-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-rupw","title":"Run bump-version.sh 0.30.7","description":"Run ./scripts/bump-version.sh 0.30.7 to update version in all 
files","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:48.649647-08:00","updated_at":"2025-12-19T22:57:31.512956-08:00","closed_at":"2025-12-19T22:57:31.512956-08:00","dependencies":[{"issue_id":"bd-rupw","depends_on_id":"bd-8pyn","type":"parent-child","created_at":"2025-12-19T22:56:48.653475-08:00","created_by":"stevey"}]} -{"id":"bd-rze6","title":"Digest: Release v0.34.0 @ 2025-12-22 12:16","description":"Released v0.34.0: wisp commands, chemistry UX, cross-project deps","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T12:16:53.033119-08:00","updated_at":"2025-12-22T12:16:53.033119-08:00","closed_at":"2025-12-22T12:16:53.033025-08:00","close_reason":"Squashed from wisp bd-25c (20 issues)"} -{"id":"bd-s0qf","title":"GH#405: Fix prefix parsing with hyphens - multi-hyphen prefixes parsed incorrectly","description":"Fixed: ExtractIssuePrefix was falling back to first-hyphen for word-like suffixes, breaking multi-hyphen prefixes like 'hacker-news' and 'me-py-toolkit'.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:13:56.951359-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-s1pz","title":"Merge: bd-u2sc.4","description":"branch: polecat/Logger\ntarget: main\nsource_issue: bd-u2sc.4\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T13:45:52.412757-08:00","updated_at":"2025-12-23T19:12:08.356689-08:00","closed_at":"2025-12-23T19:12:08.356689-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} -{"id":"bd-s2t","title":"wish: a 'continue' or similar cmd/flag which means alter last issue","description":"so many time I create an issue and then have another thought: 'oh, before I did X and it crashed there was ZZZ happening' or 'actually 
that is P4 not P2'. It would be nice if when `bd {cmd}` is used without a {title} or {id} it just adds or updates the most recently touched issue.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-08T06:46:37.529160416-07:00","updated_at":"2025-12-08T06:46:37.529160416-07:00"} -{"id":"bd-sal9","title":"bd mol current: soft cursor showing current/next step","description":"Add bd mol current command for molecule navigation orientation.\n\n## Usage\n\nbd mol current [mol-id]\n\nIf mol-id given, show status for that molecule.\nIf not given, infer from in_progress issues assigned to current agent.\n\n## Output\n\nYou're working on molecule gt-abc (Feature X)\n\n [done] gt-abc.1: Design\n [done] gt-abc.2: Scaffold \n [done] gt-abc.3: Implement\n [current] gt-abc.4: Write tests [in_progress] \u003c- YOU ARE HERE\n [pending] gt-abc.5: Documentation\n [pending] gt-abc.6: Exit decision\n\nProgress: 3/6 steps complete\n\n## Key behaviors\n- Shows full molecule structure with status indicators\n- Highlights current in_progress step\n- If no in_progress, highlights first ready step\n- Works without explicit cursor tracking (inferred from state)\n\n## Implementation notes\n- Query children of mol-id\n- Sort by dependency order\n- Find first in_progress or first ready\n- Format with status indicators\n\n## Gas Town integration\n- gt-lz13: Update templates with nav workflow\n- gt-um6q: Update docs with nav workflow","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-22T17:03:30.245964-08:00","updated_at":"2025-12-22T17:36:31.936007-08:00","closed_at":"2025-12-22T17:36:31.936007-08:00","close_reason":"Implemented mol current and close --continue"} -{"id":"bd-sh4c","title":"Improve test coverage for cmd/bd/setup (28.4% β†’ 50%)","description":"The setup package has only 28.4% test coverage. 
Setup commands are critical for first-time user experience.\n\nCurrent coverage: 28.4%\nTarget coverage: 50%","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/foxtrot","created_at":"2025-12-13T20:43:04.409346-08:00","updated_at":"2025-12-23T22:29:35.537883-08:00"} -{"id":"bd-si4g","title":"Verify release artifacts","description":"Check GitHub releases page - binaries for darwin/linux/windows should be available","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:04.183029-08:00","updated_at":"2025-12-20T00:49:51.92894-08:00","closed_at":"2025-12-20T00:25:52.720816-08:00","dependencies":[{"issue_id":"bd-si4g","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:15.173619-08:00","created_by":"daemon"},{"issue_id":"bd-si4g","depends_on_id":"bd-otli","type":"blocks","created_at":"2025-12-19T22:56:23.428507-08:00","created_by":"daemon"}]} -{"id":"bd-siz1","title":"GH#532: bd sync circular error (suggests running bd sync)","description":"bd sync error message recommends running bd sync to fix the bd sync error. Fix error handling to provide useful guidance. See GitHub issue #532.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:04:00.543573-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-sj5y","title":"Daemon should be singleton and aggressively kill stale instances","description":"Found 2 bd daemons running (PIDs 76868, 77515) during shutdown. The daemon should:\n\n1. Be a singleton - only one instance per rig allowed\n2. On startup, check for existing daemon and kill it before starting\n3. 
Use a PID file or lock file to enforce this\n\nCurrently stale daemons can accumulate, causing confusion and resource waste.","notes":"**Investigation 2025-12-21:**\n\nThe singleton mechanism is already implemented and working correctly:\n\n1. **daemon.lock** uses flock (exclusive non-blocking) to prevent duplicate daemons\n2. **bd.sock.startlock** coordinates concurrent auto-starts via O_CREATE|O_EXCL\n3. **Registry** tracks all daemons globally in ~/.beads/registry.json\n\nTesting shows:\n- Trying to start a second daemon gives: 'Error: daemon already running (PID X)'\n- Multiple daemons for *different* rigs is expected/correct behavior\n\nThe original report ('Found 2 bd daemons running PIDs 76868, 77515') was likely:\n1. Two daemons for different rigs (expected), OR\n2. An edge case that's since been fixed\n\nConsider closing as RESOLVED or clarifying the original scenario.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T01:29:14.778949-08:00","updated_at":"2025-12-21T11:27:34.302585-08:00","closed_at":"2025-12-21T11:27:34.302585-08:00","close_reason":"Singleton mechanism already implemented and verified working. Uses flock-based daemon.lock + O_CREATE|O_EXCL startlock. 
Testing confirms 'daemon already running' error on duplicate start attempts."} +{"id":"bd-5qim","title":"Optimize GetReadyWork performance - 752ms on 10K database (target: \u003c50ms)","notes":"# Performance Analysis (10K Issue Database)\n\nAnalyzed using CPU profiles from benchmark suite on Apple M2 Pro.\n\n## Operation Performance\n\n| Operation | Time | Allocations | Memory |\n|----------------------------------|---------|-------------|--------|\n| bd ready (GetReadyWork) | ~752ms | 167,466 | 16MB |\n| bd list (SearchIssues no filter) | ~11.6ms | 89,214 | 5.8MB |\n| bd list (SearchIssues filtered) | ~9.2ms | 62,365 | 3.5MB |\n| bd create (CreateIssue) | ~2.6ms | 146 | 8.6KB |\n| bd update (UpdateIssue) | ~0.32ms | 364 | 15KB |\n| bd close (UpdateIssue) | ~0.32ms | 364 | 15KB |\n\n**Target: \u003c50ms for all operations on 10K database**\n\n**Current issue: GetReadyWork is 15x over target (752ms vs 50ms)**\n\n## Root Cause\n\nGetReadyWork (internal/storage/sqlite/ready.go:90-128) uses recursive CTE to propagate blocking:\n- 65x slower than SearchIssues\n- Recalculates entire blocked issue tree on every call\n- Algorithm:\n 1. Find directly blocked issues via 'blocks' dependencies\n 2. Recursively propagate blockage to descendants (max depth: 50)\n 3. Exclude all blocked issues from results\n\n## CPU Profile Analysis\n\n- Database syscalls (pthread_cond_signal, syscall6): ~75%\n- SQLite engine overhead: inherent to recursive CTE\n- Application code (query construction): \u003c1%\n\n**Bottleneck is the recursive CTE query execution, not application code.**\n\n## Optimization Recommendations\n\n### High Impact (Likely to achieve \u003c50ms target)\n\n1. **Cache blocked issue calculation**\n - Add `blocked_issues` table updated on dependency changes\n - Trade write complexity for read speed (ready called \u003e\u003e dependency changes)\n - Eliminates recursive CTE on every read\n\n2. 
**Add/verify database indexes**\n ```sql\n CREATE INDEX IF NOT EXISTS idx_dependencies_blocked \n ON dependencies(issue_id, type, depends_on_id);\n CREATE INDEX IF NOT EXISTS idx_issues_status \n ON issues(status);\n ```\n\n### Medium Impact\n\n3. **Reduce allocations** (167K allocations for GetReadyWork)\n - Profile `scanIssues()` for object pooling opportunities\n - Reuse slice capacity for repeated calls\n\n### Low Impact (Not recommended)\n- Query optimization for CRUD operations (already \u003c3ms)\n- Connection pooling tuning (not showing in profiles)\n\n## Verification\n\nRun benchmarks to validate optimization:\n```bash\nmake bench-quick\ngo tool pprof -http=:8080 internal/storage/sqlite/bench-cpu-*.prof\n```\n\nProfile files automatically generated in `internal/storage/sqlite/`.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-11-14T09:02:46.507526-08:00","updated_at":"2025-12-17T23:13:40.534258-08:00","closed_at":"2025-12-17T16:21:37.918868-08:00"} +{"id":"bd-3jcw","title":"activity.go: Missing test coverage","description":"The new activity.go command (from bd-xo1o.3) has no test coverage. 
At minimum, tests should cover:\n- parseDurationString() for various formats (5m, 1h, 2d, invalid)\n- filterEvents() for --mol and --type filtering\n- formatEvent() and getEventDisplay() for all mutation types\n\nDiscovered during code review of bd-xo1o implementation.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T04:06:15.563579-08:00","updated_at":"2025-12-23T04:14:56.150151-08:00","closed_at":"2025-12-23T04:14:56.150151-08:00"} +{"id":"bd-0a43","title":"Split monolithic sqlite.go into focused files","description":"internal/storage/sqlite/sqlite.go is 1050 lines containing initialization, 20+ CRUD methods, query building, and schema management.\n\nSplit into:\n- store.go: Store struct \u0026 initialization (150 lines)\n- bead_queries.go: Bead CRUD (300 lines)\n- work_queries.go: Work queries (200 lines) \n- stats_queries.go: Statistics (150 lines)\n- schema.go: Schema \u0026 migrations (150 lines)\n- helpers.go: Common utilities (100 lines)\n\nImpact: Impossible to understand at a glance; hard to find specific functionality; high cognitive load\n\nEffort: 6-8 hours","status":"closed","priority":0,"issue_type":"task","created_at":"2025-11-16T14:51:16.520465-08:00","updated_at":"2025-12-17T23:13:40.533947-08:00","closed_at":"2025-12-17T16:51:30.236012-08:00"} +{"id":"bd-qkw9","title":"Run bump-version.sh {{version}}","description":"Run ./scripts/bump-version.sh {{version}} to update version in all files","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:55:58.841424-08:00","updated_at":"2025-12-20T17:59:26.262877-08:00","closed_at":"2025-12-20T01:18:48.99813-08:00","dependencies":[{"issue_id":"bd-qkw9","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:14.786027-08:00","created_by":"daemon"}]} +{"id":"bd-4sfl","title":"Merge: bd-14ie","description":"branch: polecat/toast\ntarget: main\nsource_issue: bd-14ie\nrig: 
beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:23:37.360782-08:00","updated_at":"2025-12-20T23:17:26.997276-08:00","closed_at":"2025-12-20T23:17:26.997276-08:00"} +{"id":"bd-2vh3.6","title":"Tier 5 (Future): JSONL archive rotation","description":"Periodic rotation of issues.jsonl for long-running repos.\n\n## Design\n\n.beads/\nβ”œβ”€β”€ issues.jsonl # Current (hot)\nβ”œβ”€β”€ archive/\nβ”‚ β”œβ”€β”€ issues-2025-12.jsonl.gz # Archived (cold)\nβ”‚ └── ...\n└── index.jsonl # Merged index for queries\n\n## Commands\n\nbd archive rotate [flags]\n --older-than N Archive issues closed \u003e N days\n --compress Gzip archives\n --dry-run Preview\n\nbd archive list # Show archived periods\nbd archive restore \u003cperiod\u003e # Restore from archive\n\n## Config\n\nbd config set archive.enabled true\nbd config set archive.rotate_days 90\nbd config set archive.compress true\nbd config set archive.path '.beads/archive'\n\n## Considerations\n\n- Archives can be gitignored (local only) or committed (shared)\n- Query layer must check index, hydrate from archive\n- Cold storage tiering (S3/GCS) for enterprise\n- Merkle proofs preserved for audit\n\n## Priority\n\nThis is post-1.0 work. 
Current focus is on squash (removes ephemeral).\nArchive helps with long-term history but is less critical.","status":"deferred","priority":4,"issue_type":"feature","created_at":"2025-12-21T12:58:38.210008-08:00","updated_at":"2025-12-23T12:27:02.371921-08:00","dependencies":[{"issue_id":"bd-2vh3.6","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:58:38.210543-08:00","created_by":"stevey"}]} +{"id":"bd-ia3g","title":"BondRef.ProtoID field name is misleading for mol+mol bonds","description":"In bondMolMol, the BondRef.ProtoID field is used to store molecule IDs:\n\n```go\nBondedFrom: append(molA.BondedFrom, types.BondRef{\n ProtoID: molB.ID, // This is a molecule, not a proto\n ...\n})\n```\n\nThis is semantically confusing since ProtoID suggests it should only hold proto references.\n\n**Options:**\n1. Rename ProtoID to SourceID (breaking change, needs migration)\n2. Add documentation clarifying ProtoID can hold molecule IDs in bond context\n3. Leave as-is, accept the naming is imprecise\n\nLow priority since it's just naming, not functionality.","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-21T10:23:00.755067-08:00","updated_at":"2025-12-21T10:23:00.755067-08:00"} {"id":"bd-su45","title":"Protect pinned issues from bd cleanup/compact","description":"Update bd cleanup and bd compact to never delete pinned issues, even if they are closed. 
Pinned issues should persist indefinitely as reference material.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:46.204783-08:00","updated_at":"2025-12-19T17:43:35.712617-08:00","closed_at":"2025-12-19T00:43:04.06406-08:00","dependencies":[{"issue_id":"bd-su45","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.64582-08:00","created_by":"daemon"},{"issue_id":"bd-su45","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.857586-08:00","created_by":"daemon"}]} -{"id":"bd-sumr","title":"Merge: bd-t4sb","description":"branch: polecat/capable\ntarget: main\nsource_issue: bd-t4sb\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:22:21.343724-08:00","updated_at":"2025-12-20T23:17:26.997992-08:00","closed_at":"2025-12-20T23:17:26.997992-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-svb5","title":"GH#505: Add bd reset/wipe command","description":"Add command to cleanly reset/wipe beads database. User reports painful manual process to start fresh. See GitHub issue #505.","status":"tombstone","priority":2,"issue_type":"feature","created_at":"2025-12-16T01:03:42.160966-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"feature"} -{"id":"bd-t3cf","title":"Update CHANGELOG.md for 0.33.2","description":"In CHANGELOG.md:\n\n1. Change `## [Unreleased]` section header to `## [0.33.2] - 2025-12-21`\n2. Add new empty `## [Unreleased]` section above it\n3. 
Review and clean up the changes list\n\nFormat:\n```markdown\n## [Unreleased]\n\n## [0.33.2] - 2025-12-21\n\n### Added\n- ...\n\n### Changed\n- ...\n\n### Fixed\n- ...\n```","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.7614-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"CHANGELOG already updated for 0.33.2","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-t3en","title":"Merge: bd-d28c","description":"branch: polecat/capable\ntarget: main\nsource_issue: bd-d28c\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:43:16.997802-08:00","updated_at":"2025-12-23T21:21:57.694201-08:00","closed_at":"2025-12-23T21:21:57.694201-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-t4sb","title":"Work on beads-d8h: Fix prefix mismatch false positive wit...","description":"Work on beads-d8h: Fix prefix mismatch false positive with multi-hyphen prefixes like 'asianops-audit-' (GH#422). 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:56:19.545069-08:00","updated_at":"2025-12-19T23:28:32.429127-08:00","closed_at":"2025-12-19T23:21:45.471711-08:00","close_reason":"Fixed multi-hyphen prefix false positive in ValidateIDFormat (GH#422)"} -{"id":"bd-t4u1","title":"False positive detection by Kaspersky Antivirus (Trojan)","description":"Kaspersky Antivirus falsely detects beads (bd.exe v0.23.1) as a Trojan (PDM:Trojan.Win32.Generic) and removes it.\nEvent: Malicious object detected\nComponent: System Watcher\nObject name: bd.exe\n","status":"open","priority":1,"issue_type":"task","created_at":"2025-11-20T18:56:12.498187-05:00","updated_at":"2025-11-20T18:56:12.498187-05:00"} -{"id":"bd-tbz3","title":"bd init UX Improvements","description":"bd init leaves users with incomplete setup, requiring manual bd doctor --fix. Issues found: (1) git hooks not installed if user declines prompt, (2) no auto-migration when CLI is upgraded, (3) stale merge driver configs from old versions. 
Fix by making bd init more robust with better defaults and auto-migration.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-11-21T23:16:00.333543-08:00","updated_at":"2025-12-23T04:20:51.88847-08:00","closed_at":"2025-12-23T04:20:51.88847-08:00","close_reason":"Already implemented in commits ec4117d0 and 3a36d0b9 - hooks/merge driver install by default, doctor runs at end of init"} -{"id":"bd-tggf","title":"Code Health Review Dec 2025: Technical Debt Cleanup","description":"Epic grouping technical debt identified in the Dec 16, 2025 code health review.\n\n## Overall Health Grade: B (Solid foundation, needs cleanup)\n\n### P1 (High Priority):\n- bd-74w1: Consolidate duplicate path-finding utilities\n- bd-b6xo: Remove/fix ClearDirtyIssues() race condition\n- bd-b3og: Fix TestImportBugIntegration deadlock\n\n### P2 (Medium Priority):\n- bd-05a8: Split large files (doctor.go, sync.go)\n- bd-qioh: Standardize error handling patterns\n- bd-rgyd: Split queries.go (1586 lines)\n- bd-9g1z: Fix/remove TestFindJSONLPathDefault\n\n### P3 (Low Priority):\n- bd-ork0: Add comments to 30+ ignored errors\n- bd-4nqq: Remove dead test code in info_test.go\n- bd-dhza: Reduce global state in main.go\n\n## Key Areas:\n1. Code duplication in path utilities\n2. Large monolithic files (5 files \u003e1000 lines)\n3. Global state (25+ variables, 3 deprecated)\n4. Silent error suppression (30+ instances)\n5. Test gaps and dead test code\n6. 
Atomicity risks in batch operations","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-16T18:18:58.115507-08:00","updated_at":"2025-12-16T18:21:50.561709-08:00","dependencies":[{"issue_id":"bd-tggf","depends_on_id":"bd-74w1","type":"blocks","created_at":"2025-12-22T21:00:21.429274-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-05a8","type":"blocks","created_at":"2025-12-22T21:00:21.501589-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-9g1z","type":"blocks","created_at":"2025-12-22T21:00:21.571116-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-qioh","type":"blocks","created_at":"2025-12-22T21:00:21.640589-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-rgyd","type":"blocks","created_at":"2025-12-22T21:00:21.710912-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-4nqq","type":"blocks","created_at":"2025-12-22T21:00:21.781914-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-dhza","type":"blocks","created_at":"2025-12-22T21:00:21.852-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-ork0","type":"blocks","created_at":"2025-12-22T21:00:21.930168-08:00","created_by":"daemon"}]} -{"id":"bd-thgk","title":"Improve test coverage for internal/compact (18.2% → 70%)","description":"Improve test coverage for internal/compact package from 17.3% to 70%.\n\n## Current State\n- Coverage: 17.3%\n- Files: compactor.go, git.go, haiku.go\n- Tests: compactor_test.go (minimal tests)\n\n## Functions Needing Tests\n\n### compactor.go (core compaction)\n- [ ] New - needs config validation tests\n- [ ] CompactTier1 - needs single issue compaction tests\n- [ ] CompactTier1Batch - needs batch processing tests\n- [ ] compactSingleWithResult - internal, test via public API\n\n### git.go\n- [ ] GetCurrentCommitHash - needs git repo fixture tests\n\n### haiku.go (AI summarization) - MOCK REQUIRED\n- [ ] 
NewHaikuClient - needs API key validation tests\n- [ ] SummarizeTier1 - needs mock API response tests\n- [ ] callWithRetry - needs retry logic tests\n- [ ] isRetryable - needs error classification tests\n- [ ] renderTier1Prompt - needs template rendering tests\n\n## Implementation Guide\n\n1. **Mock the Anthropic API:**\n ```go\n // Create mock HTTP server\n server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n json.NewEncoder(w).Encode(map[string]interface{}{\n \"content\": []map[string]string{{\"text\": \"Summarized content\"}},\n })\n }))\n defer server.Close()\n \n // Point client at mock\n client.baseURL = server.URL\n ```\n\n2. **Test scenarios:**\n - Successful compaction with AI summary\n - API failure with retry\n - Rate limit handling\n - Empty issue handling\n - Large issue truncation\n\n3. **Use test database:**\n ```go\n store, cleanup := testutil.NewTestStore(t)\n defer cleanup()\n ```\n\n## Success Criteria\n- Coverage ≥ 70%\n- AI calls properly mocked (no real API calls in tests)\n- Retry logic verified\n- Error paths covered\n\n## Run Tests\n```bash\ngo test -v -cover ./internal/compact\ngo test -race ./internal/compact\n```","status":"closed","priority":1,"issue_type":"task","assignee":"beads/Compactor","created_at":"2025-12-13T20:42:58.455767-08:00","updated_at":"2025-12-23T13:41:10.80832-08:00","closed_at":"2025-12-23T13:41:10.80832-08:00","close_reason":"Coverage improved from 17.3% to 81.8%, exceeding 70% target","dependencies":[{"issue_id":"bd-thgk","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.287377-08:00","created_by":"daemon"}]} -{"id":"bd-tj00","title":"Update local installation","description":"go build -o ~/.local/bin/bd ./cmd/bd \u0026\u0026 codesign -s - 
~/.local/bin/bd","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:20.616907-08:00","updated_at":"2025-12-20T21:55:42.756171-08:00","closed_at":"2025-12-20T21:55:42.756171-08:00","close_reason":"Installed bd 0.32.1","dependencies":[{"issue_id":"bd-tj00","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:20.619834-08:00","created_by":"daemon"},{"issue_id":"bd-tj00","depends_on_id":"bd-9l0h","type":"blocks","created_at":"2025-12-20T21:53:29.817989-08:00","created_by":"daemon"}]} -{"id":"bd-tm2p","title":"Polecats get stuck on interactive shell prompts (cp/mv/rm -i)","description":"During swarm operations, polecats frequently get stuck waiting for interactive prompts from shell commands like:\n- cp prompting 'overwrite file? (y/n)'\n- mv prompting 'overwrite file? (y/n)' \n- rm prompting 'remove file?'\n\nThis happens because macOS aliases or shell configs may have -i flags set by default.\n\nRoot cause: Claude Code runs commands that trigger interactive confirmation prompts, but cannot respond to them, causing the agent to hang indefinitely.\n\nObserved in: Multiple polecats during GH issues swarm (Dec 2024)\n- Derrick, Roustabout, Prospector, Warboy all got stuck on y/n prompts\n\nSuggested fixes:\n1. AGENTS.md should instruct agents to always use -f flag with cp/mv/rm\n2. Polecat startup could set shell aliases to use non-interactive versions\n3. bd prime hook could include guidance about non-interactive commands\n4. 
Consider detecting stuck prompts and auto-recovering","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-14T16:51:24.572271-08:00","updated_at":"2025-12-17T23:13:40.536312-08:00","closed_at":"2025-12-17T19:13:04.074424-08:00"} -{"id":"bd-to1u","title":"Run bump-version.sh test-squash","description":"Run ./scripts/bump-version.sh test-squash to update version in all files","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.06696-08:00","updated_at":"2025-12-21T13:53:41.841677-08:00","deleted_at":"2025-12-21T13:53:41.841677-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task","wisp":true} -{"id":"bd-toy3","title":"Test hook","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T18:33:39.717036-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-tvu3","title":"Improve test coverage for internal/beads (48.1% → 70%)","description":"Improve test coverage for internal/beads package from 48.4% to 70%.\n\n## Current State\n- Coverage: 48.4%\n- Files: beads.go, fingerprint.go\n- Tests: beads_test.go (moderate coverage)\n\n## Functions Needing Tests\n\n### beads.go (database discovery)\n- [ ] followRedirect - needs redirect file tests\n- [ ] findDatabaseInBeadsDir - needs various dir structures\n- [x] NewSQLiteStorage - likely covered\n- [ ] FindDatabasePath - needs BEADS_DB env var tests\n- [ ] hasBeadsProjectFiles - needs file existence tests\n- [ ] FindBeadsDir - needs directory traversal tests\n- [ ] FindJSONLPath - needs path derivation tests\n- [ ] findGitRoot - needs git repo tests\n- [ ] findDatabaseInTree - needs nested directory tests\n- [ ] FindAllDatabases - needs multi-database tests\n- [ ] FindWispDir - needs wisp directory tests\n- [ ] FindWispDatabasePath - needs wisp path tests\n- [ ] NewWispStorage - 
needs wisp storage tests\n- [ ] EnsureWispGitignore - needs gitignore creation tests\n- [ ] IsWispDatabase - needs path classification tests\n\n### fingerprint.go (repo identification)\n- [ ] ComputeRepoID - needs various remote URL tests\n- [ ] canonicalizeGitURL - needs URL normalization tests\n- [ ] GetCloneID - needs clone identification tests\n\n## Implementation Guide\n\n1. **Use temp directories:**\n ```go\n func TestFindBeadsDir(t *testing.T) {\n tmpDir := t.TempDir()\n beadsDir := filepath.Join(tmpDir, \".beads\")\n os.MkdirAll(beadsDir, 0755)\n \n // Create test files\n os.WriteFile(filepath.Join(beadsDir, \"beads.db\"), []byte{}, 0644)\n \n // Change to tmpDir and test\n oldWd, _ := os.Getwd()\n os.Chdir(tmpDir)\n defer os.Chdir(oldWd)\n \n result := FindBeadsDir()\n assert.Equal(t, beadsDir, result)\n }\n ```\n\n2. **Test scenarios:**\n - BEADS_DB environment variable set\n - .beads/ in current directory\n - .beads/ in parent directory\n - Redirect file pointing elsewhere\n - No beads directory found\n - Wisp directory alongside main beads\n\n3. 
**Git remote URL tests:**\n ```go\n tests := []struct{\n input string\n expected string\n }{\n {\"git@github.com:user/repo.git\", \"github.com/user/repo\"},\n {\"https://github.com/user/repo\", \"github.com/user/repo\"},\n {\"ssh://git@github.com/user/repo.git\", \"github.com/user/repo\"},\n }\n ```\n\n## Success Criteria\n- Coverage ≥ 70%\n- All FindXxx functions have tests\n- Environment variable handling tested\n- Edge cases (missing dirs, redirects) covered\n\n## Run Tests\n```bash\ngo test -v -cover ./internal/beads\ngo test -race ./internal/beads\n```","status":"closed","priority":1,"issue_type":"task","assignee":"beads/Beader","created_at":"2025-12-13T20:42:59.739142-08:00","updated_at":"2025-12-23T13:36:17.885237-08:00","closed_at":"2025-12-23T13:36:17.885237-08:00","close_reason":"Coverage improved from 48.4% to 80.2%, exceeding 70% target","dependencies":[{"issue_id":"bd-tvu3","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.362967-08:00","created_by":"daemon"}]} -{"id":"bd-u0g9","title":"GH#405: Prefix parsing with hyphens treats first segment as prefix","description":"Prefix me-py-toolkit gets parsed as just me- when detecting mismatches. Fix prefix parsing to handle multi-hyphen prefixes. 
See GitHub issue #405.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:18.354066-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-u0sb","title":"Merge: bd-uqfn","description":"branch: polecat/cheedo\ntarget: main\nsource_issue: bd-uqfn\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-20T01:11:52.033964-08:00","updated_at":"2025-12-20T23:17:26.994875-08:00","closed_at":"2025-12-20T23:17:26.994875-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-u2sc","title":"GH#692: Code quality and refactoring improvements","description":"Epic for implementing refactoring suggestions from GitHub issue #692 (rsnodgrass). These are code quality improvements that don't change functionality but improve maintainability, type safety, and performance.\n\nOriginal issue: https://github.com/steveyegge/beads/issues/692\n\nHigh priority items:\n1. Replace map[string]interface{} with typed structs for JSON output\n2. Adopt slices.SortFunc instead of sort.Slice (Go 1.21+)\n3. Split large files (sync.go, init.go, show.go)\n4. Introduce slog for structured logging in daemon\n\nLower priority:\n5. Further CLI helper extraction\n6. Preallocate slices in hot paths\n7. 
Polish items (error wrapping, table-driven parsing)","status":"closed","priority":3,"issue_type":"epic","created_at":"2025-12-22T14:26:31.630004-08:00","updated_at":"2025-12-23T22:07:32.477628-08:00","closed_at":"2025-12-23T22:07:32.477628-08:00","close_reason":"All 4 child tasks completed by swarm","external_ref":"gh-692"} -{"id":"bd-u2sc.1","title":"Replace map[string]interface{} with typed JSON response structs","description":"Many CLI commands use map[string]interface{} for JSON output which loses type safety and compile-time error detection.\n\nFiles with map[string]interface{}:\n- cmd/bd/compact.go (10+ instances)\n- cmd/bd/cleanup.go\n- cmd/bd/daemons.go\n- cmd/bd/daemon_lifecycle.go\n\nExample fix:\n```go\n// Before\nresult := map[string]interface{}{\n \"status\": \"ok\",\n \"count\": 42,\n}\n\n// After\ntype CompactResponse struct {\n Status string `json:\"status\"`\n Count int `json:\"count\"`\n}\nresult := CompactResponse{Status: \"ok\", Count: 42}\n```\n\nBenefits:\n- Compile-time type checking\n- IDE autocompletion\n- Easier refactoring\n- Self-documenting API","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T14:26:44.088548-08:00","updated_at":"2025-12-22T15:48:22.88824-08:00","closed_at":"2025-12-22T15:48:22.88824-08:00","close_reason":"Replaced map[string]interface{} with typed JSON response structs in compact.go, cleanup.go, daemons.go, and daemon_lifecycle.go. 
Created 15 typed response structs for improved type safety and compile-time error detection.","dependencies":[{"issue_id":"bd-u2sc.1","depends_on_id":"bd-u2sc","type":"parent-child","created_at":"2025-12-22T14:26:44.088931-08:00","created_by":"daemon"}]} -{"id":"bd-u2sc.2","title":"Migrate sort.Slice to slices.SortFunc","description":"Go 1.21+ provides slices.SortFunc which is cleaner and slightly faster than sort.Slice.\n\nFound 15+ instances of sort.Slice in:\n- cmd/bd/autoflush.go\n- cmd/bd/count.go\n- cmd/bd/daemon_sync.go\n- cmd/bd/doctor.go\n- cmd/bd/export.go\n- cmd/bd/import.go\n- cmd/bd/integrity.go\n- cmd/bd/jira.go\n- cmd/bd/list.go\n- cmd/bd/migrate_hash_ids.go\n- cmd/bd/rename_prefix.go\n- cmd/bd/show.go\n\nExample migration:\n```go\n// Before\nsort.Slice(issues, func(i, j int) bool {\n return issues[i].Priority \u003c issues[j].Priority\n})\n\n// After\nslices.SortFunc(issues, func(a, b *types.Issue) int {\n return cmp.Compare(a.Priority, b.Priority)\n})\n```\n\nBenefits:\n- Cleaner 3-way comparison\n- Slightly better performance\n- Modern idiomatic Go","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T14:26:55.573524-08:00","updated_at":"2025-12-22T15:10:12.639807-08:00","closed_at":"2025-12-22T15:10:12.639807-08:00","close_reason":"Migrated all 19 instances of sort.Slice to slices.SortFunc across 17 files in cmd/bd/. 
Used cmp.Compare for orderable types, time.Compare for time comparisons, and cmp.Or for multi-field sorts.","dependencies":[{"issue_id":"bd-u2sc.2","depends_on_id":"bd-u2sc","type":"parent-child","created_at":"2025-12-22T14:26:55.573978-08:00","created_by":"daemon"}]} -{"id":"bd-u2sc.3","title":"Split large cmd/bd files into logical modules","description":"Split large cmd/bd files into logical modules for maintainability.\n\n## Current State\n| File | Lines | Status |\n|------|-------|--------|\n| sync.go | 2203 | Too large |\n| init.go | 1742 | Too large |\n| show.go | 1419 | Too large |\n| compact.go | 1190 | Borderline |\n\n## Proposed Splits\n\n### 1. sync.go (2203 lines) → 4 files\n\n**sync.go** (~500 lines) - Main command and coordination\n```go\nvar syncCmd = \u0026cobra.Command{...}\nfunc init() {...}\nfunc doSync(...) {...}\n```\n\n**sync_export.go** (~400 lines) - Export operations\n```go\nfunc exportToJSONL(...)\nfunc markDirtyAndScheduleFlush(...)\nfunc flushPendingChanges(...)\n```\n\n**sync_import.go** (~500 lines) - Import operations\n```go\nfunc importFromJSONL(...)\nfunc handleImportConflicts(...)\nfunc mergeImportedIssues(...)\n```\n\n**sync_branch.go** (~400 lines) - Git branch operations\n```go\nfunc commitToSyncBranch(...)\nfunc pullFromSyncBranch(...)\nfunc handleSyncBranchDivergence(...)\n```\n\n### 2. init.go (1742 lines) → 3 files\n\n**init.go** (~400 lines) - Main init command\n```go\nvar initCmd = \u0026cobra.Command{...}\nfunc runInit(...)\nfunc determinePrefix(...)\n```\n\n**init_wizard.go** (~500 lines) - Interactive setup\n```go\nfunc runContributorWizard(...)\nfunc runTeamWizard(...)\nfunc promptForConfig(...)\n```\n\n**init_hooks.go** (~400 lines) - Git hooks setup\n```go\nfunc installGitHooks(...)\nfunc configureAutosync(...)\nfunc setupMergeDriver(...)\n```\n\n### 3. 
show.go (1419 lines) → 3 files\n\n**show.go** (~400 lines) - Main show command\n```go\nvar showCmd = \u0026cobra.Command{...}\nfunc showIssue(...)\nfunc showMultipleIssues(...)\n```\n\n**show_format.go** (~400 lines) - Output formatting\n```go\nfunc formatIssueText(...)\nfunc formatIssueMarkdown(...)\nfunc formatDependencyTree(...)\n```\n\n**show_threads.go** (~300 lines) - Thread/conversation display\n```go\nfunc showThread(...)\nfunc formatThreadMessages(...)\n```\n\n### 4. compact.go (1190 lines) → 2 files\n\n**compact.go** (~600 lines) - Main compact command\n```go\nvar compactCmd = \u0026cobra.Command{...}\nfunc runCompact(...)\n```\n\n**compact_tiers.go** (~400 lines) - Tier-specific logic\n```go\nfunc compactTier1(...)\nfunc compactTier2(...)\nfunc squashWisps(...)\n```\n\n## Implementation Steps\n\n1. **Start with sync.go** (largest file)\n2. **Create new files** with same package declaration\n3. **Move functions** maintaining related code together\n4. **Update any file-local variables** that need to be shared\n5. 
**Run tests** after each split:\n ```bash\n go test -short ./cmd/bd/...\n ```\n\n## File Organization After Split\n```\ncmd/bd/\n├── sync.go (~500 lines)\n├── sync_export.go (~400 lines)\n├── sync_import.go (~500 lines)\n├── sync_branch.go (~400 lines)\n├── init.go (~400 lines)\n├── init_wizard.go (~500 lines)\n├── init_hooks.go (~400 lines)\n├── show.go (~400 lines)\n├── show_format.go (~400 lines)\n├── show_threads.go (~300 lines)\n├── compact.go (~600 lines)\n└── compact_tiers.go (~400 lines)\n```\n\n## Success Criteria\n- No file \u003e 600 lines\n- All tests pass\n- Related code stays together\n- Clear file naming indicates purpose","status":"closed","priority":3,"issue_type":"task","assignee":"beads/Modular","created_at":"2025-12-22T14:27:06.146343-08:00","updated_at":"2025-12-23T14:14:39.023606-08:00","closed_at":"2025-12-23T14:14:39.023606-08:00","close_reason":"Split cmd/bd files - polecat completed, MR submitted","dependencies":[{"issue_id":"bd-u2sc.3","depends_on_id":"bd-u2sc","type":"parent-child","created_at":"2025-12-22T14:27:06.146704-08:00","created_by":"daemon"},{"issue_id":"bd-u2sc.3","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.583136-08:00","created_by":"daemon"}]} -{"id":"bd-u2sc.4","title":"Introduce slog for structured daemon logging","description":"Introduce slog for structured daemon logging.\n\n## Current State\nDaemon uses fmt.Fprintf for logging:\n```go\nfmt.Fprintf(os.Stderr, \"Warning: failed to detect user role: %v\\n\", err)\nfmt.Fprintf(logFile, \"[%s] %s\\n\", time.Now().Format(time.RFC3339), msg)\n```\n\nThis produces unstructured, hard-to-parse logs.\n\n## Target State\nUse Go 1.21+ slog for structured logging:\n```go\nslog.Warn(\"failed to detect user role\", \"error\", err)\nslog.Info(\"sync completed\", \"created\", 5, \"updated\", 3, \"duration_ms\", 150)\n```\n\n## Implementation\n\n### 1. 
Create logger setup (internal/daemon/logger.go)\n\n```go\npackage daemon\n\nimport (\n \"io\"\n \"log/slog\"\n \"os\"\n)\n\n// SetupLogger configures the daemon logger.\n// Returns a cleanup function to close the log file.\nfunc SetupLogger(logPath string, jsonFormat bool, level slog.Level) (func(), error) {\n var w io.Writer = os.Stderr\n var cleanup func()\n \n if logPath != \"\" {\n f, err := os.OpenFile(logPath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0600)\n if err != nil {\n return nil, err\n }\n w = io.MultiWriter(os.Stderr, f)\n cleanup = func() { f.Close() }\n }\n \n var handler slog.Handler\n opts := \u0026slog.HandlerOptions{Level: level}\n \n if jsonFormat {\n handler = slog.NewJSONHandler(w, opts)\n } else {\n handler = slog.NewTextHandler(w, opts)\n }\n \n slog.SetDefault(slog.New(handler))\n \n return cleanup, nil\n}\n```\n\n### 2. Add log level flag\n\nIn cmd/bd/daemon.go:\n```go\ndaemonCmd.Flags().String(\"log-level\", \"info\", \"Log level (debug, info, warn, error)\")\ndaemonCmd.Flags().Bool(\"log-json\", false, \"Output logs in JSON format\")\n```\n\n### 3. 
Replace fmt.Fprintf with slog calls\n\n**Pattern 1: Simple messages**\n```go\n// Before\nfmt.Fprintf(os.Stderr, \"Starting daemon on %s\\n\", socketPath)\n\n// After\nslog.Info(\"starting daemon\", \"socket\", socketPath)\n```\n\n**Pattern 2: Errors**\n```go\n// Before\nfmt.Fprintf(os.Stderr, \"Error: failed to connect: %v\\n\", err)\n\n// After\nslog.Error(\"failed to connect\", \"error\", err)\n```\n\n**Pattern 3: Debug info**\n```go\n// Before\nif debug {\n fmt.Fprintf(os.Stderr, \"Received request: %s\\n\", method)\n}\n\n// After\nslog.Debug(\"received request\", \"method\", method)\n```\n\n**Pattern 4: Structured data**\n```go\n// Before\nfmt.Fprintf(logFile, \"Import: %d created, %d updated\\n\", created, updated)\n\n// After\nslog.Info(\"import completed\", \n \"created\", created,\n \"updated\", updated,\n \"unchanged\", unchanged,\n \"duration_ms\", duration.Milliseconds())\n```\n\n### 4. Files to Update\n\n| File | Changes |\n|------|---------|\n| cmd/bd/daemon.go | Add log flags, call SetupLogger |\n| internal/daemon/server.go | Replace fmt with slog |\n| internal/daemon/rpc_handler.go | Replace fmt with slog |\n| internal/daemon/sync.go | Replace fmt with slog |\n| internal/daemon/autoflush.go | Replace fmt with slog |\n| internal/daemon/logger.go | New file |\n\n### 5. Log output examples\n\n**Text format (default):**\n```\ntime=2025-12-23T12:30:00Z level=INFO msg=\"daemon started\" socket=/tmp/bd.sock pid=12345\ntime=2025-12-23T12:30:01Z level=INFO msg=\"import completed\" created=5 updated=3 duration_ms=150\ntime=2025-12-23T12:30:05Z level=WARN msg=\"sync branch diverged\" local_ahead=2 remote_ahead=1\n```\n\n**JSON format (--log-json):**\n```json\n{\"time\":\"2025-12-23T12:30:00Z\",\"level\":\"INFO\",\"msg\":\"daemon started\",\"socket\":\"/tmp/bd.sock\",\"pid\":12345}\n{\"time\":\"2025-12-23T12:30:01Z\",\"level\":\"INFO\",\"msg\":\"import completed\",\"created\":5,\"updated\":3,\"duration_ms\":150}\n```\n\n## Migration Strategy\n\n1. 
**Add logger.go** with SetupLogger\n2. **Update daemon startup** to initialize slog\n3. **Convert one file at a time** (start with server.go)\n4. **Test after each file**\n5. **Remove old logging code** once all converted\n\n## Testing\n\n```bash\n# Start daemon with debug logging\nbd daemon start --log-level debug\n\n# Check logs\nbd daemons logs . | head -20\n\n# Test JSON output\nbd daemon start --log-json --log-level debug\nbd daemons logs . | jq .\n```\n\n## Success Criteria\n- All daemon logging uses slog\n- --log-level controls verbosity\n- --log-json produces machine-parseable output\n- Log entries have consistent structure\n- No fmt.Fprintf to stderr in daemon code","status":"closed","priority":3,"issue_type":"task","assignee":"beads/Logger","created_at":"2025-12-22T14:27:16.47144-08:00","updated_at":"2025-12-23T13:44:23.374935-08:00","closed_at":"2025-12-23T13:44:23.374935-08:00","close_reason":"Implemented slog for structured daemon logging with --log-level and --log-json flags","dependencies":[{"issue_id":"bd-u2sc.4","depends_on_id":"bd-u2sc","type":"parent-child","created_at":"2025-12-22T14:27:16.471878-08:00","created_by":"daemon"},{"issue_id":"bd-u2sc.4","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:08.048156-08:00","created_by":"daemon"}]} -{"id":"bd-u66e","title":"Implement bd gate create/show/list/close/wait commands","description":"Implement the gate CLI commands.\n\n## Commands\n```bash\n# Create gate (returns gate ID)\nbd gate create --await \u003ctype\u003e:\u003cid\u003e --timeout \u003cduration\u003e --notify \u003caddr\u003e\n\n# Check gate status\nbd gate show \u003cid\u003e\n\n# List open gates\nbd gate list\n\n# Close gate (usually done by Deacon)\nbd gate close \u003cid\u003e --reason \"completed\"\n\n# Add waiter to existing gate\nbd gate wait \u003cid\u003e --notify \u003caddr\u003e\n```\n\n## Implementation\n- Add cmd/bd/gate.go with subcommands\n- Gate create creates wisp issue of type gate\n- Gate 
list filters for open gates\n- Gate wait adds to waiters[] array\n- All support --json output","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T11:44:34.022464-08:00","updated_at":"2025-12-23T12:06:18.550673-08:00","closed_at":"2025-12-23T12:06:18.550673-08:00","close_reason":"Implemented bd gate commands: create, show, list, close, wait. All support --json output and work with wisp/gate fields.","dependencies":[{"issue_id":"bd-u66e","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.823353-08:00","created_by":"daemon"},{"issue_id":"bd-u66e","depends_on_id":"bd-lz49","type":"blocks","created_at":"2025-12-23T11:44:56.349662-08:00","created_by":"daemon"}]} -{"id":"bd-u8g4","title":"Merge: bd-dxtc","description":"branch: polecat/cheedo\ntarget: main\nsource_issue: bd-dxtc\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:42:17.222567-08:00","updated_at":"2025-12-23T21:21:57.696047-08:00","closed_at":"2025-12-23T21:21:57.696047-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-ucgz","title":"Migration invariants should exclude external dependencies from orphan check","description":"## Summary\n\nThe `checkForeignKeys` function in `migration_invariants.go` flags external dependencies as orphaned because they don't exist in the local issues table.\n\n## Location\n\n`internal/storage/sqlite/migration_invariants.go` around line 150-162\n\n## Current Code (buggy)\n\n```go\n// Check for orphaned dependencies\nvar orphanCount int\nerr = db.QueryRowContext(ctx, \\`\n SELECT COUNT(*)\n FROM dependencies d\n WHERE NOT EXISTS (SELECT 1 FROM issues WHERE id = d.depends_on_id)\n\\`).Scan(\u0026orphanCount)\n```\n\n## Fix\n\nExclude external references (format: `external:\u003cproject\u003e:\u003ccapability\u003e`):\n\n```go\n// Check for orphaned dependencies (excluding external refs)\nvar orphanCount int\nerr = db.QueryRowContext(ctx, \\`\n SELECT COUNT(*)\n FROM 
dependencies d\n WHERE NOT EXISTS (SELECT 1 FROM issues WHERE id = d.depends_on_id)\n AND d.depends_on_id NOT LIKE 'external:%'\n\\`).Scan(\u0026orphanCount)\n```\n\n## Reproduction\n\n```bash\n# Add external dependency\nbd dep add bd-xxx external:other-project:some-capability\n\n# Try to sync - fails\nbd sync\n# Error: found 1 orphaned dependencies\n```\n\n## Files to Modify\n\n1. **internal/storage/sqlite/migration_invariants.go** - Add WHERE clause\n\n## Testing\n\n```bash\n# Create issue with external dep\nbd create \"Test external deps\" -t task\nbd dep add bd-xxx external:beads:release-workflow\n\n# Sync should succeed\nbd sync\n\n# Verify dep exists\nbd show bd-xxx --json | jq .dependencies\n```\n\n## Success Criteria\n- External dependencies don't trigger orphan check\n- `bd sync` succeeds with external deps\n- Regular orphan detection still works for internal deps","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-23T12:37:08.99387-08:00","updated_at":"2025-12-23T12:42:03.722691-08:00","closed_at":"2025-12-23T12:42:03.722691-08:00","close_reason":"Fixed in commit 90196eca - added NOT LIKE 'external:%' exclusion"} -{"id":"bd-udsi","title":"Async Gates for Agent Coordination","description":"Agents need an async primitive for waiting on external events (CI completion, API responses, human approval). 
Gates are wisp issues that block until external conditions are met, managed by the Deacon.\n\n## Core Concepts\n\n**Gate** = wisp issue that blocks until external condition is met\n- Type: gate\n- Phase: wisp (never synced, ephemeral)\n- Assignee: deacon/ (Deacon monitors it)\n- Fields: await_type, await_id, timeout, waiters[]\n\n**Await Types:**\n- gh:run:\u003cid\u003e - GitHub Actions run completion\n- gh:pr:\u003cid\u003e - PR merged/closed\n- timer:\u003cduration\u003e - Simple delay\n- human:\u003cprompt\u003e - Human approval required\n- mail:\u003cpattern\u003e - Wait for mail matching pattern\n\n## Open Questions\n- Should gates live in wisp storage or main storage with wisp flag?\n- Do we need a gate catalog (like molecule catalog)?\n- Should waits-for dep type work with gates?","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-23T11:44:02.711062-08:00","updated_at":"2025-12-23T12:24:43.537615-08:00","closed_at":"2025-12-23T12:24:43.537615-08:00","close_reason":"Core beads implementation complete (types, migration, CLI). Remaining Deacon integration work moved to gastown (gt-dh65, gt-ng6g, gt-fqcz, gt-gswn)."} -{"id":"bd-umbf","title":"Design contributor namespace isolation for beads pollution prevention","description":"## Problem\n\nWhen contributors work on beads-the-project using beads-the-tool, their personal work-tracking issues leak into PRs. The .beads/issues.jsonl is intentionally tracked (it's the project's issue database), but contributors' local issues pollute the diff.\n\nThis is a recursion problem unique to self-hosting projects.\n\n## Possible Solutions to Explore\n\n1. **Contributor namespaces** - Each contributor gets a private prefix (e.g., `bd-steve-xxxx`) that's gitignored or filtered\n2. **Separate database** - Contributors use BEADS_DIR pointing elsewhere for personal tracking\n3. **Issue ownership/visibility flags** - Mark issues as \"local-only\" vs \"project\"\n4. 
**Prefix-based filtering** - Configure which prefixes are committed vs ignored\n\n## Design Considerations\n\n- Should be zero-friction for contributors (no manual setup)\n- Must not break existing workflows\n- Needs to work with sync/collaboration features\n- Consider: what if a \"personal\" issue graduates to \"project\" issue?\n\n## Expansion Needed\n\nThis is a placeholder. Needs detailed design exploration before implementation.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-13T18:00:29.638743-08:00","updated_at":"2025-12-13T18:00:41.345673-08:00"} -{"id":"bd-uqfn","title":"Work on beads-wkt: Output control parameters for MCP tool...","description":"Work on beads-wkt: Output control parameters for MCP tools (GH#622). Add brief, fields, max_description_length params to ready/list/show. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:57:10.675535-08:00","updated_at":"2025-12-20T00:49:51.929271-08:00","closed_at":"2025-12-19T23:28:25.362931-08:00","close_reason":"Implemented output control parameters for MCP tools (GH#622)"} -{"id":"bd-usro","title":"Rename 'template instantiate' to 'mol bond'","description":"Rename the template instantiation command to match molecule metaphor.\n\nCurrent: bd template instantiate \u003cid\u003e --var key=value\nTarget: bd mol bond \u003cid\u003e --var key=value\n\nChanges needed:\n- Add 'mol' command group (or extend existing)\n- Add 'bond' subcommand that wraps template instantiate logic\n- Keep 'template instantiate' as deprecated alias for backward compat\n- Update help text and docs to use molecule terminology\n\nThe 'bond' verb captures:\n1. Chemistry metaphor (molecules bond to form structures)\n2. Dependency linking (child issues bonded in a DAG)\n3. 
Short and active\n\nSee also: molecule execution model in Gas Town","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T16:56:37.582795-08:00","updated_at":"2025-12-20T23:22:43.567337-08:00","closed_at":"2025-12-20T23:22:43.567337-08:00","close_reason":"Implemented mol command: catalog, show, bond"} -{"id":"bd-uutv","title":"Work on beads-rs0: Namepool configuration for themed pole...","description":"Work on beads-rs0: Namepool configuration for themed polecat names. See bd show beads-rs0 for full details.","status":"closed","priority":2,"issue_type":"task","assignee":"beads/polecat-02","created_at":"2025-12-19T21:49:48.129778-08:00","updated_at":"2025-12-19T21:59:25.565894-08:00","closed_at":"2025-12-19T21:59:25.565894-08:00","close_reason":"Completed work on beads-rs0: Implemented themed namepool feature"} -{"id":"bd-uwkp","title":"Phase 2.4: Git merge driver optimization for TOON format","description":"Optimize git 3-way merge for TOON line-oriented format.\n\n## Overview\nTOON is line-oriented (unlike binary formats), enabling smarter git merge strategies. 
Implement custom merge driver to handle TOON-specific merge patterns.\n\n## Required Work\n\n### 2.4.1 TOON Merge Driver\n- [ ] Create .git/info/attributes entry for *.toon files\n- [ ] Implement custom merge driver script/command\n- [ ] Handle tabular format row merges (line-based 3-way)\n- [ ] Handle YAML-style format merges\n- [ ] Conflict markers for unsolvable conflicts\n\n### 2.4.2 Merge Patterns\n- [ ] Row addition: both branches add different rows → union\n- [ ] Row deletion: one branch deletes, other modifies → conflict (manual review)\n- [ ] Row modification: concurrent field changes → intelligent merge or conflict\n- [ ] Field ordering changes: ignore (TOON format resilient to order)\n\n### 2.4.3 Testing \u0026 Documentation\n- [ ] Unit tests for merge scenarios (3-way merge logic)\n- [ ] Integration tests with actual git merges\n- [ ] Conflict scenario testing\n- [ ] Documentation of merge strategy\n\n## Success Criteria\n- Git merge handles TOON conflicts intelligently\n- Fewer manual merge conflicts than JSONL\n- Round-trip preserved through merges\n- All 70+ tests still passing\n- Git history stays clean (minimal conflict markers)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:43:14.339238776-07:00","updated_at":"2025-12-21T14:42:26.434306-08:00","closed_at":"2025-12-21T14:42:26.434306-08:00","close_reason":"TOON approach declined","dependencies":[{"issue_id":"bd-uwkp","depends_on_id":"bd-iic1","type":"discovered-from","created_at":"2025-12-19T14:43:14.34427988-07:00","created_by":"daemon"}]} -{"id":"bd-uz8r","title":"Phase 2.3: TOON deletion tracking","description":"Implement deletion tracking in TOON format.\n\n## Overview\nPhase 2.2 switched storage to TOON format. 
Phase 2.3 adds deletion tracking in TOON format for propagating deletions across clones.\n\n## Required Work\n\n### 2.3.1 Deletion Tracking (TOON Format)\n- [ ] Implement deletions.toon file (tracking deleted issue records)\n- [ ] Add DeleteTracker struct to record deleted issue IDs and metadata\n- [ ] Update bdt delete command to record in deletions.toon\n- [ ] Design deletion record format (ID, timestamp, reason, hash)\n- [ ] Implement auto-prune of old deletion records (configurable TTL)\n\n### 2.3.2 Sync Propagation\n- [ ] Load deletions.toon during import\n- [ ] Remove deleted issues from local database when imported from remote\n- [ ] Handle edge cases (delete same issue in multiple clones)\n- [ ] Deletion ordering and conflict resolution\n\n### 2.3.3 Testing\n- [ ] Unit tests for deletion tracking\n- [ ] Integration tests for deletion propagation\n- [ ] Multi-clone deletion scenarios\n- [ ] TTL expiration tests\n\n## Success Criteria\n- deletions.toon stores deletion records in TOON format\n- Deletions propagate across clones via git sync\n- Old records auto-prune after TTL\n- All 70+ tests still passing\n- bdt delete command works seamlessly","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:37:23.722066816-07:00","updated_at":"2025-12-21T14:42:27.491932-08:00","closed_at":"2025-12-21T14:42:27.491932-08:00","close_reason":"TOON approach declined","dependencies":[{"issue_id":"bd-uz8r","depends_on_id":"bd-iic1","type":"discovered-from","created_at":"2025-12-19T14:37:23.726825771-07:00","created_by":"daemon"}]} -{"id":"bd-vgi5","title":"Push version bump to GitHub","description":"git push origin main - triggers CI but no release 
yet.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:05.363604-08:00","updated_at":"2025-12-18T22:46:57.50777-08:00","closed_at":"2025-12-18T22:46:57.50777-08:00","dependencies":[{"issue_id":"bd-vgi5","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.87736-08:00","created_by":"daemon"},{"issue_id":"bd-vgi5","depends_on_id":"bd-3ggb","type":"blocks","created_at":"2025-12-18T22:43:21.078208-08:00","created_by":"daemon"}]} -{"id":"bd-vks2","title":"bd dep tree doesn't display external dependencies","description":"GetDependencyTree (dependencies.go:464-624) uses a recursive CTE that JOINs with the issues table, which means external refs (external:project:capability) are invisible in the tree output.\n\nWhen an issue has an external blocking dependency, running 'bd dep tree \u003cid\u003e' won't show it.\n\nOptions:\n1. Query dependencies table separately for external refs and display them as leaf nodes\n2. Add a synthetic 'external' node type that shows the ref and resolution status\n3. Document that external deps aren't shown in tree view (use bd show for full deps)\n\nLower priority since bd show \u003cid\u003e displays all dependencies including external refs.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-21T23:45:27.121934-08:00","updated_at":"2025-12-22T22:30:19.083652-08:00","closed_at":"2025-12-22T22:30:19.083652-08:00","close_reason":"Implemented: GetDependencyTree now fetches external deps and adds them as synthetic leaf nodes with resolution status. Added test TestGetDependencyTreeExternalDeps. Updated formatTreeNode to display external deps specially.","dependencies":[{"issue_id":"bd-vks2","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:27.122511-08:00","created_by":"daemon"}]} -{"id":"bd-vpan","title":"Re: Thread Test 2","description":"Got your message. 
Testing reply feature.","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:21:29.144352-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-vpan","depends_on_id":"bd-x36g","type":"replies-to","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} -{"id":"bd-vs9","title":"Fix unparam unused parameter in cmd/bd/doctor.go:541","description":"Linting issue: checkHooksQuick - path is unused (unparam) at cmd/bd/doctor.go:541:22. Error: func checkHooksQuick(path string) string {","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:17.02177046-07:00","updated_at":"2025-12-17T23:13:40.535743-08:00","closed_at":"2025-12-17T16:46:11.028332-08:00"} -{"id":"bd-vzds","title":"Create git tag v0.33.2","description":"Create the release tag:\n\n```bash\ngit tag v0.33.2\n```\n\nVerify: `git tag | grep 0.33.2`","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761888-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Tag v0.33.2 exists","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-w193","title":"Work on beads-399: Add omitempty to JSONL fields for smal...","description":"Work on beads-399: Add omitempty to JSONL fields for smaller notifications. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:55:37.440894-08:00","updated_at":"2025-12-19T23:28:32.42751-08:00","closed_at":"2025-12-19T23:23:09.542288-08:00","close_reason":"Added omitempty to Description and Dependency.CreatedBy fields in types.go"} -{"id":"bd-w8g0","title":"test pin issue","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-20T22:44:27.963361-08:00","updated_at":"2025-12-20T22:44:57.977229-08:00","deleted_at":"2025-12-20T22:44:57.977229-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} -{"id":"bd-wc2","title":"Test body-file","description":"This is a test description from a file.\n\nIt has multiple lines.\n","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-17T17:27:20.508724-08:00","updated_at":"2025-12-17T17:28:33.83142-08:00","closed_at":"2025-12-17T17:28:33.83142-08:00"} -{"id":"bd-whgv","title":"Merge: bd-401h","description":"branch: polecat/rictus\ntarget: main\nsource_issue: bd-401h\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:20:37.854953-08:00","updated_at":"2025-12-20T23:17:26.999477-08:00","closed_at":"2025-12-20T23:17:26.999477-08:00","close_reason":"Branches nuked, MRs obsolete"} -{"id":"bd-wp5j","title":"Merge: bd-indn","description":"branch: polecat/rictus\ntarget: main\nsource_issue: bd-indn\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:45:51.286598-08:00","updated_at":"2025-12-23T21:21:57.697826-08:00","closed_at":"2025-12-23T21:21:57.697826-08:00","close_reason":"stale - no code pushed"} -{"id":"bd-wu62","title":"Gate: timer:1m","status":"open","priority":1,"issue_type":"gate","assignee":"deacon/","created_at":"2025-12-23T13:42:57.169229-08:00","updated_at":"2025-12-23T13:42:57.169229-08:00","wisp":true} -{"id":"bd-x1xs","title":"Work on 
beads-1ra: Add molecules.jsonl as separate catalo...","description":"Work on beads-1ra: Add molecules.jsonl as separate catalog file for template molecules","status":"closed","priority":2,"issue_type":"task","assignee":"beads/polecat-01","created_at":"2025-12-19T20:17:44.840032-08:00","updated_at":"2025-12-21T15:28:17.633716-08:00","closed_at":"2025-12-21T15:28:17.633716-08:00","close_reason":"Implemented: molecules.jsonl loading, is_template column, template filtering in bd list (excluded by default), --include-templates flag, bd mol list catalog view"} -{"id":"bd-x2bd","title":"Merge: bd-likt","description":"branch: polecat/Gater\ntarget: main\nsource_issue: bd-likt\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T13:46:27.091846-08:00","updated_at":"2025-12-23T19:12:08.355637-08:00","closed_at":"2025-12-23T19:12:08.355637-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} +{"id":"bd-8fgn","title":"test hash length","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T13:49:32.113843-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-4uoc","title":"Code Review Followup Summary: PR #481 + PR #551","description":"## Merged PRs Summary\n\n### PR #551: Persist close_reason to issues table\n- ✅ Merged successfully\n- ✅ Bug fix: close_reason now persisted in database column (not just events table)\n- ✅ Comprehensive test coverage added\n- ✅ Handles reopen case (clearing close_reason)\n\n**Followup Issues Filed:**\n- bd-lxzx: Document close_reason in JSONL export format\n- bd-077e: Update CLI documentation for close_reason field\n\n---\n\n### PR #481: Context Engineering Optimizations (80-90% context reduction)\n- ✅ Merged successfully \n- ✅ Lazy tool discovery: discover_tools() + 
get_tool_info()\n- ✅ Minimal issue models: IssueMinimal (~80% smaller than full Issue)\n- ✅ Result compaction: Auto-compacts results \u003e20 items\n- ✅ All 28 tests passing\n- ⚠️ Breaking change: ready() and list() return type changed\n\n**Followup Issues Filed:**\n- bd-b318: Add integration tests for CompactedResult\n- bd-4u2b: Make compaction settings configurable (THRESHOLD, PREVIEW_COUNT)\n- bd-2kf8: Document CompactedResult response format in CONTEXT_ENGINEERING.md\n- bd-pdr2: Document backwards compatibility considerations\n\n---\n\n## Overall Assessment\n\nBoth PRs are production-ready with solid implementations. All critical functionality works and tests pass. Followup issues focus on:\n1. Documentation improvements (5 issues)\n2. Integration test coverage (1 issue)\n3. Configuration flexibility (1 issue)\n4. Backwards compatibility guidance (1 issue)\n\nNo critical bugs or design issues found.\n\n## Review Completed By\nCode review process completed. Issues auto-created for tracking improvements.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:25:59.214886-08:00","updated_at":"2025-12-14T14:25:59.214886-08:00","dependencies":[{"issue_id":"bd-4uoc","depends_on_id":"bd-otf4","type":"discovered-from","created_at":"2025-12-14T14:25:59.216884-08:00","created_by":"stevey"},{"issue_id":"bd-4uoc","depends_on_id":"bd-z86n","type":"discovered-from","created_at":"2025-12-14T14:25:59.217296-08:00","created_by":"stevey"}]} +{"id":"bd-indn","title":"bd template commands fail with daemon mode","description":"The `bd template show` and `bd template instantiate` commands fail with 'Error loading template: no database connection' when daemon is running.\n\n**Reproduction:**\n```bash\nbd daemon --start\nbd template show bd-qqc # Error: no database connection\nbd template show bd-qqc --no-daemon # Works\n```\n\n**Expected:** Template commands should work with daemon like other commands.\n\n**Workaround:** Use `--no-daemon` 
flag.\n\n**Location:** Likely in cmd/bd/template.go - daemon RPC path not implemented for template operations.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-18T22:57:35.16596-08:00","updated_at":"2025-12-23T22:29:35.689875-08:00","closed_at":"2025-12-23T20:45:12.491417-08:00"} +{"id":"bd-rp4o","title":"Deleted issues resurrect during bd sync (tombstones not propagating)","description":"## Problem\n\nWhen issues are deleted with bd delete --force, they get deleted from the local DB but resurrect during the next bd sync.\n\n## Reproduction\n\n1. Observe orphan warnings (bd-cb64c226.*, bd-cbed9619.*)\n2. Delete them: bd delete bd-cb64c226.1 ... --force\n3. Run bd sync\n4. Orphan warnings reappear - issues were resurrected!\n\n## Root Cause Hypothesis\n\nThe sync branch workflow (beads-sync) has the old state before deletions. When bd sync pulls from beads-sync and copies JSONL to main, the deleted issues are re-imported.\n\nTombstones may not be properly:\n1. Written to beads-sync during export\n2. Propagated during pull/merge\n3. 
Honored during import\n\n## Related\n\n- bd-7b7h: chicken-and-egg sync.branch bug (same workflow)\n- bd-ncwo: ID-based fallback matching to prevent ghost resurrection\n\n## Files to Investigate\n\n- cmd/bd/sync.go (export/import flow)\n- internal/syncbranch/worktree.go (PullFromSyncBranch, copyJSONLToMainRepo)\n- internal/importer/ (tombstone handling)","status":"tombstone","priority":1,"issue_type":"bug","created_at":"2025-12-16T23:09:43.072696-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} {"id":"bd-x36g","title":"Thread Test 2","description":"Testing direct mode","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:21:16.470631-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} -{"id":"bd-x3hi","title":"Support redirect files in .beads/ directory","description":"Gas Town creates polecat worktrees with .beads/redirect files that point to a shared beads database. The bd CLI should:\n\n1. When finding a .beads/ directory, check if it contains a 'redirect' file\n2. If redirect exists, read the relative path and use that as the beads directory\n3. 
This allows multiple git worktrees to share a single beads database\n\nExample:\n- polecats/alpha/.beads/redirect contains '../../mayor/rig/.beads'\n- bd commands from alpha should use mayor/rig/.beads\n\nCurrently bd ignores redirect files and either uses the local .beads/ or walks up to find a parent .beads/.\n\nRelated: gt-nriy (test message that can't be retrieved due to missing redirect support)","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T21:46:23.415172-08:00","updated_at":"2025-12-20T21:59:25.759664-08:00","closed_at":"2025-12-20T21:59:25.759664-08:00","close_reason":"Not applicable - filed against stale bd v0.30.6"} -{"id":"bd-x3j8","title":"Update info.go versionChanges","description":"Add 0.32.1 entry to versionChanges map in cmd/bd/info.go","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:17.344841-08:00","updated_at":"2025-12-20T21:54:31.906761-08:00","closed_at":"2025-12-20T21:54:31.906761-08:00","close_reason":"Added 0.32.1 to versionChanges","dependencies":[{"issue_id":"bd-x3j8","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:17.346736-08:00","created_by":"daemon"},{"issue_id":"bd-x3j8","depends_on_id":"bd-rgd7","type":"blocks","created_at":"2025-12-20T21:53:29.62309-08:00","created_by":"daemon"}]} -{"id":"bd-x5wg","title":"Create and push git tag v0.33.2","description":"Create the release tag and push it:\n\n```bash\ngit tag v0.33.2\ngit push origin v0.33.2\n```\n\nThis triggers the GoReleaser GitHub Action to build release binaries.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.76223-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Duplicate of bd-vzds, tag exists","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-xctp","title":"GH#519: bd sync fails when sync.branch is currently 
checked-out branch","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:06:05.319281-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-xj2e","title":"GH#522: Add --type flag to bd update command","description":"Add --type flag to bd update for changing issue type (task/epic/bug/feature). Storage layer already supports it. See GitHub issue #522.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T01:03:12.506583-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} -{"id":"bd-xo1o","title":"Dynamic Molecule Bonding: Fanout patterns for patrol molecules","description":"## Vision\n\nEnable molecules to dynamically spawn child molecules at runtime based on discovered\nwork. This is the foundation for the \"Christmas Ornament\" pattern where a patrol\nmolecule grows arms per-polecat.\n\n## The Activity Feed Vision\n\nInstead of parsing agent logs, users see structured work state:\n\n```\n[14:32:08] + patrol-x7k.arm-ace bonded (5 steps)\n[14:32:08] + patrol-x7k.arm-nux bonded (5 steps)\n[14:32:09] → patrol-x7k.arm-ace.capture in_progress\n[14:32:10] ✓ patrol-x7k.arm-ace.capture completed\n[14:32:14] ✓ patrol-x7k.arm-ace.decide completed (action: nudge-1)\n```\n\nThis requires beads to track molecule step state transitions in real-time.\n\n## Key Primitives Needed\n\n### 1. Dynamic Bond with Variables\n```bash\nbd mol bond mol-polecat-arm \u003cparent-wisp-id\u003e \\\n --var polecat_name=ace \\\n --var rig=gastown\n```\n\nCreates wisp children under the parent:\n- parent-id.arm-ace\n- parent-id.arm-ace.capture\n- parent-id.arm-ace.assess\n- etc.\n\n### 2. 
WaitsFor Directive\n```markdown\n## Step: aggregate\nCollect outcomes from all dynamically-bonded children.\nWaitsFor: all-children\nNeeds: survey-workers\n```\n\nThe `WaitsFor: all-children` directive makes this a fanout gate - it can't\nproceed until ALL dynamically-bonded children complete.\n\n### 3. Activity Feed Query\n```bash\nbd activity --follow # Real-time state stream\nbd activity --mol \u003cid\u003e # Activity for specific molecule\nbd activity --since 5m # Last 5 minutes\n```\n\n### 4. Parallel Step Detection\nSteps with no inter-dependencies should be flagged as parallelizable.\nWhen arms are bonded, their steps can run in parallel across arms.\n\n## Use Case: mol-witness-patrol\n\nThe Witness monitors N polecats where N varies at runtime:\n\n```\nsurvey-workers discovers: [ace, nux, toast]\nFor each polecat:\n bd mol bond mol-polecat-arm \u003cpatrol-id\u003e --var polecat_name=\u003cname\u003e\naggregate step waits for all arms to complete\n```\n\nThis creates the Christmas Ornament shape:\n- Trunk: preflight steps\n- Arms: per-polecat inspection molecules\n- Base: cleanup after all arms complete\n\n## Design Docs\n\nSee Gas Town docs:\n- docs/molecular-chemistry.md (updated with Christmas Ornament pattern)\n- docs/architecture.md (activity feed section)\n\n## Dependencies\n\nThis epic may depend on:\n- Wisp storage (.beads-wisp/) - already implemented\n- Variable substitution in molecules - may need enhancement","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-23T02:32:43.173305-08:00","updated_at":"2025-12-23T04:01:02.729388-08:00","closed_at":"2025-12-23T04:01:02.729388-08:00","close_reason":"All subtasks completed: dynamic bonding (.1), waits-for gates (.2), activity feed (.3), parallel detection (.4)"} -{"id":"bd-xo1o.1","title":"bd mol bond: Dynamic bond with variable substitution","description":"Implement dynamic molecule bonding with runtime variable substitution.\n\n## Command\n```bash\nbd mol bond 
\u003cproto-id\u003e \u003cparent-wisp-id\u003e --var key=value --var key2=value2\n```\n\n## Behavior\n1. Parse proto molecule template\n2. Substitute {{key}} placeholders with provided values\n3. Create wisp children under the parent molecule\n4. Child IDs follow pattern: parent-id.child-ref (e.g., patrol-x7k.arm-ace)\n5. Nested children: parent-id.child-ref.step-ref (e.g., patrol-x7k.arm-ace.capture)\n\n## Variable Substitution\n- In step titles: \"Inspect {{polecat_name}}\" -\u003e \"Inspect ace\"\n- In descriptions: Full template substitution\n- In Needs directives: Allow referencing parent steps\n\n## Output\n```\n✓ Bonded mol-polecat-arm to patrol-x7k\n Created: patrol-x7k.arm-ace (5 steps)\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T02:33:13.878996-08:00","updated_at":"2025-12-23T03:38:03.54745-08:00","closed_at":"2025-12-23T03:38:03.54745-08:00","close_reason":"Implemented dynamic molecule bonding with --ref flag for custom child IDs (Christmas Ornament pattern)","dependencies":[{"issue_id":"bd-xo1o.1","depends_on_id":"bd-xo1o","type":"parent-child","created_at":"2025-12-23T02:33:13.879419-08:00","created_by":"daemon"}]} -{"id":"bd-xo1o.2","title":"WaitsFor directive: Fanout gate for dynamic children","description":"Implement WaitsFor directive for molecules that spawn dynamic children.\n\n## Syntax\n```markdown\n## Step: aggregate\nCollect outcomes from all dynamically-bonded children.\nWaitsFor: all-children\nNeeds: survey-workers\n```\n\n## Behavior\n1. Parse WaitsFor directive during molecule step parsing\n2. Track which steps spawn dynamic children (the spawner)\n3. Gate step waits until ALL children of the spawner complete\n4. 
Works with bd ready - gate step not ready until children done\n\n## Gate Types\n- `WaitsFor: all-children` - Wait for all dynamic children\n- `WaitsFor: any-children` - Proceed when first child completes (future)\n- `WaitsFor: \u003cstep-ref\u003e.children` - Wait for specific spawner's children\n\n## Integration\n- bd ready should skip gate steps until children complete\n- bd show \u003cmol\u003e should display gate status and child count","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T02:33:14.946475-08:00","updated_at":"2025-12-23T04:00:09.443106-08:00","closed_at":"2025-12-23T04:00:09.443106-08:00","close_reason":"Implemented waits-for dependency type with all-children and any-children gates, --waits-for flags, and full test coverage","dependencies":[{"issue_id":"bd-xo1o.2","depends_on_id":"bd-xo1o","type":"parent-child","created_at":"2025-12-23T02:33:14.950008-08:00","created_by":"daemon"}]} -{"id":"bd-xo1o.3","title":"bd activity: Real-time molecule state feed","description":"Implement activity feed command for watching molecule state transitions.\n\n## Commands\n```bash\nbd activity --follow # Real-time streaming\nbd activity --mol \u003cid\u003e # Activity for specific molecule\nbd activity --since 5m # Last 5 minutes\nbd activity --type step # Only step transitions\n```\n\n## Output Format\n```\n[14:32:01] ✓ patrol-x7k.inbox-check completed\n[14:32:03] ✓ patrol-x7k.check-refinery completed\n[14:32:08] + patrol-x7k.arm-ace bonded (5 steps)\n[14:32:09] → patrol-x7k.arm-ace.capture in_progress\n[14:32:10] ✓ patrol-x7k.arm-ace.capture completed\n[14:32:14] ✓ patrol-x7k.arm-ace.decide completed (action: nudge-1)\n[14:32:17] ✓ patrol-x7k.arm-ace COMPLETE\n[14:32:23] ✓ patrol-x7k SQUASHED → digest-x7k\n```\n\n## Event Types\n- `+` bonded - New molecule/step created\n- `→` in_progress - Step started\n- `✓` completed - Step/molecule finished\n- `✗` failed - Step failed\n- `⊘` burned - Wisp discarded\n- `◉` 
squashed - Wisp condensed to digest\n\n## Implementation\n- Could use SQLite triggers or polling\n- --follow uses OS file watching or polling\n- Filter by mol ID, type, time range","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T02:33:16.298764-08:00","updated_at":"2025-12-23T03:18:33.434079-08:00","closed_at":"2025-12-23T03:18:33.434079-08:00","close_reason":"Implemented bd activity command with real-time feed, filtering, and new mutation event types","dependencies":[{"issue_id":"bd-xo1o.3","depends_on_id":"bd-xo1o","type":"parent-child","created_at":"2025-12-23T02:33:16.301522-08:00","created_by":"daemon"}]} -{"id":"bd-xo1o.4","title":"Parallel step detection in molecules","description":"Detect and flag parallelizable steps in molecules.\n\n## Detection Rules\nSteps can run in parallel when:\n1. No Needs dependencies between them\n2. Not in same sequential chain\n3. Across dynamic arms (arm-ace and arm-nux can parallelize)\n\n## Output in bd mol show\n```\npatrol-x7k (mol-witness-patrol)\n├── inbox-check [completed]\n├── survey-workers [completed]\n│ ├── arm-ace [parallel group A]\n│ │ ├── capture [can parallelize]\n│ │ ├── assess [needs: capture]\n│ │ └── execute [needs: assess]\n│ └── arm-nux [parallel group A]\n│ ├── capture [can parallelize]\n│ └── ...\n├── aggregate [gate: waits for all-children]\n```\n\n## Flags\n- `bd mol show \u003cid\u003e --parallel` - Highlight parallel opportunities\n- `bd ready --mol \u003cid\u003e` - List all steps that can run now\n\n## Future: Parallel Execution Hints\nFor agents using Task tool subagents:\n- Identify independent arms that can run simultaneously\n- Suggest parallelization strategy","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T02:33:17.660368-08:00","updated_at":"2025-12-23T03:56:39.653982-08:00","closed_at":"2025-12-23T03:56:39.653982-08:00","close_reason":"Implemented --parallel flag for bd mol 
show and --mol flag for bd ready","dependencies":[{"issue_id":"bd-xo1o.4","depends_on_id":"bd-xo1o","type":"parent-child","created_at":"2025-12-23T02:33:17.662232-08:00","created_by":"daemon"}]} -{"id":"bd-xsl9","title":"Remove legacy autoflush code paths","description":"## Problem\n\nThe autoflush system has dual code paths - an old timer-based approach and a new FlushManager. Both are actively used based on whether flushManager is nil.\n\n## Locations\n\n- main.go:78-81: isDirty, needsFullExport, flushTimer marked 'used by legacy code'\n- autoflush.go:291-369: Functions with 'Legacy path for backward compatibility with tests'\n\n## Current Behavior\n\n```go\n// In markDirtyAndScheduleFlush():\nif flushManager != nil {\n flushManager.MarkDirty(false)\n return\n}\n// Legacy path for backward compatibility with tests\n```\n\n## Proposed Fix\n\n1. Ensure flushManager is always initialized (even in tests)\n2. Remove the legacy timer-based code paths\n3. Remove isDirty, needsFullExport, flushTimer globals\n4. Update tests to use FlushManager\n\n## Risk\n\nLow - the FlushManager is the production path. Legacy code only runs when flushManager is nil (test scenarios).","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-21T15:49:30.83769-08:00","updated_at":"2025-12-23T01:54:59.09333-08:00","closed_at":"2025-12-23T01:54:59.09333-08:00","close_reason":"Removed legacy autoflush code: isDirty, needsFullExport, flushTimer globals and flushToJSONL() wrapper. 
FlushManager is now the only code path."} -{"id":"bd-xurv","title":"Restart daemon with 0.33.2","description":"Restart the bd daemon to pick up new version:\n\n```bash\nbd daemon --stop\nbd daemon --start\nbd daemon --health # Verify Version: 0.33.2\n```","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760884-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","close_reason":"Daemons running 0.33.2","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task","wisp":true} -{"id":"bd-y0fj","title":"Issue lifecycle hooks (on-close, on-complete)","description":"Add hooks that fire on issue state transitions, enabling automation like closing linked GitHub issues.\n\n## Problem\n\nWe have `external_ref` to link beads issues to external systems (GitHub, Linear, Jira), but no mechanism to trigger actions when issues close. Currently:\n\n```\nbd-u2sc (external_ref: gh-692) closes → nothing happens\n```\n\n## Proposed Solution\n\n### Phase 1: Shell Hooks\n\nAdd `.beads-hooks/on-close.sh` (and similar lifecycle hooks):\n\n```bash\n# .beads-hooks/on-close.sh\n# Called by bd close with issue JSON on stdin\n#\!/bin/bash\nissue=$(cat)\nexternal_ref=$(echo \"$issue\" | jq -r '.external_ref // empty')\nif [[ \"$external_ref\" == gh-* ]]; then\n number=\"${external_ref#gh-}\"\n gh issue close \"$number\" --repo steveyegge/beads \\\n --comment \"Completed via beads epic $(echo $issue | jq -r .id)\"\nfi\n```\n\n### Lifecycle Events\n\n| Event | Trigger | Use Cases |\n|-------|---------|-----------|\n| `on-close` | Issue closed | Close external refs, notify, archive |\n| `on-complete` | Epic children all done | Roll-up completion, close parent refs |\n| `on-status-change` | Any status transition | Sync to external systems |\n\n### Phase 2: Molecule Completion Handlers\n\nMolecules could define completion actions:\n\n```yaml\nname: github-issue-tracker\non_complete:\n - 
action: shell\n command: gh issue close {{external_ref}} --repo {{repo}}\n - action: mail\n to: mayor/\n subject: \"Epic {{id}} completed\"\n```\n\n### Phase 3: Gas Town Integration\n\nFor full Gas Town deployments:\n- Witness observes closures via beads events\n- Routes to integration agents via mail\n- Agents handle external system interactions\n\n## Implementation Notes\n\n- Hooks should be async (don't block bd close)\n- Pass full issue JSON to hook via stdin\n- Support hook timeout and failure handling\n- Consider `--no-hooks` flag for bulk operations\n\n## Related\n\n- `external_ref` field already exists (GH#142)\n- Cross-project deps: bd-h807, bd-d9mu\n- Git hooks: .beads-hooks/ pattern established\n\n## Use Cases\n\n1. **GitHub integration**: Close GH issues when beads epic completes\n2. **Linear sync**: Update Linear status when beads status changes \n3. **Notifications**: Send mail/Slack when high-priority issues close\n4. **Audit**: Log all closures to external system","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-22T14:46:04.846657-08:00","updated_at":"2025-12-22T14:50:40.35447-08:00","closed_at":"2025-12-22T14:50:40.35447-08:00","close_reason":"Molecules already cover this use case. Completion actions should be encoded as tail steps in molecules rather than lifecycle hooks. This keeps everything in the beads data plane, makes it resumable/auditable, and allows tiered delegation (haiku for simple steps). Hooks would escape the ledger and add a parallel system without clear benefit."} -{"id":"bd-y2v","title":"Refactor duplicate JSONL-from-git parsing code","description":"Both readFirstIssueFromGit() in init.go and importFromGit() in autoimport.go have similar code patterns for:\n1. Running git show \u003cref\u003e:\u003cpath\u003e\n2. Scanning the output with bufio.Scanner\n3. 
Parsing JSON lines\n\nCould be refactored to share a helper like:\n- readJSONLFromGit(gitRef, path string) ([]byte, error)\n- Or a streaming version: streamJSONLFromGit(gitRef, path string) (io.Reader, error)\n\nFiles:\n- cmd/bd/autoimport.go:225-256 (importFromGit)\n- cmd/bd/init.go:1212-1243 (readFirstIssueFromGit)\n\nPriority is low since code duplication is minimal and both functions work correctly.","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/juliet","created_at":"2025-12-05T14:51:18.41124-08:00","updated_at":"2025-12-23T22:29:35.786445-08:00"} -{"id":"bd-y4vz","title":"Work on beads-eub: Consolidated context tool for MCP serv...","description":"Work on beads-eub: Consolidated context tool for MCP server (GH#636). Merge set_context, where_am_i, init into single 'context' tool. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:56:58.527144-08:00","updated_at":"2025-12-20T00:49:51.929597-08:00","closed_at":"2025-12-19T23:31:11.906952-08:00","close_reason":"Completed: Consolidated set_context, where_am_i, init into unified context tool"} -{"id":"bd-y7j8","title":"Update info.go versionChanges","description":"Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for test-squash","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066625-08:00","updated_at":"2025-12-21T13:53:49.554496-08:00","deleted_at":"2025-12-21T13:53:49.554496-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task","wisp":true} -{"id":"bd-y8bj","title":"Auto-detect identity from directory context for bd mail","description":"Currently bd mail inbox defaults to git user name, requiring --identity flag with exact format.\n\n## Problem\n- Mail sent to `gastown/crew/max`\n- Max runs `bd mail inbox` β†’ defaults to 'Steve Yegge' (git user)\n- Max must know to use `--identity 
'gastown/crew/max'` with exact slashes\n\n## Proposed Fix\nAuto-detect identity from directory context when in a Gas Town workspace:\n- In `/Users/stevey/gt/gastown/crew/max`, infer identity = `gastown/crew/max`\n- Pattern: `\u003ctown\u003e/\u003crig\u003e/\u003crole\u003e/\u003cname\u003e` β†’ `\u003crig\u003e/\u003crole\u003e/\u003cname\u003e`\n\n## Additional Improvements\n1. Support GT_IDENTITY env var (set by gt crew at / session spawning)\n2. Support identity in .beads/config.yaml\n3. Normalize format: accept both slashes and dashes as equivalent\n\n## Context\nDiscovered during crew-to-crew work assignment. Max couldn't see mail despite correct nudge because identity defaulted wrong.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-20T17:22:53.938586-08:00","updated_at":"2025-12-20T18:12:58.472262-08:00","closed_at":"2025-12-20T17:58:51.034201-08:00"} -{"id":"bd-y8tn","title":"Test Molecule","description":"A test molecule","status":"closed","priority":2,"issue_type":"molecule","created_at":"2025-12-19T18:30:24.491279-08:00","updated_at":"2025-12-19T18:31:12.49898-08:00","closed_at":"2025-12-19T18:31:12.49898-08:00","close_reason":"test molecule - deleting"} -{"id":"bd-yck","title":"Fix checkExistingBeadsData to be worktree-aware","description":"The checkExistingBeadsData function in cmd/bd/init.go checks for .beads in the current working directory, but for worktrees it should check the main repository root instead. 
This prevents proper worktree compatibility.","status":"in_progress","priority":2,"issue_type":"bug","assignee":"beads/hotel","created_at":"2025-12-07T16:48:32.082776345-07:00","updated_at":"2025-12-23T22:29:35.79267-08:00"} -{"id":"bd-ykd9","title":"Add bd doctor --fix flag to automatically repair issues","description":"Add bd doctor --fix flag to automatically repair detected issues.\n\n## Current State\n- Doctor checks return issues with \"Fix\" field containing fix instructions\n- No automatic fix execution\n- User must manually follow fix instructions\n\n## Implementation\n\n### 1. Add --fix flag to doctor.go\n```go\n// In cmd/bd/doctor.go init()\ndoctorCmd.Flags().Bool(\"fix\", false, \"Automatically fix detected issues\")\ndoctorCmd.Flags().Bool(\"yes\", false, \"Skip confirmation prompts (use with --fix)\")\n```\n\n### 2. Create fix registry (cmd/bd/doctor/fix/registry.go)\n```go\npackage fix\n\n// Fixer can automatically repair an issue\ntype Fixer interface {\n // CanFix returns true if this fixer handles the given check name\n CanFix(checkName string) bool\n // Fix attempts to repair the issue, returns error if failed\n Fix(ctx context.Context, issue *CheckResult) error\n // Description returns human-readable description of what will be fixed\n Description() string\n}\n\nvar registry []Fixer\n\nfunc Register(f Fixer) {\n registry = append(registry, f)\n}\n\nfunc GetFixer(checkName string) Fixer {\n for _, f := range registry {\n if f.CanFix(checkName) {\n return f\n }\n }\n return nil\n}\n```\n\n### 3. 
Implement fixers (one per file)\n\n**cmd/bd/doctor/fix/hooks.go**\n```go\ntype HooksFixer struct{}\n\nfunc (f *HooksFixer) CanFix(name string) bool {\n return name == \"git-hooks\" || name == \"hooks-outdated\"\n}\n\nfunc (f *HooksFixer) Fix(ctx context.Context, issue *CheckResult) error {\n return hooks.Install(\".\", true) // force reinstall\n}\n\nfunc (f *HooksFixer) Description() string {\n return \"Reinstall git hooks\"\n}\n\nfunc init() {\n Register(\u0026HooksFixer{})\n}\n```\n\n**cmd/bd/doctor/fix/sync_branch.go**\n```go\ntype SyncBranchFixer struct{}\n\nfunc (f *SyncBranchFixer) CanFix(name string) bool {\n return name == \"sync-branch-missing\" || name == \"sync-branch-diverged\"\n}\n\nfunc (f *SyncBranchFixer) Fix(ctx context.Context, issue *CheckResult) error {\n // Reset to remote or create branch\n return syncbranch.ResetToRemote(ctx, repoRoot, branch, jsonlPath)\n}\n```\n\n**cmd/bd/doctor/fix/merge_driver.go**\n```go\ntype MergeDriverFixer struct{}\n\nfunc (f *MergeDriverFixer) CanFix(name string) bool {\n return name == \"merge-driver-missing\" || name == \"merge-driver-outdated\"\n}\n\nfunc (f *MergeDriverFixer) Fix(ctx context.Context, issue *CheckResult) error {\n return setupMergeDriver()\n}\n```\n\n### 4. Update doctor run logic\n\n```go\nfunc runDoctor(cmd *cobra.Command, args []string) {\n issues := runAllChecks()\n \n if \\!fixFlag {\n // Existing behavior - just display issues\n displayIssues(issues)\n return\n }\n \n // Collect fixable issues\n var fixable []FixableIssue\n for _, issue := range issues {\n if fixer := fix.GetFixer(issue.CheckName); fixer \\!= nil {\n fixable = append(fixable, FixableIssue{issue, fixer})\n }\n }\n \n if len(fixable) == 0 {\n fmt.Println(\"No automatically fixable issues found\")\n return\n }\n \n // Show what will be fixed\n fmt.Printf(\"Found %d fixable issues:\\n\", len(fixable))\n for i, f := range fixable {\n fmt.Printf(\" %d. 
[%s] %s\\n\", i+1, f.Issue.CheckName, f.Fixer.Description())\n }\n \n // Confirm unless --yes\n if \\!yesFlag {\n fmt.Print(\"\\nProceed with fixes? [Y/n] \")\n // ... read confirmation\n }\n \n // Apply fixes\n for _, f := range fixable {\n fmt.Printf(\"Fixing %s... \", f.Issue.CheckName)\n if err := f.Fixer.Fix(ctx, f.Issue); err \\!= nil {\n fmt.Printf(\"FAILED: %v\\n\", err)\n } else {\n fmt.Println(\"OK\")\n }\n }\n}\n```\n\n## Fixable Checks (Initial Set)\n\n| Check | Fixer | Action |\n|-------|-------|--------|\n| git-hooks | HooksFixer | Reinstall hooks |\n| hooks-outdated | HooksFixer | Update hooks |\n| merge-driver-missing | MergeDriverFixer | Configure merge driver |\n| sync-branch-diverged | SyncBranchFixer | Reset to remote |\n| daemon-version-mismatch | DaemonFixer | Restart daemon |\n\n## Testing\n```bash\n# Test with broken hooks\nrm .git/hooks/pre-commit\nbd doctor --fix --yes\n\n# Verify fix applied\nbd doctor # Should pass now\n```\n\n## Success Criteria\n- --fix flag triggers automatic repair\n- User prompted for confirmation (unless --yes)\n- Clear output showing what was fixed\n- Graceful handling of fix failures\n- At least 5 common issues auto-fixable","status":"closed","priority":2,"issue_type":"feature","assignee":"beads/Doctor","created_at":"2025-11-14T18:17:48.411264-08:00","updated_at":"2025-12-23T13:34:05.646445-08:00","closed_at":"2025-12-23T13:34:05.646445-08:00","close_reason":"Feature already fully implemented. Verified: --fix, --yes, --interactive, --dry-run flags all working. 20+ fixable issues supported. Tests pass.","dependencies":[{"issue_id":"bd-ykd9","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.732505-08:00","created_by":"daemon"}]} -{"id":"bd-ykqu","title":"Add gate timeout tracking and notification","description":"Implement timeout and notification logic for gates.\n\n## Timeout Behavior\n1. Gate created with timeout (e.g., 30m)\n2. Deacon tracks elapsed time during patrol\n3. 
If timeout reached:\n - Notify all waiters: \"Gate timed out\"\n - Close gate with timeout reason\n - Waiter can retry, escalate, or fail gracefully\n\n## Notification\n- Use gt mail send to notify waiters\n- Include gate ID, await type, and reason in message\n- Support multiple waiters notification\n\n## Escalation Path\n- Witness sees stuck worker, nudges them\n- Worker can escalate to human if needed","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:40.1825-08:00","updated_at":"2025-12-23T12:19:44.362527-08:00","closed_at":"2025-12-23T12:19:44.362527-08:00","close_reason":"Moved to gastown: gt-fqcz","dependencies":[{"issue_id":"bd-ykqu","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:53.072862-08:00","created_by":"daemon"},{"issue_id":"bd-ykqu","depends_on_id":"bd-is6m","type":"blocks","created_at":"2025-12-23T11:44:56.595085-08:00","created_by":"daemon"}]} -{"id":"bd-ymqn","title":"Code review: bd mol bond --ref and bd activity (bd-xo1o work)","description":"Review dave's recent commits for bd-xo1o (Dynamic Molecule Bonding):\n\n## Commits to Review\n- ee04b1ea: feat: add dynamic molecule bonding with --ref flag (bd-xo1o.1)\n- be520d90: feat: add bd activity command for real-time state feed (bd-xo1o.3)\n\n## Review Focus\n1. Code quality and correctness\n2. Error handling\n3. Edge cases\n4. Test coverage\n5. 
Documentation\n\n## Deliverables\n- File beads for any issues found\n- Note any concerns or suggestions\n- Verify the implementation matches the bd-xo1o epic requirements","status":"closed","priority":1,"issue_type":"task","assignee":"beads/ace","created_at":"2025-12-23T03:47:55.217363-08:00","updated_at":"2025-12-23T04:11:00.226326-08:00","closed_at":"2025-12-23T04:11:00.226326-08:00","close_reason":"Code review completed by ace"} -{"id":"bd-yqhh","title":"bd list --parent: filter by parent issue","description":"Add --parent flag to bd list to filter issues by parent.\n\nExample:\n```bash\nbd list --parent=gt-h5n --status=open\n```\n\nWould show all open children of gt-h5n.\n\nUseful for:\n- Checking epic progress\n- Finding swarmable work within an epic\n- Molecule step listing","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-23T01:51:26.830952-08:00","updated_at":"2025-12-23T02:10:12.909803-08:00","closed_at":"2025-12-23T02:10:12.909803-08:00","close_reason":"Implemented --parent flag for bd list. 
Filters children by parent issue using parent-child dependencies."} -{"id":"bd-yx22","title":"Merge: bd-d28c","description":"branch: polecat/testcat\ntarget: main\nsource_issue: bd-d28c\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T21:33:15.490412-08:00","updated_at":"2025-12-23T21:36:38.584933-08:00","closed_at":"2025-12-23T21:36:38.584933-08:00","close_reason":"stale - code never pushed to remote"} -{"id":"bd-z3rf","title":"dave Handoff","description":"attached_molecule: bd-ifuw\nattached_at: 2025-12-23T12:49:44Z","status":"pinned","priority":2,"issue_type":"task","created_at":"2025-12-23T04:33:42.874554-08:00","updated_at":"2025-12-23T04:49:44.1246-08:00"} -{"id":"bd-z86n","title":"Code Review: PR #551 - Persist close_reason to issues table","description":"Code review of PR #551 which fixes close_reason persistence bug.\n\n## Summary\nThe PR correctly fixes a bug where close_reason was only stored in the events table, not in the issues.close_reason column. 
This caused `bd show --json` to return empty close_reason.\n\n## What Was Fixed\n- βœ… CloseIssue now updates both close_reason and closed_at\n- βœ… ReOpenIssue clears both close_reason and closed_at\n- βœ… Comprehensive tests added for both storage and CLI layers\n- βœ… Clear documentation in queries.go about dual storage strategy\n\n## Quality Assessment\nβœ… Tests cover both storage layer and CLI JSON output\nβœ… Handles reopen case (clearing close_reason)\nβœ… Good comments explaining dual-storage design\nβœ… No known issues\n\n## Potential Followups\nSee linked issues for suggestions.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:25:06.887069-08:00","updated_at":"2025-12-14T14:25:06.887069-08:00"} +{"id":"bd-eyrh","title":"🀝 HANDOFF: Review remaining beads PRs","description":"## Current State\nJust merged PR #653 (doctor refactor) and added tests to restore coverage.\n\n## Remaining Open PRs to Review\nRun `gh pr list --repo steveyegge/beads` to see current list. As of handoff:\n\n1. #655 - feat: Linear Integration (jblwilliams)\n2. #651 - feat(audit): agent audit trail (dchichkov)\n3. #648 - Stop init creating redundant @AGENTS.md (maphew)\n4. #646 - fix(unix): handle Statfs field types (jordanhubbard)\n5. #645 - feat: /plan-to-beads Claude Code command (petebytes)\n6. 
#642, #641, #640 - sync branch fixes (cpdata)\n\n## Review Checklist\n- Check CI status with `gh pr checks \u003cnum\u003e --repo steveyegge/beads`\n- Verify no .beads/ data leaking (we have a hook now)\n- Review code quality\n- Merge good ones, request changes on problematic ones\n\n## Notes\n- User wants us to be proactive about merging good PRs\n- Can add tests ourselves if coverage drops","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-19T17:44:34.149837-08:00","updated_at":"2025-12-21T13:53:33.613805-08:00","deleted_at":"2025-12-21T13:53:33.613805-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} +{"id":"bd-ykd9","title":"Add bd doctor --fix flag to automatically repair issues","description":"Add bd doctor --fix flag to automatically repair detected issues.\n\n## Current State\n- Doctor checks return issues with \"Fix\" field containing fix instructions\n- No automatic fix execution\n- User must manually follow fix instructions\n\n## Implementation\n\n### 1. Add --fix flag to doctor.go\n```go\n// In cmd/bd/doctor.go init()\ndoctorCmd.Flags().Bool(\"fix\", false, \"Automatically fix detected issues\")\ndoctorCmd.Flags().Bool(\"yes\", false, \"Skip confirmation prompts (use with --fix)\")\n```\n\n### 2. Create fix registry (cmd/bd/doctor/fix/registry.go)\n```go\npackage fix\n\n// Fixer can automatically repair an issue\ntype Fixer interface {\n // CanFix returns true if this fixer handles the given check name\n CanFix(checkName string) bool\n // Fix attempts to repair the issue, returns error if failed\n Fix(ctx context.Context, issue *CheckResult) error\n // Description returns human-readable description of what will be fixed\n Description() string\n}\n\nvar registry []Fixer\n\nfunc Register(f Fixer) {\n registry = append(registry, f)\n}\n\nfunc GetFixer(checkName string) Fixer {\n for _, f := range registry {\n if f.CanFix(checkName) {\n return f\n }\n }\n return nil\n}\n```\n\n### 3. 
Implement fixers (one per file)\n\n**cmd/bd/doctor/fix/hooks.go**\n```go\ntype HooksFixer struct{}\n\nfunc (f *HooksFixer) CanFix(name string) bool {\n return name == \"git-hooks\" || name == \"hooks-outdated\"\n}\n\nfunc (f *HooksFixer) Fix(ctx context.Context, issue *CheckResult) error {\n return hooks.Install(\".\", true) // force reinstall\n}\n\nfunc (f *HooksFixer) Description() string {\n return \"Reinstall git hooks\"\n}\n\nfunc init() {\n Register(\u0026HooksFixer{})\n}\n```\n\n**cmd/bd/doctor/fix/sync_branch.go**\n```go\ntype SyncBranchFixer struct{}\n\nfunc (f *SyncBranchFixer) CanFix(name string) bool {\n return name == \"sync-branch-missing\" || name == \"sync-branch-diverged\"\n}\n\nfunc (f *SyncBranchFixer) Fix(ctx context.Context, issue *CheckResult) error {\n // Reset to remote or create branch\n return syncbranch.ResetToRemote(ctx, repoRoot, branch, jsonlPath)\n}\n```\n\n**cmd/bd/doctor/fix/merge_driver.go**\n```go\ntype MergeDriverFixer struct{}\n\nfunc (f *MergeDriverFixer) CanFix(name string) bool {\n return name == \"merge-driver-missing\" || name == \"merge-driver-outdated\"\n}\n\nfunc (f *MergeDriverFixer) Fix(ctx context.Context, issue *CheckResult) error {\n return setupMergeDriver()\n}\n```\n\n### 4. Update doctor run logic\n\n```go\nfunc runDoctor(cmd *cobra.Command, args []string) {\n issues := runAllChecks()\n \n if \\!fixFlag {\n // Existing behavior - just display issues\n displayIssues(issues)\n return\n }\n \n // Collect fixable issues\n var fixable []FixableIssue\n for _, issue := range issues {\n if fixer := fix.GetFixer(issue.CheckName); fixer \\!= nil {\n fixable = append(fixable, FixableIssue{issue, fixer})\n }\n }\n \n if len(fixable) == 0 {\n fmt.Println(\"No automatically fixable issues found\")\n return\n }\n \n // Show what will be fixed\n fmt.Printf(\"Found %d fixable issues:\\n\", len(fixable))\n for i, f := range fixable {\n fmt.Printf(\" %d. 
[%s] %s\\n\", i+1, f.Issue.CheckName, f.Fixer.Description())\n }\n \n // Confirm unless --yes\n if \\!yesFlag {\n fmt.Print(\"\\nProceed with fixes? [Y/n] \")\n // ... read confirmation\n }\n \n // Apply fixes\n for _, f := range fixable {\n fmt.Printf(\"Fixing %s... \", f.Issue.CheckName)\n if err := f.Fixer.Fix(ctx, f.Issue); err \\!= nil {\n fmt.Printf(\"FAILED: %v\\n\", err)\n } else {\n fmt.Println(\"OK\")\n }\n }\n}\n```\n\n## Fixable Checks (Initial Set)\n\n| Check | Fixer | Action |\n|-------|-------|--------|\n| git-hooks | HooksFixer | Reinstall hooks |\n| hooks-outdated | HooksFixer | Update hooks |\n| merge-driver-missing | MergeDriverFixer | Configure merge driver |\n| sync-branch-diverged | SyncBranchFixer | Reset to remote |\n| daemon-version-mismatch | DaemonFixer | Restart daemon |\n\n## Testing\n```bash\n# Test with broken hooks\nrm .git/hooks/pre-commit\nbd doctor --fix --yes\n\n# Verify fix applied\nbd doctor # Should pass now\n```\n\n## Success Criteria\n- --fix flag triggers automatic repair\n- User prompted for confirmation (unless --yes)\n- Clear output showing what was fixed\n- Graceful handling of fix failures\n- At least 5 common issues auto-fixable","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-14T18:17:48.411264-08:00","updated_at":"2025-12-23T13:34:05.646445-08:00","closed_at":"2025-12-23T13:34:05.646445-08:00","dependencies":[{"issue_id":"bd-ykd9","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.732505-08:00","created_by":"daemon"}]} +{"id":"bd-hlsw.4","title":"Sync branch integrity guards","description":"Track sync branch parent commit. If sync branch was force-pushed, warn user and require confirmation before proceeding. Add option to reset to remote if user accepts rebase. 
Prevents silent corruption from forced pushes.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-14T10:40:20.645402352-07:00","updated_at":"2025-12-14T10:40:20.645402352-07:00","dependencies":[{"issue_id":"bd-hlsw.4","depends_on_id":"bd-hlsw","type":"parent-child","created_at":"2025-12-14T10:40:20.646425761-07:00","created_by":"daemon"}]} +{"id":"bd-svb5","title":"GH#505: Add bd reset/wipe command","description":"Add command to cleanly reset/wipe beads database. User reports painful manual process to start fresh. See GitHub issue #505.","status":"tombstone","priority":2,"issue_type":"feature","created_at":"2025-12-16T01:03:42.160966-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"feature"} +{"id":"bd-4qfb","title":"Improve bd doctor output formatting for better readability","description":"Improve bd doctor output formatting for better readability.\n\n## Current State\nDoctor output is a wall of text with:\n- All checks shown (even passing ones)\n- No visual hierarchy\n- Hard to spot failures in long output\n\n## Target Output\n\n```\n$ bd doctor\n\nbd doctor v0.35.0\n\nSummary: 24 checks passed, 1 warning, 0 errors\n\n─────────────────────────────────────────────────\n⚠ Warnings (1)\n─────────────────────────────────────────────────\n\n[hooks] Git hooks outdated\n Current version: 0.34.0\n Latest version: 0.35.0\n Fix: bd hooks install\n\n─────────────────────────────────────────────────\nβœ“ Passed (24) [use --verbose to show details]\n─────────────────────────────────────────────────\n```\n\nWith --verbose:\n```\n$ bd doctor --verbose\n\nbd doctor v0.35.0\n\nSummary: 24 checks passed, 1 warning, 0 errors\n\n─────────────────────────────────────────────────\n⚠ Warnings (1)\n─────────────────────────────────────────────────\n\n[hooks] Git hooks outdated\n 
...\n\n─────────────────────────────────────────────────\nβœ“ Passed (24)\n─────────────────────────────────────────────────\n\n Database\n βœ“ Database exists\n βœ“ Database readable\n βœ“ Schema up to date\n \n Git Hooks\n βœ“ Pre-commit hook installed\n βœ“ Post-merge hook installed\n ⚠ Hooks version mismatch (see above)\n \n Sync\n βœ“ Sync branch configured\n βœ“ Remote accessible\n ...\n```\n\n## Implementation\n\n### 1. Add check categories (cmd/bd/doctor/categories.go)\n\n```go\ntype Category string\n\nconst (\n CatDatabase Category = \"Database\"\n CatHooks Category = \"Git Hooks\"\n CatSync Category = \"Sync\"\n CatDaemon Category = \"Daemon\"\n CatConfig Category = \"Configuration\"\n CatIntegrity Category = \"Data Integrity\"\n)\n\n// Assign categories to checks\nvar checkCategories = map[string]Category{\n \"database-exists\": CatDatabase,\n \"database-readable\": CatDatabase,\n \"schema-version\": CatDatabase,\n \"pre-commit-hook\": CatHooks,\n \"post-merge-hook\": CatHooks,\n \"hooks-version\": CatHooks,\n \"sync-branch\": CatSync,\n \"remote-access\": CatSync,\n // ... etc\n}\n```\n\n### 2. Add --verbose flag\n\n```go\n// In cmd/bd/doctor.go init()\ndoctorCmd.Flags().BoolP(\"verbose\", \"v\", false, \"Show all checks including passed\")\n```\n\n### 3. 
Create formatter (cmd/bd/doctor/format.go)\n\n```go\ntype Formatter struct {\n verbose bool\n noColor bool\n}\n\nfunc (f *Formatter) Format(results []CheckResult) string {\n var buf strings.Builder\n \n // Count by status\n passed, warnings, errors := countByStatus(results)\n \n // Header\n buf.WriteString(fmt.Sprintf(\"bd doctor v%s\\n\\n\", version.Version))\n buf.WriteString(fmt.Sprintf(\"Summary: %d passed, %d warnings, %d errors\\n\\n\", \n passed, warnings, errors))\n \n // Errors section (always show)\n if errors \u003e 0 {\n f.writeSection(\u0026buf, \"βœ— Errors\", filterByStatus(results, StatusError))\n }\n \n // Warnings section (always show)\n if warnings \u003e 0 {\n f.writeSection(\u0026buf, \"⚠ Warnings\", filterByStatus(results, StatusWarning))\n }\n \n // Passed section (only with --verbose)\n if f.verbose \u0026\u0026 passed \u003e 0 {\n f.writePassedSection(\u0026buf, filterByStatus(results, StatusPassed))\n } else if passed \u003e 0 {\n buf.WriteString(fmt.Sprintf(\"βœ“ Passed (%d) [use --verbose to show details]\\n\", passed))\n }\n \n return buf.String()\n}\n\nfunc (f *Formatter) writeSection(buf *strings.Builder, title string, results []CheckResult) {\n buf.WriteString(\"─────────────────────────────────────────────────\\n\")\n buf.WriteString(title + \"\\n\")\n buf.WriteString(\"─────────────────────────────────────────────────\\n\\n\")\n \n for _, r := range results {\n buf.WriteString(fmt.Sprintf(\"[%s] %s\\n\", r.CheckName, r.Message))\n if r.Details != \"\" {\n buf.WriteString(fmt.Sprintf(\" %s\\n\", r.Details))\n }\n if r.Fix != \"\" {\n buf.WriteString(fmt.Sprintf(\" Fix: %s\\n\", r.Fix))\n }\n buf.WriteString(\"\\n\")\n }\n}\n\nfunc (f *Formatter) writePassedSection(buf *strings.Builder, results []CheckResult) {\n // Group by category\n byCategory := groupByCategory(results)\n \n buf.WriteString(\"─────────────────────────────────────────────────\\n\")\n buf.WriteString(fmt.Sprintf(\"βœ“ Passed (%d)\\n\", len(results)))\n 
buf.WriteString(\"─────────────────────────────────────────────────\\n\\n\")\n \n for _, cat := range categoryOrder {\n if checks, ok := byCategory[cat]; ok {\n buf.WriteString(fmt.Sprintf(\" %s\\n\", cat))\n for _, r := range checks {\n buf.WriteString(fmt.Sprintf(\" βœ“ %s\\n\", r.Message))\n }\n buf.WriteString(\"\\n\")\n }\n }\n}\n```\n\n### 4. Update run function\n\n```go\nfunc runDoctor(cmd *cobra.Command, args []string) {\n verbose, _ := cmd.Flags().GetBool(\"verbose\")\n noColor, _ := cmd.Flags().GetBool(\"no-color\")\n \n results := runAllChecks()\n \n formatter := \u0026Formatter{verbose: verbose, noColor: noColor}\n fmt.Print(formatter.Format(results))\n \n // Exit code based on results\n if hasErrors(results) {\n os.Exit(1)\n }\n}\n```\n\n## Files to Modify\n\n1. **cmd/bd/doctor.go** - Add --verbose flag, update run function\n2. **cmd/bd/doctor/format.go** - New file for formatting logic\n3. **cmd/bd/doctor/categories.go** - New file for check categorization\n4. **cmd/bd/doctor/common.go** - Add Status field to CheckResult if missing\n\n## Testing\n\n```bash\n# Default output (concise)\nbd doctor\n\n# Verbose output\nbd doctor --verbose\n\n# JSON output (should still work)\nbd doctor --json\n```\n\n## Success Criteria\n- Summary line at top with counts\n- Only failures/warnings shown by default\n- --verbose shows grouped passed checks\n- Visual separators between sections\n- Exit code 1 if errors, 0 otherwise","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T09:29:27.557578+11:00","updated_at":"2025-12-23T13:37:18.48781-08:00","closed_at":"2025-12-23T13:37:18.48781-08:00","dependencies":[{"issue_id":"bd-4qfb","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.972517-08:00","created_by":"daemon"}]} +{"id":"bd-7di","title":"worktree: any bd command is slow","description":"in a git worktree any bd command is slow, with a 2-3s pause before any results are shown. 
The identical command with `--no-daemon` is near instant.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-05T15:33:42.924618693-07:00","updated_at":"2025-12-05T15:33:42.924618693-07:00"} +{"id":"bd-2q6d","title":"Beads commands operate on stale database without warning","description":"All beads read operations should validate database is in sync with JSONL before proceeding.\n\n**Current Behavior:**\n- Commands can query/read from stale database\n- Only mutation operations (like 'bd sync') check if JSONL is newer\n- User gets incorrect results without realizing database is out of sync\n\n**Expected Behavior:**\n- All beads commands should have pre-flight check for database freshness\n- If JSONL is newer than database, refuse to operate with error: \"Database out of sync. Run 'bd import' first.\"\n- Same safety check that exists for 'bd sync' should apply to ALL operations\n\n**Impact:**\n- Users make decisions based on incomplete/outdated data\n- Silent failures lead to confusion (e.g., thinking issues don't exist when they do)\n- Similar to running git commands on stale repo without being warned to pull\n\n**Example:**\n- Searched for bd-g9eu issue file: not found\n- Issue exists in .beads/issues.jsonl (in git)\n- Database was stale, but no warning was given\n- Led to incorrect conclusion that issue was already closed/deleted","notes":"## Implementation Complete\n\n**Phase 1: Created staleness check (cmd/bd/staleness.go)**\n- ensureDatabaseFresh() function checks JSONL mtime vs last_import_time\n- Returns error with helpful message when database is stale\n- Auto-skips in daemon mode (daemon has auto-import)\n\n**Phase 2: Added to all read commands**\n- list, show, ready, status, stale, info, duplicates, validate\n- Check runs before database queries in direct mode\n- Daemon mode already protected via checkAndAutoImportIfStale()\n\n**Phase 3: Code Review Findings**\nSee follow-up issues:\n- bd-XXXX: Add warning when staleness check errors\n- 
bd-YYYY: Improve CheckStaleness error handling\n- bd-ZZZZ: Refactor redundant daemon checks (low priority)\n\n**Testing:**\n- Build successful: go build ./cmd/bd\n- Binary works: ./bd --version\n- Ready for manual testing\n\n**Next Steps:**\n1. Test with stale database scenario\n2. Implement review improvements\n3. Close issue when tests pass","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-20T19:33:40.019297-05:00","updated_at":"2025-12-17T23:13:40.535149-08:00","closed_at":"2025-12-17T19:11:12.982639-08:00"} +{"id":"bd-2l03","title":"Implement await type handlers (gh:run, gh:pr, timer, human, mail)","description":"Implement condition checking for each await type.\n\n## Handlers Needed\n- gh:run:\u003cid\u003e - Check GitHub Actions run status via gh CLI\n- gh:pr:\u003cid\u003e - Check PR merged/closed status via gh CLI \n- timer:\u003cduration\u003e - Simple elapsed time check\n- human:\u003cprompt\u003e - Check for human approval (via mail?)\n- mail:\u003cpattern\u003e - Check for mail matching pattern\n\n## Implementation Location\nThis is Deacon logic, so likely in Gas Town (gt) not beads.\n\n## Interface\n```go\ntype AwaitHandler interface {\n Check(awaitID string) (completed bool, result string, err error)\n}\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:38.492837-08:00","updated_at":"2025-12-23T12:19:44.283318-08:00","closed_at":"2025-12-23T12:19:44.283318-08:00","dependencies":[{"issue_id":"bd-2l03","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.990746-08:00","created_by":"daemon"},{"issue_id":"bd-2l03","depends_on_id":"bd-is6m","type":"blocks","created_at":"2025-12-23T11:44:56.510792-08:00","created_by":"daemon"}]} +{"id":"bd-xo1o.4","title":"Parallel step detection in molecules","description":"Detect and flag parallelizable steps in molecules.\n\n## Detection Rules\nSteps can run in parallel when:\n1. No Needs dependencies between them\n2. 
Not in same sequential chain\n3. Across dynamic arms (arm-ace and arm-nux can parallelize)\n\n## Output in bd mol show\n```\npatrol-x7k (mol-witness-patrol)\nβ”œβ”€β”€ inbox-check [completed]\nβ”œβ”€β”€ survey-workers [completed]\nβ”‚ β”œβ”€β”€ arm-ace [parallel group A]\nβ”‚ β”‚ β”œβ”€β”€ capture [can parallelize]\nβ”‚ β”‚ β”œβ”€β”€ assess [needs: capture]\nβ”‚ β”‚ └── execute [needs: assess]\nβ”‚ └── arm-nux [parallel group A]\nβ”‚ β”œβ”€β”€ capture [can parallelize]\nβ”‚ └── ...\nβ”œβ”€β”€ aggregate [gate: waits for all-children]\n```\n\n## Flags\n- `bd mol show \u003cid\u003e --parallel` - Highlight parallel opportunities\n- `bd ready --mol \u003cid\u003e` - List all steps that can run now\n\n## Future: Parallel Execution Hints\nFor agents using Task tool subagents:\n- Identify independent arms that can run simultaneously\n- Suggest parallelization strategy","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T02:33:17.660368-08:00","updated_at":"2025-12-23T03:56:39.653982-08:00","closed_at":"2025-12-23T03:56:39.653982-08:00","dependencies":[{"issue_id":"bd-xo1o.4","depends_on_id":"bd-xo1o","type":"parent-child","created_at":"2025-12-23T02:33:17.662232-08:00","created_by":"daemon"}]} +{"id":"bd-qqc.2","title":"Add {{version}} to versionChanges in info.go","description":"Add new entry at the TOP of versionChanges array in cmd/bd/info.go:\n\n```go\n{\n Version: \"{{version}}\",\n Date: \"{{date}}\",\n Changes: []string{\n // Add notable changes here\n },\n},\n```\n\nCopy changes from CHANGELOG.md [Unreleased] section.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:59:27.032117-08:00","updated_at":"2025-12-18T23:34:18.631996-08:00","closed_at":"2025-12-18T22:41:41.836137-08:00","dependencies":[{"issue_id":"bd-qqc.2","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T12:59:27.032746-08:00","created_by":"stevey"}]} +{"id":"bd-pdr2","title":"Consider backwards compatibility for ready() and 
list() return type change","description":"PR #481 changed the return types of `ready()` and `list()` from `list[Issue]` to `list[IssueMinimal] | CompactedResult`. This is a breaking change for MCP clients.\n\n## Impact Assessment\nBreaking change affects:\n- Any MCP client expecting `list[Issue]` from ready()\n- Any MCP client expecting `list[Issue]` from list()\n- Client code that accesses full Issue fields (description, design, acceptance_criteria, timestamps, dependencies, dependents)\n\n## Current Behavior\n- ready() returns `list[IssueMinimal] | CompactedResult`\n- list() returns `list[IssueMinimal] | CompactedResult`\n- show() still returns full `Issue` (good)\n\n## Considerations\n**Pros of current approach:**\n- Forces clients to use show() for full details (good for context efficiency)\n- Simple mental model (always use show for full data)\n- Documentation warns about this\n\n**Cons:**\n- Clients expecting list[Issue] will break\n- No graceful degradation option\n- No migration period\n\n## Potential Solutions\n1. Add optional parameter `full_details=false` to ready/list (would increase payload)\n2. Create separate tools: ready_minimal/list_minimal + ready_full/list_full\n3. Accept breaking change and document upgrade path (current approach)\n4. 
Version the MCP server and document migration guide\n\n## Recommendation\nCurrent approach (solution 3) is reasonable if:\n- Changelog clearly documents the breaking change\n- Migration guide provided to clients\n- Error handling is graceful for clients expecting specific fields","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:24:56.460465-08:00","updated_at":"2025-12-14T14:24:56.460465-08:00","dependencies":[{"issue_id":"bd-pdr2","depends_on_id":"bd-otf4","type":"discovered-from","created_at":"2025-12-14T14:24:56.461959-08:00","created_by":"stevey"}]} +{"id":"bd-kqo1","title":"Show pin indicator in bd list output","description":"Add a visual indicator (e.g., pin emoji or [P] marker) for pinned issues in bd list output so users can easily identify them.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-18T23:33:47.402549-08:00","updated_at":"2025-12-21T11:30:27.272768-08:00","closed_at":"2025-12-21T11:30:27.272768-08:00","dependencies":[{"issue_id":"bd-kqo1","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.771791-08:00","created_by":"daemon"},{"issue_id":"bd-kqo1","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.985271-08:00","created_by":"daemon"}]} +{"id":"bd-aydr","title":"Add bd reset command for clean slate restart","description":"Implement a `bd reset` command to reset beads to a clean starting state.\n\n## Context\nGitHub issue #479 - users sometimes get beads into an invalid state after updates, and there's no clean way to start fresh. 
The git backup/restore mechanism that protects against accidental deletion also makes it hard to intentionally reset.\n\n## Design\n\n### Command Interface\n```\nbd reset [--hard] [--force] [--backup] [--dry-run] [--no-init]\n```\n\n| Flag | Effect |\n|------|--------|\n| `--hard` | Also remove from git index and commit |\n| `--force` | Skip confirmation prompt |\n| `--backup` | Create `.beads-backup-{timestamp}/` first |\n| `--dry-run` | Preview what would happen |\n| `--no-init` | Don't re-initialize after clearing |\n\n### Reset Levels\n1. **Soft Reset (default)** - Kill daemons, clear .beads/, re-init. Git history unchanged.\n2. **Hard Reset (`--hard`)** - Also git rm and commit the removal, then commit fresh state.\n\n### Implementation Flow\n1. Validate .beads/ exists\n2. If not --force: show impact summary, prompt confirmation\n3. If --backup: copy .beads/ to .beads-backup-{timestamp}/\n4. Kill daemons\n5. If --hard: git rm + commit\n6. rm -rf .beads/*\n7. If not --no-init: bd init (and git add+commit if --hard)\n8. 
Print summary\n\n### Safety Mechanisms\n- Confirmation prompt (skip with --force)\n- Impact summary (issue/tombstone counts)\n- Backup option\n- Dry-run preview\n- Git dirty check warning\n\n### Code Structure\n- `cmd/bd/reset.go` - CLI command\n- `internal/reset/` - Core logic package","status":"closed","priority":2,"issue_type":"epic","created_at":"2025-12-13T08:44:01.38379+11:00","updated_at":"2025-12-13T06:24:29.561294-08:00","closed_at":"2025-12-13T10:18:19.965287+11:00"} +{"id":"bd-u0sb","title":"Merge: bd-uqfn","description":"branch: polecat/cheedo\ntarget: main\nsource_issue: bd-uqfn\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-20T01:11:52.033964-08:00","updated_at":"2025-12-20T23:17:26.994875-08:00","closed_at":"2025-12-20T23:17:26.994875-08:00"} +{"id":"bd-icfe","title":"gt spawn/crew setup should create .beads/redirect for worktrees","description":"Crew clones and polecats need a .beads/redirect file pointing to the shared beads database (../../mayor/rig/.beads). Currently:\n\n- redirect files can get deleted by git clean\n- not auto-created during gt spawn or worktree setup\n- missing redirects cause 'no beads database found' errors\n\nFound missing in: gastown/joe, beads/zoey (after git clean)\n\nFix options:\n1. gt spawn creates redirect during worktree setup\n2. gt prime regenerates missing redirects\n3. bd commands auto-detect worktree and find shared beads\n\nThis should be standard Gas Town rig configuration.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T01:30:26.115872-08:00","updated_at":"2025-12-21T17:51:25.740811-08:00","closed_at":"2025-12-21T17:51:25.740811-08:00"} +{"id":"bd-n386","title":"Improve test coverage for internal/daemon (27.3% β†’ 60%)","description":"The daemon package has only 27.3% test coverage. 
The daemon is critical for background operations and reliability.\n\nKey areas needing tests:\n- Daemon autostart logic\n- Socket handling\n- PID file management\n- Health checks\n\nCurrent coverage: 27.3%\nTarget coverage: 60%","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T20:43:00.895238-08:00","updated_at":"2025-12-23T22:31:55.99167-08:00","closed_at":"2025-12-23T20:41:04.542935-08:00"} +{"id":"bd-hyp6","title":"Gate: timer:1m","status":"open","priority":1,"issue_type":"gate","created_at":"2025-12-23T13:41:18.201653-08:00","updated_at":"2025-12-23T13:41:18.201653-08:00"} +{"id":"bd-77gm","title":"Import reports misleading '0 created, 0 updated' when actually importing all issues","description":"When running 'bd import' on a fresh database (no existing issues), the command reports 'Import complete: 0 created, 0 updated' even though it successfully imported all issues from the JSONL file.\n\n**Steps to reproduce:**\n1. Delete .beads/beads.db\n2. Run: bd import .beads/issues.jsonl\n3. Observe output: 'Import complete: 0 created, 0 updated'\n4. Run: bd list\n5. Confirm: All issues are actually present in the database\n\n**Expected behavior:**\nReport the actual number of issues imported, e.g., 'Import complete: 523 created, 0 updated'\n\n**Actual behavior:**\n'Import complete: 0 created, 0 updated' (misleading - makes user think import failed)\n\n**Impact:**\n- Users think import failed when it succeeded\n- Confusing during database sync operations (e.g., after git pull)\n- Makes debugging harder (can't tell if import actually worked)\n\n**Context:**\nDiscovered during VC session when syncing database after git pull. 
The misleading message caused confusion about whether the database was properly synced with the canonical JSONL file.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-11-09T16:20:13.191156-08:00","updated_at":"2025-12-21T21:13:27.057292-08:00","closed_at":"2025-12-21T21:13:27.057292-08:00"} +{"id":"bd-haze","title":"Fix beads-9yc: pinned column missing from schema. gt mail...","description":"Fix beads-9yc: pinned column missing from schema. gt mail send fails because some beads DBs lack the pinned column. Add migration to ensure it exists.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T15:05:33.394801-08:00","updated_at":"2025-12-21T15:26:35.171757-08:00","closed_at":"2025-12-21T15:26:35.171757-08:00"} +{"id":"bd-ykqu","title":"Add gate timeout tracking and notification","description":"Implement timeout and notification logic for gates.\n\n## Timeout Behavior\n1. Gate created with timeout (e.g., 30m)\n2. Deacon tracks elapsed time during patrol\n3. 
If timeout reached:\n - Notify all waiters: \"Gate timed out\"\n - Close gate with timeout reason\n - Waiter can retry, escalate, or fail gracefully\n\n## Notification\n- Use gt mail send to notify waiters\n- Include gate ID, await type, and reason in message\n- Support multiple waiters notification\n\n## Escalation Path\n- Witness sees stuck worker, nudges them\n- Worker can escalate to human if needed","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:40.1825-08:00","updated_at":"2025-12-23T12:19:44.362527-08:00","closed_at":"2025-12-23T12:19:44.362527-08:00","dependencies":[{"issue_id":"bd-ykqu","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:53.072862-08:00","created_by":"daemon"},{"issue_id":"bd-ykqu","depends_on_id":"bd-is6m","type":"blocks","created_at":"2025-12-23T11:44:56.595085-08:00","created_by":"daemon"}]} +{"id":"bd-0w5","title":"Fix update-hooks verification in version-bump.yaml","description":"The update-hooks task verification command at version-bump.yaml:358 always succeeds due to '|| echo ...' fallback. Remove the fallback so verification actually fails when hooks aren't installed.","status":"closed","priority":3,"issue_type":"bug","created_at":"2025-12-17T22:23:06.55467-08:00","updated_at":"2025-12-17T22:34:07.290409-08:00","closed_at":"2025-12-17T22:34:07.290409-08:00"} +{"id":"bd-t4sb","title":"Work on beads-d8h: Fix prefix mismatch false positive wit...","description":"Work on beads-d8h: Fix prefix mismatch false positive with multi-hyphen prefixes like 'asianops-audit-' (GH#422). 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:56:19.545069-08:00","updated_at":"2025-12-19T23:28:32.429127-08:00","closed_at":"2025-12-19T23:21:45.471711-08:00"} +{"id":"bd-h27p","title":"Merge: bd-g4b4","description":"branch: polecat/Hooker\ntarget: main\nsource_issue: bd-g4b4\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T13:38:50.707153-08:00","updated_at":"2025-12-23T19:12:08.357806-08:00","closed_at":"2025-12-23T19:12:08.357806-08:00"} +{"id":"bd-n5ug","title":"Merge: bd-au0.7","description":"branch: polecat/dementus\ntarget: main\nsource_issue: bd-au0.7\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:43:36.024341-08:00","updated_at":"2025-12-23T21:21:57.692158-08:00","closed_at":"2025-12-23T21:21:57.692158-08:00"} +{"id":"bd-u2sc.3","title":"Split large cmd/bd files into logical modules","description":"Split large cmd/bd files into logical modules for maintainability.\n\n## Current State\n| File | Lines | Status |\n|------|-------|--------|\n| sync.go | 2203 | Too large |\n| init.go | 1742 | Too large |\n| show.go | 1419 | Too large |\n| compact.go | 1190 | Borderline |\n\n## Proposed Splits\n\n### 1. sync.go (2203 lines) β†’ 4 files\n\n**sync.go** (~500 lines) - Main command and coordination\n```go\nvar syncCmd = \u0026cobra.Command{...}\nfunc init() {...}\nfunc doSync(...) 
{...}\n```\n\n**sync_export.go** (~400 lines) - Export operations\n```go\nfunc exportToJSONL(...)\nfunc markDirtyAndScheduleFlush(...)\nfunc flushPendingChanges(...)\n```\n\n**sync_import.go** (~500 lines) - Import operations\n```go\nfunc importFromJSONL(...)\nfunc handleImportConflicts(...)\nfunc mergeImportedIssues(...)\n```\n\n**sync_branch.go** (~400 lines) - Git branch operations\n```go\nfunc commitToSyncBranch(...)\nfunc pullFromSyncBranch(...)\nfunc handleSyncBranchDivergence(...)\n```\n\n### 2. init.go (1742 lines) β†’ 3 files\n\n**init.go** (~400 lines) - Main init command\n```go\nvar initCmd = \u0026cobra.Command{...}\nfunc runInit(...)\nfunc determinePrefix(...)\n```\n\n**init_wizard.go** (~500 lines) - Interactive setup\n```go\nfunc runContributorWizard(...)\nfunc runTeamWizard(...)\nfunc promptForConfig(...)\n```\n\n**init_hooks.go** (~400 lines) - Git hooks setup\n```go\nfunc installGitHooks(...)\nfunc configureAutosync(...)\nfunc setupMergeDriver(...)\n```\n\n### 3. show.go (1419 lines) β†’ 3 files\n\n**show.go** (~400 lines) - Main show command\n```go\nvar showCmd = \u0026cobra.Command{...}\nfunc showIssue(...)\nfunc showMultipleIssues(...)\n```\n\n**show_format.go** (~400 lines) - Output formatting\n```go\nfunc formatIssueText(...)\nfunc formatIssueMarkdown(...)\nfunc formatDependencyTree(...)\n```\n\n**show_threads.go** (~300 lines) - Thread/conversation display\n```go\nfunc showThread(...)\nfunc formatThreadMessages(...)\n```\n\n### 4. compact.go (1190 lines) β†’ 2 files\n\n**compact.go** (~600 lines) - Main compact command\n```go\nvar compactCmd = \u0026cobra.Command{...}\nfunc runCompact(...)\n```\n\n**compact_tiers.go** (~400 lines) - Tier-specific logic\n```go\nfunc compactTier1(...)\nfunc compactTier2(...)\nfunc squashWisps(...)\n```\n\n## Implementation Steps\n\n1. **Start with sync.go** (largest file)\n2. **Create new files** with same package declaration\n3. **Move functions** maintaining related code together\n4. 
**Update any file-local variables** that need to be shared\n5. **Run tests** after each split:\n ```bash\n go test -short ./cmd/bd/...\n ```\n\n## File Organization After Split\n```\ncmd/bd/\nβ”œβ”€β”€ sync.go (~500 lines)\nβ”œβ”€β”€ sync_export.go (~400 lines)\nβ”œβ”€β”€ sync_import.go (~500 lines)\nβ”œβ”€β”€ sync_branch.go (~400 lines)\nβ”œβ”€β”€ init.go (~400 lines)\nβ”œβ”€β”€ init_wizard.go (~500 lines)\nβ”œβ”€β”€ init_hooks.go (~400 lines)\nβ”œβ”€β”€ show.go (~400 lines)\nβ”œβ”€β”€ show_format.go (~400 lines)\nβ”œβ”€β”€ show_threads.go (~300 lines)\nβ”œβ”€β”€ compact.go (~600 lines)\n└── compact_tiers.go (~400 lines)\n```\n\n## Success Criteria\n- No file \u003e 600 lines\n- All tests pass\n- Related code stays together\n- Clear file naming indicates purpose","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T14:27:06.146343-08:00","updated_at":"2025-12-23T14:14:39.023606-08:00","closed_at":"2025-12-23T14:14:39.023606-08:00","dependencies":[{"issue_id":"bd-u2sc.3","depends_on_id":"bd-u2sc","type":"parent-child","created_at":"2025-12-22T14:27:06.146704-08:00","created_by":"daemon"},{"issue_id":"bd-u2sc.3","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.583136-08:00","created_by":"daemon"}]} +{"id":"bd-f7p1","title":"Add tests for mol spawn --attach","description":"Code review (bd-obep) found no tests for the spawn --attach functionality.\n\n**Test cases needed:**\n1. Basic attach - spawn proto with one --attach\n2. Multiple attachments - spawn with --attach A --attach B\n3. Attach types - verify sequential vs parallel bonding\n4. Error case: attaching non-proto (missing template label)\n5. Variable aggregation - vars from primary + attachments combined\n6. 
Dry-run output includes attachment info\n\n**Implementation notes:**\n- Tests should use in-memory storage\n- Create test protos, spawn with attachments, verify dependency structure\n- Check that sequential uses 'blocks' type, parallel uses 'parent-child'","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T10:58:16.766461-08:00","updated_at":"2025-12-21T21:33:12.136215-08:00","closed_at":"2025-12-21T21:33:12.136215-08:00","dependencies":[{"issue_id":"bd-f7p1","depends_on_id":"bd-obep","type":"discovered-from","created_at":"2025-12-21T10:58:16.767616-08:00","created_by":"daemon"}]} +{"id":"bd-l13p","title":"Add GetWorkerStatus RPC endpoint","description":"New RPC endpoint to get all workers and their current molecule/step in one call. Returns: assignee, moleculeID, moleculeTitle, currentStep, totalSteps, stepTitle, lastActivity, status. Enables activity feed TUI to show worker state without multiple round trips.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-23T16:26:36.248654-08:00","updated_at":"2025-12-23T16:40:59.772138-08:00","closed_at":"2025-12-23T16:40:59.772138-08:00"} +{"id":"bd-6s61","title":"Version Bump: {{version}}","description":"Release checklist for version {{version}}. 
This molecule ensures all release steps are completed properly.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-19T22:55:42.487701-08:00","updated_at":"2025-12-20T17:59:26.261233-08:00","closed_at":"2025-12-20T01:18:47.905306-08:00"} +{"id":"bd-u8g4","title":"Merge: bd-dxtc","description":"branch: polecat/cheedo\ntarget: main\nsource_issue: bd-dxtc\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:42:17.222567-08:00","updated_at":"2025-12-23T21:21:57.696047-08:00","closed_at":"2025-12-23T21:21:57.696047-08:00"} +{"id":"bd-6fe4622f","title":"Remove unreachable utility functions","description":"Several small utility functions are unreachable:\n\nFiles to clean:\n1. `internal/storage/sqlite/hash.go` - `computeIssueContentHash` (line 17)\n - Check if entire file can be deleted if only contains this function\n\n2. `internal/config/config.go` - `FileUsed` (line 151)\n - Delete unused config helper\n\n3. `cmd/bd/git_sync_test.go` - `verifyIssueOpen` (line 300)\n - Delete dead test helper\n\n4. 
`internal/compact/haiku.go` - `HaikuClient.SummarizeTier2` (line 81)\n - Tier 2 summarization not implemented\n - Options: implement feature OR delete method\n\nImpact: Removes 50-100 LOC depending on decisions","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T16:20:02.434573-07:00","updated_at":"2025-12-17T22:58:34.563993-08:00","closed_at":"2025-12-17T22:58:34.563993-08:00"} +{"id":"bd-2oo.2","title":"Remove redundant edge fields from Issue struct","description":"Remove from Issue struct:\n- RepliesTo -\u003e dependency with type replies-to\n- RelatesTo -\u003e dependencies with type relates-to \n- DuplicateOf -\u003e dependency with type duplicates\n- SupersededBy -\u003e dependency with type supersedes\n\nKeep: Sender, Ephemeral (these are attributes, not relationships)","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:00.891206-08:00","updated_at":"2025-12-18T02:49:10.584381-08:00","closed_at":"2025-12-18T02:49:10.584381-08:00","dependencies":[{"issue_id":"bd-2oo.2","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:00.891655-08:00","created_by":"daemon"}]} +{"id":"bd-m164","title":"Add 0.33.2 to versionChanges in info.go","description":"Add new entry at the TOP of versionChanges array in cmd/bd/info.go:\n\n```go\n{\n Version: \"0.33.2\",\n Date: \"2025-12-21\",\n Changes: []string{\n // Add notable changes here\n },\n},\n```\n\nCopy changes from CHANGELOG.md [Unreleased] section.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761218-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-cb64c226.1","title":"Performance Validation","description":"Confirm no performance regression from cache 
removal","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T10:50:15.126019-07:00","updated_at":"2025-12-17T23:18:29.108883-08:00","deleted_at":"2025-12-17T23:18:29.108883-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-68e4","title":"doctor --fix should export when DB has more issues than JSONL","description":"When 'bd doctor' detects a count mismatch (DB has more issues than JSONL), it currently recommends 'bd sync --import-only', which imports JSONL into DB. But JSONL is the source of truth, not the DB.\n\n**Current behavior:**\n- Doctor detects: DB has 355 issues, JSONL has 292\n- Recommends: 'bd sync --import-only' \n- User runs it: Returns '0 created, 0 updated' (no-op, because JSONL hasn't changed)\n- User is stuck\n\n**Root cause:**\nThe doctor fix is one-directional (JSONLβ†’DB) when it should be bidirectional. If DB has MORE issues, they haven't been exported yet - the fix should be 'bd export' (DBβ†’JSONL), not import.\n\n**Desired fix:**\nIn fix.DBJSONLSync(), detect which has more data:\n- If DB \u003e JSONL: Run 'bd export' to sync JSONL (since DB is the working copy)\n- If JSONL \u003e DB: Run 'bd sync --import-only' to import (JSONL is source of truth)\n- If equal but timestamps differ: Detect based on file mtime\n\nThis makes 'bd doctor --fix' actually fix the problem instead of being a no-op.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T11:17:20.994319182-07:00","updated_at":"2025-12-21T11:23:24.38523731-07:00","closed_at":"2025-12-21T11:23:24.38523731-07:00"} +{"id":"bd-aydr.4","title":"Implement CLI command (cmd/bd/reset.go)","description":"Wire up the reset command with Cobra CLI.\n\n## Responsibilities\n- Define command and all flags\n- User confirmation prompt (unless --force)\n- Display impact summary before confirmation\n- Colored output and progress indicators\n- Call core reset package\n- Handle errors with user-friendly messages\n- 
Register command with rootCmd in init()\n\n## Flags\n```go\n--hard bool \"Also remove from git and commit\"\n--force bool \"Skip confirmation prompt\"\n--backup bool \"Create backup before reset\"\n--dry-run bool \"Preview what would happen\"\n--skip-init bool \"Do not re-initialize after reset\"\n--verbose bool \"Show detailed progress output\"\n```\n\n## Output Format\n```\n⚠️ This will reset beads to a clean state.\n\nWill be deleted:\n β€’ 47 issues (23 open, 24 closed)\n β€’ 12 tombstones\n\nContinue? [y/N] y\n\nβ†’ Stopping daemons... βœ“\nβ†’ Removing .beads/... βœ“\nβ†’ Initializing fresh... βœ“\n\nβœ“ Reset complete. Run 'bd onboard' to set up hooks.\n```\n\n## Implementation Notes\n- Confirmation logic lives HERE, not in core package\n- Use color package (github.com/fatih/color) for output\n- Follow patterns from other commands (init.go, doctor.go)\n- Add to rootCmd in init() function\n\n## File Location\n`cmd/bd/reset.go`","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:54.318854+11:00","updated_at":"2025-12-13T10:13:32.611434+11:00","closed_at":"2025-12-13T09:59:41.72638+11:00","dependencies":[{"issue_id":"bd-aydr.4","depends_on_id":"bd-aydr.3","type":"blocks","created_at":"2025-12-13T08:45:09.883658+11:00","created_by":"daemon"},{"issue_id":"bd-aydr.4","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:54.319237+11:00","created_by":"daemon"},{"issue_id":"bd-aydr.4","depends_on_id":"bd-aydr.1","type":"blocks","created_at":"2025-12-13T08:45:09.762138+11:00","created_by":"daemon"},{"issue_id":"bd-aydr.4","depends_on_id":"bd-aydr.2","type":"blocks","created_at":"2025-12-13T08:45:09.817854+11:00","created_by":"daemon"}]} +{"id":"bd-r06v","title":"Merge: bd-phtv","description":"branch: polecat/Pinner\ntarget: main\nsource_issue: bd-phtv\nrig: 
beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:48:16.853715-08:00","updated_at":"2025-12-23T19:12:08.342414-08:00","closed_at":"2025-12-23T19:12:08.342414-08:00"} +{"id":"bd-qqc.10","title":"Upgrade local Homebrew installation","description":"Upgrade bd via Homebrew:\n\n```bash\nbrew update\nbrew upgrade bd\n/opt/homebrew/bin/bd version # Verify shows {{version}}\n```\n\n**Note**: If `brew upgrade` fails with CLT (Command Line Tools) errors on bleeding-edge macOS versions (e.g., Tahoe 26.x):\n\n```bash\n# Reinstall CLT\nsudo rm -rf /Library/Developer/CommandLineTools\nxcode-select --install\n# Wait for GUI installer to complete, then retry brew upgrade\n```\n\nAlternative: Skip Homebrew and use go install:\n```bash\ngo install ./cmd/bd\n~/go/bin/bd version # Verify shows {{version}}\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:37.60241-08:00","updated_at":"2025-12-22T12:32:23.678806-08:00","closed_at":"2025-12-18T22:52:00.331429-08:00","dependencies":[{"issue_id":"bd-qqc.10","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:42:37.603893-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.10","depends_on_id":"bd-qqc.9","type":"blocks","created_at":"2025-12-18T22:43:21.458817-08:00","created_by":"daemon"}]} +{"id":"bd-f5cc","title":"Thread Test","description":"Testing the thread feature","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:21:01.244501-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-f5cc","depends_on_id":"bd-x36g","type":"supersedes","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} +{"id":"bd-sumr","title":"Merge: bd-t4sb","description":"branch: polecat/capable\ntarget: main\nsource_issue: bd-t4sb\nrig: 
beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:22:21.343724-08:00","updated_at":"2025-12-20T23:17:26.997992-08:00","closed_at":"2025-12-20T23:17:26.997992-08:00"} +{"id":"bd-89f89fc0","title":"Remove unreachable RPC methods","description":"Several RPC server and client methods are unreachable and should be removed:\n\nServer methods (internal/rpc/server.go):\n- `Server.GetLastImportTime` (line 2116)\n- `Server.SetLastImportTime` (line 2123)\n- `Server.findJSONLPath` (line 2255)\n\nClient methods (internal/rpc/client.go):\n- `Client.Import` (line 311) - RPC import not used (daemon uses autoimport)\n\nEvidence:\n```bash\ngo run golang.org/x/tools/cmd/deadcode@latest -test ./...\n```\n\nImpact: Removes ~80 LOC of unused RPC code","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T16:20:02.432202-07:00","updated_at":"2025-12-17T22:58:34.564401-08:00","closed_at":"2025-12-17T22:58:34.564401-08:00"} +{"id":"bd-3x9o","title":"Merge: bd-by0d","description":"branch: polecat/furiosa\ntarget: main\nsource_issue: bd-by0d\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:21:26.817906-08:00","updated_at":"2025-12-20T23:17:26.998785-08:00","closed_at":"2025-12-20T23:17:26.998785-08:00"} +{"id":"bd-t3en","title":"Merge: bd-d28c","description":"branch: polecat/capable\ntarget: main\nsource_issue: bd-d28c\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:43:16.997802-08:00","updated_at":"2025-12-23T21:21:57.694201-08:00","closed_at":"2025-12-23T21:21:57.694201-08:00"} +{"id":"bd-c83r","title":"Prevent multiple daemons from running on the same repo","description":"Multiple bd daemons running on the same repo clone causes race conditions and data corruption risks.\n\n**Problem:**\n- Nothing prevents spawning multiple daemons for the same repository\n- Multiple daemons watching the same files can conflict during 
sync operations\n- Observed: 4 daemons running simultaneously caused sync race condition\n\n**Solution:**\nImplement daemon singleton enforcement per repo:\n1. Use a lock file (e.g., .beads/.daemon.lock) with PID\n2. On daemon start, check if lock exists and process is alive\n3. If stale lock (dead PID), clean up and acquire lock\n4. If active daemon exists, either:\n - Exit with message 'daemon already running (PID xxx)'\n - Or offer --replace flag to kill existing and take over\n5. Release lock on graceful shutdown\n\n**Edge cases to handle:**\n- Daemon crashes without releasing lock (stale PID detection)\n- Multiple repos in same directory tree (each repo gets own lock)\n- Race between two daemons starting simultaneously (atomic lock acquisition)","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-13T06:37:23.377131-08:00","updated_at":"2025-12-16T01:14:49.50347-08:00","closed_at":"2025-12-14T17:34:14.990077-08:00"} +{"id":"bd-rgd7","title":"Update CHANGELOG.md with release notes","description":"Add release notes for 0.32.1: MCP output control params (#667), pin field fix","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:16.031879-08:00","updated_at":"2025-12-20T21:54:07.982164-08:00","closed_at":"2025-12-20T21:54:07.982164-08:00","dependencies":[{"issue_id":"bd-rgd7","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:16.034926-08:00","created_by":"daemon"}]} +{"id":"bd-elqd","title":"Systematic bd sync stability investigation","description":"## Context\n\nbd sync has chronic instability issues that have persisted since inception:\n- issues.jsonl is always dirty after push\n- bd sync often creates messes requiring manual cleanup\n- Problems escalating despite accumulated bug fixes\n- Workarounds are getting increasingly draconian\n\n## Goal\n\nSystematically observe and diagnose bd sync failures rather than applying band-aid fixes.\n\n## Approach\n\n1. 
Start fresh session with latest binary (all fixes applied)\n2. Run bd sync and carefully observe what happens\n3. Document exact sequence of events when things go wrong\n4. File specific issues for each discrete problem identified\n5. Track the root causes, not just symptoms\n\n## Test Environment\n\n- Fresh clone or clean state\n- Latest bd binary with all bug fixes\n- Monitor both local and remote JSONL state\n- Check for timing issues, race conditions, merge conflicts\n\n## Success Criteria\n\n- Identify root causes of sync instability\n- Create actionable issues for each problem\n- Eventually achieve stable bd sync (no manual intervention needed)","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T22:57:25.35289-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-ipva","title":"Update go install bd to 0.33.2","description":"Rebuild and install bd to ~/go/bin:\n\n```bash\ngo install ./cmd/bd\n~/go/bin/bd version # Verify shows 0.33.2\n```\n\nNote: If ~/go/bin is in PATH before /opt/homebrew/bin, this is the version that runs by default.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760715-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-ohil","title":"refinery Handoff","description":"attached_molecule: bd-ndye\nattached_at: 2025-12-23T12:35:07Z","status":"pinned","priority":2,"issue_type":"task","created_at":"2025-12-23T04:35:07.488226-08:00","updated_at":"2025-12-23T04:35:07.785858-08:00"} +{"id":"bd-746","title":"Fix resolvePartialID stub in workflow.go","description":"The resolvePartialID function at workflow.go:921-925 is a stub that just returns the ID unchanged. 
Should use utils.ResolvePartialID for proper partial ID resolution in direct mode (non-daemon).","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-17T22:22:57.586917-08:00","updated_at":"2025-12-17T22:34:07.270168-08:00","closed_at":"2025-12-17T22:34:07.270168-08:00"} +{"id":"bd-ucgz","title":"Migration invariants should exclude external dependencies from orphan check","description":"## Summary\n\nThe `checkForeignKeys` function in `migration_invariants.go` flags external dependencies as orphaned because they dont exist in the local issues table.\n\n## Location\n\n`internal/storage/sqlite/migration_invariants.go` around line 150-162\n\n## Current Code (buggy)\n\n```go\n// Check for orphaned dependencies\nvar orphanCount int\nerr = db.QueryRowContext(ctx, \\`\n SELECT COUNT(*)\n FROM dependencies d\n WHERE NOT EXISTS (SELECT 1 FROM issues WHERE id = d.depends_on_id)\n\\`).Scan(\u0026orphanCount)\n```\n\n## Fix\n\nExclude external references (format: `external:\u003cproject\u003e:\u003ccapability\u003e`):\n\n```go\n// Check for orphaned dependencies (excluding external refs)\nvar orphanCount int\nerr = db.QueryRowContext(ctx, \\`\n SELECT COUNT(*)\n FROM dependencies d\n WHERE NOT EXISTS (SELECT 1 FROM issues WHERE id = d.depends_on_id)\n AND d.depends_on_id NOT LIKE 'external:%'\n\\`).Scan(\u0026orphanCount)\n```\n\n## Reproduction\n\n```bash\n# Add external dependency\nbd dep add bd-xxx external:other-project:some-capability\n\n# Try to sync - fails\nbd sync\n# Error: found 1 orphaned dependencies\n```\n\n## Files to Modify\n\n1. 
**internal/storage/sqlite/migration_invariants.go** - Add WHERE clause\n\n## Testing\n\n```bash\n# Create issue with external dep\nbd create \"Test external deps\" -t task\nbd dep add bd-xxx external:beads:release-workflow\n\n# Sync should succeed\nbd sync\n\n# Verify dep exists\nbd show bd-xxx --json | jq .dependencies\n```\n\n## Success Criteria\n- External dependencies don't trigger orphan check\n- `bd sync` succeeds with external deps\n- Regular orphan detection still works for internal deps","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-23T12:37:08.99387-08:00","updated_at":"2025-12-23T12:42:03.722691-08:00","closed_at":"2025-12-23T12:42:03.722691-08:00"} +{"id":"bd-d28c","title":"Test createTombstone and deleteIssue wrappers","description":"Add tests for the createTombstone and deleteIssue wrapper functions in cmd/bd/delete.go.\n\n## Functions under test\n- createTombstone (cmd/bd/delete.go:335) - Wrapper around SQLite CreateTombstone\n- deleteIssue (cmd/bd/delete.go:349) - Wrapper around SQLite DeleteIssue\n\n## Test scenarios for createTombstone\n1. Successful tombstone creation\n2. Tombstone with reason and actor tracking\n3. Error when issue doesn't exist\n4. Verify tombstone status set correctly\n5. Verify audit trail recorded\n6. Rollback/error handling\n\n## Test scenarios for deleteIssue\n1. Successful issue deletion\n2. Error on non-existent issue\n3. Verify issue removed from database\n4. 
Error handling when storage backend doesn't support delete\n\n## Coverage target\nCurrent: 0%\nTarget: \u003e85%\n\n## Related\n- Parent epic: bd-kyll\n- Original issue: bd-7z4","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T13:08:37.669214532-07:00","updated_at":"2025-12-23T21:44:33.169062-08:00","closed_at":"2025-12-23T21:44:33.169062-08:00","dependencies":[{"issue_id":"bd-d28c","depends_on_id":"bd-kyll","type":"parent-child","created_at":"2025-12-18T13:08:37.70588226-07:00","created_by":"mhwilkie"}]} +{"id":"bd-crgr","title":"GH#517: Claude sets priority wrong on new install","description":"Claude uses 'medium/high/low' for priority instead of P0-P4. Update bd prime/onboard output to be clearer about priority syntax. See GitHub issue #517.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:34.803084-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-gqxd","title":"Enrich MutationEvent with title and assignee","description":"Current MutationEvent only has IssueID, no context. Add Title and Assignee fields so activity feeds can display meaningful info without extra lookups. Emit these fields when creating mutation events in server_core.go.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-23T16:26:34.907259-08:00","updated_at":"2025-12-23T16:39:39.229462-08:00","closed_at":"2025-12-23T16:39:39.229462-08:00"} +{"id":"bd-cils","title":"Work on beads-2nh: Fix gt spawn --issue to find issues in...","description":"Work on beads-2nh: Fix gt spawn --issue to find issues in rig's beads database. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:55:47.573854-08:00","updated_at":"2025-12-20T00:49:51.927884-08:00","closed_at":"2025-12-19T23:28:28.605343-08:00"} +{"id":"bd-pbh.2","title":"Update CHANGELOG.md for 0.30.4","description":"1. Change `## [Unreleased]` to `## [0.30.4] - 2025-12-17`\n2. Add new empty `## [Unreleased]` section at top\n3. Ensure all changes since 0.30.3 are documented\n","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.956332-08:00","updated_at":"2025-12-17T21:46:46.214512-08:00","closed_at":"2025-12-17T21:46:46.214512-08:00","dependencies":[{"issue_id":"bd-pbh.2","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.95683-08:00","created_by":"daemon"}]} +{"id":"bd-d73u","title":"Re: Thread Test 2","description":"Great! Thread is working well.","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:21:46.655093-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-d73u","depends_on_id":"bd-vpan","type":"replies-to","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} +{"id":"bd-l7y3","title":"bd mol bond --pour should set Wisp=false","description":"In mol_bond.go bondProtoMol(), opts.Wisp is hardcoded to true (line 392). This ignores the --pour flag. 
When user specifies --pour to make an issue persistent, the Wisp field should be false so the issue is not marked for bulk deletion.\n\nCurrent behavior:\n- --pour flag correctly selects regular storage (not wisp storage)\n- But opts.Wisp=true means spawned issues are still marked for cleanup when closed\n\nExpected behavior:\n- --pour should set Wisp=false so persistent issues are not auto-cleaned\n\nComparison with mol_spawn.go (line 204):\n wisp := !pour // Correctly respects --pour flag\n result, err := spawnMolecule(ctx, store, subgraph, vars, assignee, actor, wisp)\n\nFix: Pass pour flag to bondProtoMol and set opts.Wisp = !pour","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-23T15:15:00.562346-08:00","updated_at":"2025-12-23T15:25:22.53144-08:00","closed_at":"2025-12-23T15:25:22.53144-08:00"} +{"id":"bd-dtl8","title":"Test deleteViaDaemon RPC client integration","description":"Add comprehensive tests for the deleteViaDaemon function (cmd/bd/delete.go:21) which handles client-side RPC deletion calls.\n\n## Function under test\n- deleteViaDaemon: CLI command handler that sends delete requests to daemon via RPC\n\n## Test scenarios needed\n1. Successful deletion via daemon\n2. Cascade deletion through daemon\n3. Force deletion through daemon\n4. Dry-run mode (no actual deletion)\n5. Error handling:\n - Daemon unavailable\n - Invalid issue IDs\n - Dependency conflicts\n6. JSON output validation\n7. 
Human-readable output formatting\n\n## Coverage target\nCurrent: 0%\nTarget: \u003e80%\n\n## Related\n- Parent epic: bd-kyll\n- Original issue: bd-7z4","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-18T13:08:29.805706253-07:00","updated_at":"2025-12-23T21:22:12.35566-08:00","dependencies":[{"issue_id":"bd-dtl8","depends_on_id":"bd-kyll","type":"parent-child","created_at":"2025-12-18T13:08:29.807984381-07:00","created_by":"mhwilkie"}]} +{"id":"bd-aydr.2","title":"Implement backup functionality for reset","description":"Add backup capability that can be used by reset command.\n\n## Functionality\n- Copy .beads/ to .beads-backup-{timestamp}/\n- Timestamp format: YYYYMMDD-HHMMSS\n- Preserve file permissions\n- Return backup path for user feedback\n\n## Location\n`internal/reset/backup.go` - keep with reset package for now (YAGNI)\n\n## Interface\n```go\nfunc CreateBackup(beadsDir string) (backupPath string, err error)\n```\n\n## Notes\n- Simple recursive file copy, no compression needed\n- Error if backup dir already exists (unlikely with timestamp)\n- Backup directories SHOULD be gitignored\n- Add `.beads-backup-*/` pattern to .beads/.gitignore template in doctor package\n- Consider: ListBackups() for future `bd backup list` command (not for this PR)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:51.306103+11:00","updated_at":"2025-12-13T10:13:32.610819+11:00","closed_at":"2025-12-13T09:20:20.590488+11:00","dependencies":[{"issue_id":"bd-aydr.2","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:51.306474+11:00","created_by":"daemon"}]} +{"id":"bd-pbh.21","title":"Final release verification","description":"Verify all release artifacts are accessible:\n\n- [ ] `bd --version` shows 0.30.4\n- [ ] `bd version --daemon` shows 0.30.4\n- [ ] GitHub release exists: https://github.com/steveyegge/beads/releases/tag/v0.30.4\n- [ ] `brew upgrade beads \u0026\u0026 bd --version` shows 0.30.4 
(if using Homebrew)\n- [ ] `pip show beads-mcp` shows 0.30.4\n- [ ] npm package available at 0.30.4\n- [ ] `bd info --whats-new` shows 0.30.4 notes\n\nRun final checks:\n```bash\nbd --version\nbd version --daemon\npip show beads-mcp | grep Version\nbd info --whats-new\n```\n","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.141249-08:00","updated_at":"2025-12-17T21:46:46.390985-08:00","closed_at":"2025-12-17T21:46:46.390985-08:00","dependencies":[{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.19","type":"blocks","created_at":"2025-12-17T21:19:11.373656-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.20","type":"blocks","created_at":"2025-12-17T21:19:11.382-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.15","type":"blocks","created_at":"2025-12-17T21:19:11.389733-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.16","type":"blocks","created_at":"2025-12-17T21:19:11.398347-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.141549-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.21","depends_on_id":"bd-pbh.18","type":"blocks","created_at":"2025-12-17T21:19:11.364839-08:00","created_by":"daemon"}]} +{"id":"bd-1tw","title":"Fix G104 errors unhandled in internal/storage/sqlite/queries.go:1186","description":"Linting issue: G104: Errors unhandled (gosec) at internal/storage/sqlite/queries.go:1186:2. Error: rows.Close()","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:13.051671889-07:00","updated_at":"2025-12-17T23:13:40.53486-08:00","closed_at":"2025-12-17T16:46:11.0289-08:00"} +{"id":"bd-dqck","title":"Version Bump: test-squash","description":"Release checklist for version test-squash. 
This molecule ensures all release steps are completed properly.","status":"tombstone","priority":1,"issue_type":"epic","created_at":"2025-12-21T13:52:33.065408-08:00","updated_at":"2025-12-21T13:53:41.946036-08:00","deleted_at":"2025-12-21T13:53:41.946036-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"epic"} +{"id":"bd-uwkp","title":"Phase 2.4: Git merge driver optimization for TOON format","description":"Optimize git 3-way merge for TOON line-oriented format.\n\n## Overview\nTOON is line-oriented (unlike binary formats), enabling smarter git merge strategies. Implement custom merge driver to handle TOON-specific merge patterns.\n\n## Required Work\n\n### 2.4.1 TOON Merge Driver\n- [ ] Create .git/info/attributes entry for *.toon files\n- [ ] Implement custom merge driver script/command\n- [ ] Handle tabular format row merges (line-based 3-way)\n- [ ] Handle YAML-style format merges\n- [ ] Conflict markers for unsolvable conflicts\n\n### 2.4.2 Merge Patterns\n- [ ] Row addition: both branches add different rows β†’ union\n- [ ] Row deletion: one branch deletes, other modifies β†’ conflict (manual review)\n- [ ] Row modification: concurrent field changes β†’ intelligent merge or conflict\n- [ ] Field ordering changes: ignore (TOON format resilient to order)\n\n### 2.4.3 Testing \u0026 Documentation\n- [ ] Unit tests for merge scenarios (3-way merge logic)\n- [ ] Integration tests with actual git merges\n- [ ] Conflict scenario testing\n- [ ] Documentation of merge strategy\n\n## Success Criteria\n- Git merge handles TOON conflicts intelligently\n- Fewer manual merge conflicts than JSONL\n- Round-trip preserved through merges\n- All 70+ tests still passing\n- Git history stays clean (minimal conflict 
markers)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:43:14.339238776-07:00","updated_at":"2025-12-21T14:42:26.434306-08:00","closed_at":"2025-12-21T14:42:26.434306-08:00","dependencies":[{"issue_id":"bd-uwkp","depends_on_id":"bd-iic1","type":"discovered-from","created_at":"2025-12-19T14:43:14.34427988-07:00","created_by":"daemon"}]} +{"id":"bd-tj00","title":"Update local installation","description":"go build -o ~/.local/bin/bd ./cmd/bd \u0026\u0026 codesign -s - ~/.local/bin/bd","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:20.616907-08:00","updated_at":"2025-12-20T21:55:42.756171-08:00","closed_at":"2025-12-20T21:55:42.756171-08:00","dependencies":[{"issue_id":"bd-tj00","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:20.619834-08:00","created_by":"daemon"},{"issue_id":"bd-tj00","depends_on_id":"bd-9l0h","type":"blocks","created_at":"2025-12-20T21:53:29.817989-08:00","created_by":"daemon"}]} +{"id":"bd-lw0x","title":"Fix bd sync race condition with daemon causing dirty working directory","description":"After bd sync completes with sync.branch mode, subsequent bd commands or daemon file watcher would see a hash mismatch and trigger auto-import, which then schedules re-export, dirtying the working directory.\n\n**Root cause:**\n1. bd sync exports JSONL with NEW content (hash H1)\n2. bd sync updates jsonl_content_hash = H1 in DB\n3. bd sync restores JSONL from HEAD (OLD content, hash H0)\n4. Now: file hash = H0, DB hash = H1 (MISMATCH)\n5. Daemon or next CLI command sees mismatch, imports from OLD JSONL\n6. Import triggers re-export β†’ file is dirty\n\n**Fix:**\nAfter restoreBeadsDirFromBranch(), update jsonl_content_hash to match the restored file's hash. 
This ensures daemon and CLI see file hash = DB hash β†’ no spurious import/export cycle.\n\nRelated: bd-c83r (multiple daemon prevention)","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-13T06:42:17.130839-08:00","updated_at":"2025-12-13T06:43:33.329042-08:00","closed_at":"2025-12-13T06:43:33.329042-08:00"} +{"id":"bd-0kai","title":"Work on beads-ocs: Thin shim hooks to eliminate version d...","description":"Work on beads-ocs: Thin shim hooks to eliminate version drift (GH#615). Replace full hook scripts with thin shims that call bd hooks run. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:57:22.91347-08:00","updated_at":"2025-12-20T00:49:51.926425-08:00","closed_at":"2025-12-19T23:24:08.828172-08:00"} +{"id":"bd-rece","title":"Phase 1.1: TOON Library Integration - Add gotoon dependency","description":"Add gotoon (github.com/alpkeskin/gotoon) to go.mod and create internal/toon wrapper package for TOON encoding/decoding. This enables bdtoon to encode Issue structs to TOON format and decode TOON back to issues.\n\n## Subtasks\n1. Add gotoon dependency: go get github.com/alpkeskin/gotoon\n2. Create internal/toon package with wrapper functions\n3. Write encode tests for Issue struct round-trip conversion\n4. Write decode tests for TOON to Issue conversion\n5. 
Add gotoon API options to wrapper (indent, delimiter, length markers)\n\n## Success Criteria\n- go.mod includes gotoon dependency\n- internal/toon/encode.go exports EncodeTOON(issues) ([]byte, error)\n- internal/toon/decode.go exports DecodeTOON(data []byte) ([]Issue, error)\n- Round-trip tests verify Issue β†’ TOON β†’ Issue produces identical data\n- Tests pass with: go test ./internal/toon -v","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T11:48:30.018161133-07:00","updated_at":"2025-12-19T12:53:56.808833405-07:00","closed_at":"2025-12-19T12:53:56.808833405-07:00"} +{"id":"bd-llfl","title":"Improve test coverage for cmd/bd CLI (26.2% β†’ 50%)","description":"The main CLI package (cmd/bd) has only 26.2% test coverage. CLI commands should have at least 50% coverage to ensure reliability.\n\nKey areas with low/no coverage:\n- daemon_autostart.go (multiple 0% functions)\n- compact.go (several 0% functions)\n- Various command handlers\n\nCurrent coverage: 26.2%\nTarget coverage: 50%","status":"in_progress","priority":2,"issue_type":"task","created_at":"2025-12-13T20:43:03.123341-08:00","updated_at":"2025-12-23T22:31:40.043474-08:00"} +{"id":"bd-ao0s","title":"bd graph crashes with --no-daemon on closed issues","description":"The `bd graph` command panics with nil pointer dereference when using `--no-daemon` flag on an issue with closed children.\n\n**Reproduction:**\n```bash\nbd graph bd-qqc --no-daemon\n# panic: runtime error: invalid memory address or nil pointer dereference\n# in main.computeDependencyCounts\n```\n\n**Stack trace:**\n```\npanic: runtime error: invalid memory address or nil pointer dereference\n[signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x1010bdfb0]\n\ngoroutine 1 [running]:\nmain.computeDependencyCounts(...)\n /Users/stevey/gt/beads/crew/emma/cmd/bd/graph.go:428\nmain.renderGraph(0x1400033bb80, 0x0)\n /Users/stevey/gt/beads/crew/emma/cmd/bd/graph.go:307 +0x300\n```\n\n**Location:** 
cmd/bd/graph.go:428 - computeDependencyCounts() not handling nil case","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-18T22:57:36.972585-08:00","updated_at":"2025-12-20T01:13:29.206821-08:00","closed_at":"2025-12-20T01:13:29.206821-08:00"} +{"id":"bd-ifuw","title":"test hook pin fix","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-23T04:43:15.598698-08:00","updated_at":"2025-12-23T04:51:29.438139-08:00","deleted_at":"2025-12-23T04:51:29.438139-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-kyll","title":"Add daemon-side delete operation tests","description":"Follow-up epic for PR #626: Add comprehensive test coverage for delete operations at the daemon/RPC layer. PR #626 successfully added storage layer tests but identified gaps in daemon-side delete operations and RPC integration testing.\n\n## Scope\nTests needed for:\n1. deleteViaDaemon (cmd/bd/delete.go:21) - RPC client-side deletion command\n2. Daemon RPC delete handler - Server-side deletion via daemon\n3. createTombstone wrapper (cmd/bd/delete.go:335) - Tombstone creation wrapper\n4. 
deleteIssue wrapper (cmd/bd/delete.go:349) - Direct deletion wrapper\n\n## Coverage targets\n- Delete via RPC daemon (both success and error paths)\n- Cascade deletion through daemon\n- Force deletion through daemon\n- Dry-run mode validation\n- Tombstone creation and verification\n- Error handling and edge cases","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-18T13:08:26.039663309-07:00","updated_at":"2025-12-18T13:08:26.039663309-07:00"} +{"id":"bd-0j5y","title":"Merge: bd-05a8","description":"branch: polecat/valkyrie\ntarget: main\nsource_issue: bd-05a8\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:50:27.125378-08:00","updated_at":"2025-12-23T21:21:57.69697-08:00","closed_at":"2025-12-23T21:21:57.69697-08:00"} +{"id":"bd-2oo.3","title":"Update all code to use dependencies API for edges","description":"Find and update all code that reads/writes:\n- replies_to field -\u003e use dependency API\n- relates_to field -\u003e use dependency API\n- duplicate_of field -\u003e use dependency API\n- superseded_by field -\u003e use dependency API\n\nCommands affected: bd mail, bd relate, bd duplicate, bd supersede, bd show, etc.","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:01.317006-08:00","updated_at":"2025-12-18T02:49:10.59233-08:00","closed_at":"2025-12-18T02:49:10.59233-08:00","dependencies":[{"issue_id":"bd-2oo.3","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:01.31856-08:00","created_by":"daemon"}]} +{"id":"bd-otf4","title":"Code Review: PR #481 - Context Engineering Optimizations","description":"Comprehensive code review of the merged context engineering PR (PR #481) that reduces MCP context usage by 80-90%.\n\n## Summary\nThe PR successfully implements lazy tool schema loading and minimal issue models to reduce context window usage. 
Overall implementation is solid and well-tested.\n\n## Positive Findings\nβœ… Well-designed models (IssueMinimal, CompactedResult)\nβœ… Comprehensive test coverage (28 tests, all passing)\nβœ… Clear documentation and comments\nβœ… Backward compatibility preserved (show() still returns full Issue)\nβœ… Sensible defaults (COMPACTION_THRESHOLD=20, PREVIEW_COUNT=5)\nβœ… Tool catalog complete with all 15 tools documented\n\n## Issues Identified\nSee linked issues for specific followup tasks.\n\n## Context Engineering Architecture\n- discover_tools(): List tool names only (~500 bytes vs ~15KB)\n- get_tool_info(name): Get specific tool details on-demand\n- IssueMinimal: Lightweight model for list views (~80 bytes vs ~400 bytes)\n- CompactedResult: Auto-compacts results with \u003e20 issues\n- _to_minimal(): Conversion function (efficient, no N+1 issues)","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:24:13.523532-08:00","updated_at":"2025-12-14T14:24:13.523532-08:00"} +{"id":"bd-6pc","title":"Implement bd pin/unpin commands","description":"Add 'bd pin \u003cid\u003e' and 'bd unpin \u003cid\u003e' commands to toggle the pinned status of issues. 
Should support multiple IDs like other bd commands.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:28.292937-08:00","updated_at":"2025-12-19T17:43:35.713398-08:00","closed_at":"2025-12-19T00:35:31.612589-08:00","dependencies":[{"issue_id":"bd-6pc","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.352848-08:00","created_by":"daemon"},{"issue_id":"bd-6pc","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.119852-08:00","created_by":"daemon"}]} +{"id":"bd-2oo.1","title":"Add metadata and thread_id columns to dependencies table","description":"Schema changes:\n- ALTER TABLE dependencies ADD COLUMN metadata TEXT DEFAULT '{}'\n- ALTER TABLE dependencies ADD COLUMN thread_id TEXT DEFAULT ''\n- CREATE INDEX idx_dependencies_thread ON dependencies(thread_id) WHERE thread_id != ''","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:00.468223-08:00","updated_at":"2025-12-18T02:49:10.575133-08:00","closed_at":"2025-12-18T02:49:10.575133-08:00","dependencies":[{"issue_id":"bd-2oo.1","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:00.470012-08:00","created_by":"daemon"}]} +{"id":"bd-pbh.11","title":"Commit changes and create v0.30.4 tag","description":"```bash\ngit add -A\ngit commit -m \"chore: Bump version to 0.30.4\"\ngit tag -a v0.30.4 -m \"Release v0.30.4\"\n```\n\n\n```verify\ngit describe --tags --exact-match HEAD 2\u003e/dev/null | grep -q 
'v0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.056575-08:00","updated_at":"2025-12-17T21:46:46.292166-08:00","closed_at":"2025-12-17T21:46:46.292166-08:00","dependencies":[{"issue_id":"bd-pbh.11","depends_on_id":"bd-pbh.3","type":"blocks","created_at":"2025-12-17T21:19:11.255362-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.11","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.056934-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.11","depends_on_id":"bd-pbh.10","type":"blocks","created_at":"2025-12-17T21:19:11.234175-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.11","depends_on_id":"bd-pbh.2","type":"blocks","created_at":"2025-12-17T21:19:11.245316-08:00","created_by":"daemon"}]} +{"id":"bd-cb64c226.10","title":"Delete server_cache_storage.go","description":"Remove the entire cache implementation file (~286 lines)","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:38.729299-07:00","updated_at":"2025-12-17T23:18:29.110716-08:00","deleted_at":"2025-12-17T23:18:29.110716-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-o91r","title":"Polymorphic bond command: bd mol bond A B","description":"Implement proto-to-proto bonding to create compound protos.\n\nCOMMAND: bd mol bond proto-feature proto-testing [--as proto-feature-tested] [--type sequential]\n\nBEHAVIOR:\n- Load both proto subgraphs\n- Create new compound proto with combined structure\n- B's root becomes child of A's root (sequential) or sibling (parallel)\n- Wire dependencies: B depends on A's leaf nodes (sequential) or runs parallel\n- Store bonded_from metadata for lineage tracking\n\nFLAGS:\n- --as NAME: Custom ID for compound proto (default: generates hash)\n- --type: sequential (default) or parallel\n- --dry-run: Preview compound structure\n\nOUTPUT:\n- New compound proto in catalog\n- Shows combined variable 
requirements","notes":"UPDATE: bond is now polymorphic - handles proto+proto, proto+mol, and mol+mol based on operand types. Separate 'attach' command eliminated.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T00:58:55.604705-08:00","updated_at":"2025-12-21T10:10:25.385995-08:00","closed_at":"2025-12-21T10:10:25.385995-08:00","dependencies":[{"issue_id":"bd-o91r","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.30026-08:00","created_by":"daemon"},{"issue_id":"bd-o91r","depends_on_id":"bd-mh4w","type":"blocks","created_at":"2025-12-21T00:59:51.569391-08:00","created_by":"daemon"},{"issue_id":"bd-o91r","depends_on_id":"bd-rnnr","type":"blocks","created_at":"2025-12-21T00:59:51.652397-08:00","created_by":"daemon"}]} +{"id":"bd-aks","title":"Add tests for import/export functionality","description":"Import/export functions like ImportIssues, exportToJSONLWithStore, and AutoImportIfNewer have low coverage. These are critical for data integrity and multi-repo synchronization.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:53.067006711-07:00","updated_at":"2025-12-19T09:54:57.011374404-07:00","closed_at":"2025-12-18T10:13:11.821944156-07:00","dependencies":[{"issue_id":"bd-aks","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:53.07185201-07:00","created_by":"matt"}]} +{"id":"bd-aay","title":"Warn on invalid depends_on references in workflow templates","description":"workflow.go:780-781 silently skips invalid dependency names. 
Should log a warning when a depends_on reference doesn't match any task ID in the template.","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-17T22:23:04.325253-08:00","updated_at":"2025-12-17T22:34:07.309495-08:00","closed_at":"2025-12-17T22:34:07.309495-08:00"} +{"id":"bd-nuh1","title":"GH#403: bd doctor --fix circular error message","description":"bd doctor --fix suggests running bd doctor --fix for deletions manifest issue. Fix to provide actual resolution. See GitHub issue #403.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:16.290018-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-w193","title":"Work on beads-399: Add omitempty to JSONL fields for smal...","description":"Work on beads-399: Add omitempty to JSONL fields for smaller notifications. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:55:37.440894-08:00","updated_at":"2025-12-19T23:28:32.42751-08:00","closed_at":"2025-12-19T23:23:09.542288-08:00"} +{"id":"bd-pvu0","title":"Merge: bd-4opy","description":"branch: polecat/angharad\ntarget: main\nsource_issue: bd-4opy\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T00:24:44.057267-08:00","updated_at":"2025-12-23T01:33:25.730271-08:00","closed_at":"2025-12-23T01:33:25.730271-08:00"} +{"id":"bd-qqc.4","title":"Run tests and verify build","description":"Run the test suite to verify nothing is broken:\n\n```bash\n./scripts/test.sh\n```\n\nOr manually:\n```bash\ngo build ./cmd/bd/...\ngo test ./...\n```\n\nFix any failures before 
proceeding.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:59:52.308314-08:00","updated_at":"2025-12-18T23:34:18.631671-08:00","closed_at":"2025-12-18T22:41:41.856318-08:00","dependencies":[{"issue_id":"bd-qqc.4","depends_on_id":"bd-qqc.3","type":"blocks","created_at":"2025-12-18T13:00:51.014568-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.4","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T12:59:52.308943-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.4","depends_on_id":"bd-qqc.1","type":"blocks","created_at":"2025-12-18T13:00:40.62142-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.4","depends_on_id":"bd-qqc.2","type":"blocks","created_at":"2025-12-18T13:00:45.820132-08:00","created_by":"stevey"}]} +{"id":"bd-pgcs","title":"Clean up orphaned child issues (bd-cb64c226.*, bd-cbed9619.*)","description":"## Problem\n\nEvery bd command shows warnings about 12 orphaned child issues:\n- bd-cb64c226.1, .6, .8, .9, .10, .12, .13\n- bd-cbed9619.1, .2, .3, .4, .5\n\nThese are hierarchical IDs (parent.child format) where the parent issues no longer exist.\n\n## Impact\n\n- Clutters output of every bd command\n- Confusing for users\n- Indicates incomplete cleanup of deleted parent issues\n\n## Proposed Solution\n\n1. Delete the orphaned issues since their parents no longer exist:\n ```bash\n bd delete bd-cb64c226.1 bd-cb64c226.6 bd-cb64c226.8 ...\n ```\n\n2. 
Or convert them to top-level issues if they contain useful content\n\n## Investigation Needed\n\n- What were the parent issues bd-cb64c226 and bd-cbed9619?\n- Why were they deleted without their children?\n- Should bd delete cascade to children automatically?","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T23:06:17.240571-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-4y4g","title":"Bump version in all files","description":"Run ./scripts/bump-version.sh {{version}} to update 10 version files. Then run with --commit after info.go is updated.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:01.859728-08:00","updated_at":"2025-12-18T22:46:24.537336-08:00","closed_at":"2025-12-18T22:46:24.537336-08:00","dependencies":[{"issue_id":"bd-4y4g","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.623724-08:00","created_by":"daemon"},{"issue_id":"bd-4y4g","depends_on_id":"bd-8v2","type":"blocks","created_at":"2025-12-18T22:43:20.823329-08:00","created_by":"daemon"}]} +{"id":"bd-fgw3","title":"Update local installation","description":"Run install script or brew upgrade to get new version locally: curl -fsSL .../install.sh | bash","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:05.052016-08:00","updated_at":"2025-12-20T00:49:51.928221-08:00","closed_at":"2025-12-20T00:25:52.805029-08:00","dependencies":[{"issue_id":"bd-fgw3","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:15.248427-08:00","created_by":"daemon"},{"issue_id":"bd-fgw3","depends_on_id":"bd-si4g","type":"blocks","created_at":"2025-12-19T22:56:23.497325-08:00","created_by":"daemon"}]} +{"id":"bd-toy3","title":"Test 
hook","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T18:33:39.717036-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-801b","title":"Merge: bd-bqcc","description":"branch: polecat/capable\ntarget: main\nsource_issue: bd-bqcc\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T00:26:04.306756-08:00","updated_at":"2025-12-23T01:33:25.728087-08:00","closed_at":"2025-12-23T01:33:25.728087-08:00"} +{"id":"bd-kwro.1","title":"Schema: Add message type and new fields","description":"Add to internal/storage/sqlite/schema.go and models:\n\nNew issue_type value:\n- message\n\nNew optional fields on Issue struct:\n- Sender string (who sent this)\n- Ephemeral bool (can be bulk-deleted)\n- RepliesTo string (issue ID for threading)\n- RelatesTo []string (issue IDs for knowledge graph)\n- Duplicates string (canonical issue ID)\n- SupersededBy string (replacement issue ID)\n\nUpdate:\n- internal/storage/sqlite/schema.go - add columns\n- internal/models/issue.go - add fields to struct\n- internal/storage/sqlite/sqlite.go - CRUD operations\n- Create migration from v0.30.1\n\nEnsure backward compatibility - all new fields optional.","status":"tombstone","priority":0,"issue_type":"task","created_at":"2025-12-16T03:01:19.777604-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-2vh3.3","title":"Tier 2: Basic bd mol squash command","description":"Add bd mol squash command for basic molecule execution compression.\n\n## Command\n\nbd mol squash \u003cmolecule-id\u003e [flags]\n --dry-run Preview what would be squashed\n --keep-children Don't delete ephemeral children after squash\n --json JSON output\n\n## 
Implementation\n\n1. Find all ephemeral children of molecule (parent-child deps)\n2. Concatenate child descriptions/notes into digest\n3. Create digest issue in main repo with:\n - Title: 'Molecule Execution Summary: \u003coriginal-title\u003e'\n - digest_of: [list of squashed child IDs]\n - ephemeral: false (digest is permanent)\n4. Delete ephemeral children (unless --keep-children)\n5. Link digest to parent work item\n\n## Schema Changes\n\nAdd to Issue struct:\n- SquashedAt *time.Time\n- SquashDigest string (ID of digest)\n- DigestOf []string (IDs of squashed children)\n\n## Acceptance Criteria\n\n- bd mol squash \u003cid\u003e creates digest, removes children\n- --dry-run shows preview\n- Digest has proper metadata linking","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T12:57:48.338114-08:00","updated_at":"2025-12-21T13:53:58.974433-08:00","closed_at":"2025-12-21T13:53:58.974433-08:00","dependencies":[{"issue_id":"bd-2vh3.3","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:57:48.338636-08:00","created_by":"stevey"},{"issue_id":"bd-2vh3.3","depends_on_id":"bd-2vh3.2","type":"blocks","created_at":"2025-12-21T12:58:22.601321-08:00","created_by":"stevey"}]} +{"id":"bd-ork0","title":"Add comments to 30+ silently ignored errors or fix them","description":"Code health review found 30+ instances of error suppression using blank identifier without explanation:\n\nGood examples (with comments):\n- merge.go: _ = gitRmCmd.Run() // Ignore errors\n- daemon_watcher.go: _ = watcher.Add(...) 
// Ignore error\n\nBad examples (no context):\n- create.go:213: dbPrefix, _ = store.GetConfig(ctx, \"issue_prefix\")\n- daemon_sync_branch.go: _ = daemonClient.Close()\n- migrate_hash_ids.go, version_tracking.go: _ = store.Close()\n\nFix: Add comments explaining WHY errors are ignored, or handle them properly.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-16T18:17:25.899372-08:00","updated_at":"2025-12-22T21:28:32.898258-08:00","closed_at":"2025-12-22T21:28:32.898258-08:00"} +{"id":"bd-hy9p","title":"Add --body-file flag to bd create for reading descriptions from files","description":"## Problem\n\nCreating issues with long/complex descriptions via CLI requires shell escaping gymnastics:\n\n```bash\n# Current workaround - awkward heredoc quoting\nbd create --title=\"...\" --description=\"$(cat \u003c\u003c'EOF'\n...markdown...\nEOF\n)\"\n\n# Often fails with quote escaping errors in eval context\n# Agents resort to writing temp files then reading them\n```\n\n## Proposed Solution\n\nAdd `--body-file` and `--description-file` flags to read description from a file, matching `gh` CLI pattern.\n\n```bash\n# Natural pattern that aligns with training data\ncat \u003e /tmp/desc.md \u003c\u003c 'EOF'\n...markdown content...\nEOF\n\nbd create --title=\"...\" --body-file=/tmp/desc.md\n```\n\n## Implementation\n\n### 1. Add new flags to `bd create`\n\n```go\ncreateCmd.Flags().String(\"body-file\", \"\", \"Read description from file (use - for stdin)\")\ncreateCmd.Flags().String(\"description-file\", \"\", \"Alias for --body-file\")\n```\n\n### 2. Flag precedence\n\n- If `--body-file` or `--description-file` is provided, read from file\n- If value is `-`, read from stdin\n- Otherwise fall back to `--body` or `--description` flag\n- If neither provided, description is empty (current behavior)\n\n### 3. 
Error handling\n\n- File doesn't exist β†’ clear error message\n- File not readable β†’ clear error message\n- stdin specified but not available β†’ clear error message\n\n## Benefits\n\nβœ… **Matches training data**: `gh issue create --body-file file.txt` is a common pattern\nβœ… **No shell escaping issues**: File content is read directly\nβœ… **Works with any content**: Markdown, special characters, quotes, etc.\nβœ… **Agent-friendly**: Agents already write complex content to temp files\nβœ… **User-friendly**: Easier for humans too when pasting long descriptions\n\n## Related Commands\n\nConsider adding similar support to:\n- `bd update --body-file` (for updating descriptions)\n- `bd comment --body-file` (if/when we add comments)\n\n## Examples\n\n```bash\n# From file\nbd create --title=\"Add new feature\" --body-file=feature.md\n\n# From stdin\necho \"Quick description\" | bd create --title=\"Bug fix\" --body-file=-\n\n# With other flags\nbd create \\\n --title=\"Security issue\" \\\n --type=bug \\\n --priority=0 \\\n --body-file=security-report.md \\\n --label=security\n```\n\n## Testing\n\n- Test with normal files\n- Test with stdin (`-`)\n- Test with non-existent files (error handling)\n- Test with binary files (should handle gracefully)\n- Test with empty files (valid - empty description)\n- Test that `--description-file` and `--body-file` are equivalent aliases","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-11-22T00:02:08.762684-08:00","updated_at":"2025-12-17T23:13:40.536024-08:00","closed_at":"2025-12-17T17:28:52.505239-08:00"} +{"id":"bd-68bf","title":"Code review: bd mol bond implementation","description":"Review the mol bond command implementation before shipping.\n\nFocus areas:\n1. runMolBond() - polymorphic dispatch logic correctness\n2. bondProtoProto() - compound proto creation, dependency wiring\n3. bondProtoMol() / bondMolProto() - spawn and attach logic\n4. bondMolMol() - joining molecules, lineage tracking\n5. 
BondRef usage - is lineage tracked correctly?\n6. Error handling - are all failure modes covered?\n7. Edge cases - what could go wrong?\n\nFile: cmd/bd/mol.go (lines 485-859)\nCommit: 386b513e","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T10:13:09.425229-08:00","updated_at":"2025-12-21T11:18:14.206869-08:00","closed_at":"2025-12-21T11:18:14.206869-08:00","dependencies":[{"issue_id":"bd-68bf","depends_on_id":"bd-o91r","type":"discovered-from","created_at":"2025-12-21T10:13:09.426471-08:00","created_by":"daemon"}]} +{"id":"bd-xo1o.3","title":"bd activity: Real-time molecule state feed","description":"Implement activity feed command for watching molecule state transitions.\n\n## Commands\n```bash\nbd activity --follow # Real-time streaming\nbd activity --mol \u003cid\u003e # Activity for specific molecule\nbd activity --since 5m # Last 5 minutes\nbd activity --type step # Only step transitions\n```\n\n## Output Format\n```\n[14:32:01] βœ“ patrol-x7k.inbox-check completed\n[14:32:03] βœ“ patrol-x7k.check-refinery completed\n[14:32:08] + patrol-x7k.arm-ace bonded (5 steps)\n[14:32:09] β†’ patrol-x7k.arm-ace.capture in_progress\n[14:32:10] βœ“ patrol-x7k.arm-ace.capture completed\n[14:32:14] βœ“ patrol-x7k.arm-ace.decide completed (action: nudge-1)\n[14:32:17] βœ“ patrol-x7k.arm-ace COMPLETE\n[14:32:23] βœ“ patrol-x7k SQUASHED β†’ digest-x7k\n```\n\n## Event Types\n- `+` bonded - New molecule/step created\n- `β†’` in_progress - Step started\n- `βœ“` completed - Step/molecule finished\n- `βœ—` failed - Step failed\n- `⊘` burned - Wisp discarded\n- `β—‰` squashed - Wisp condensed to digest\n\n## Implementation\n- Could use SQLite triggers or polling\n- --follow uses OS file watching or polling\n- Filter by mol ID, type, time 
range","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T02:33:16.298764-08:00","updated_at":"2025-12-23T03:18:33.434079-08:00","closed_at":"2025-12-23T03:18:33.434079-08:00","dependencies":[{"issue_id":"bd-xo1o.3","depends_on_id":"bd-xo1o","type":"parent-child","created_at":"2025-12-23T02:33:16.301522-08:00","created_by":"daemon"}]} +{"id":"bd-au0.6","title":"Add comprehensive filters to bd export","description":"Enhance bd export with filtering options for selective exports.\n\n**Currently only has:**\n- --status\n\n**Add filters:**\n- --label, --label-any\n- --assignee\n- --type\n- --priority, --priority-min, --priority-max\n- --created-after, --created-before\n- --updated-after, --updated-before\n\n**Use case:**\n- Export only open issues: bd export --status open\n- Export high-priority bugs: bd export --type bug --priority-max 1\n- Export recent issues: bd export --created-after 2025-01-01\n\n**Files to modify:**\n- cmd/bd/export.go\n- Reuse filter logic from list.go","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-21T21:07:19.431307-05:00","updated_at":"2025-12-23T21:22:14.757819-08:00","closed_at":"2025-12-23T20:41:46.101952-08:00","dependencies":[{"issue_id":"bd-au0.6","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:19.432983-05:00","created_by":"daemon"}]} +{"id":"bd-r6a.1","title":"Revert/remove YAML workflow system","description":"Revert the recent commit and remove all YAML workflow code:\n\n1. `git revert aae8407a` (the commit we just pushed with workflow fixes)\n2. Remove `cmd/bd/templates/workflows/` directory\n3. Remove workflow.go or gut it to minimal stub\n4. Remove WorkflowTemplate types from internal/types/workflow.go\n5. 
Remove any workflow-related RPC handlers\n\nKeep only minimal scaffolding if needed for the new template system.","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-17T22:42:07.339684-08:00","updated_at":"2025-12-17T22:46:08.606088-08:00","closed_at":"2025-12-17T22:46:08.606088-08:00","dependencies":[{"issue_id":"bd-r6a.1","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:42:07.340117-08:00","created_by":"daemon"}]} +{"id":"bd-kqw0","title":"Update local installation","description":"Run install script or brew upgrade to get new version locally: curl -fsSL .../install.sh | bash","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066452-08:00","updated_at":"2025-12-21T13:53:49.656073-08:00","deleted_at":"2025-12-21T13:53:49.656073-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-pbh.4","title":"Update .claude-plugin/plugin.json to 0.30.4","description":"Update version field in .claude-plugin/plugin.json:\n```json\n\"version\": \"0.30.4\"\n```\n\n\n```verify\njq -e '.version == \"0.30.4\"' .claude-plugin/plugin.json\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.976866-08:00","updated_at":"2025-12-17T21:46:46.23159-08:00","closed_at":"2025-12-17T21:46:46.23159-08:00","dependencies":[{"issue_id":"bd-pbh.4","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.97729-08:00","created_by":"daemon"}]} +{"id":"bd-m7ib","title":"Add creator field to Issue struct","description":"Add Creator *EntityRef field to Issue. Tracks who created the issue. Optional, omitted if nil in JSONL. 
This enables CV chain tracking - every piece of work is attributed to its creator.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T17:53:31.599447-08:00","updated_at":"2025-12-22T20:03:24.264672-08:00","closed_at":"2025-12-22T20:03:24.264672-08:00","dependencies":[{"issue_id":"bd-m7ib","depends_on_id":"bd-7pwh","type":"parent-child","created_at":"2025-12-22T17:53:43.39957-08:00","created_by":"daemon"},{"issue_id":"bd-m7ib","depends_on_id":"bd-nmch","type":"blocks","created_at":"2025-12-22T17:53:47.826309-08:00","created_by":"daemon"}]} {"id":"bd-z8a6","title":"bd delete --from-file should add deleted issues to deletions manifest","description":"When using bd delete --from-file to bulk delete issues, the deleted issue IDs are not being added to the deletions.jsonl manifest.\n\nThis causes those issues to be resurrected during bd sync when git history scanning finds them in old commits.\n\nExpected: All deleted issues should be added to deletions.jsonl so they wont be reimported from git history.\n\nWorkaround: Manually add deletion records to deletions.jsonl.","status":"tombstone","priority":1,"issue_type":"bug","created_at":"2025-12-16T01:48:14.099855-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-zc3","title":"Add --pinned and --no-pinned flags to bd list","description":"Add filtering flags to bd list: --pinned shows only pinned issues, --no-pinned excludes pinned issues. 
Default behavior shows all issues with a pin indicator.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:29.518028-08:00","updated_at":"2025-12-21T11:30:01.484978-08:00","closed_at":"2025-12-21T11:30:01.484978-08:00","close_reason":"Already implemented - --pinned and --no-pinned flags exist in bd list","dependencies":[{"issue_id":"bd-zc3","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.256764-08:00","created_by":"daemon"},{"issue_id":"bd-zc3","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.486361-08:00","created_by":"daemon"}]} -{"id":"bd-zf5w","title":"bd mail uses git user.name for sender instead of BEADS_AGENT_NAME","description":"When sending mail via `bd mail send`, the sender field in the stored issue uses git config user.name instead of the BEADS_AGENT_NAME environment variable.\n\nReproduction:\n1. Set BEADS_AGENT_NAME=gastown-alpha\n2. Run: bd mail send mayor/ -s 'Test' -m 'Body'\n3. Check the issue.jsonl: sender is 'Steve Yegge' (git user.name) not 'gastown-alpha'\n\nExpected: The sender field should use BEADS_AGENT_NAME when set.\n\nThis breaks the mail system for multi-agent workflows where agents need to identify themselves by their role (polecat, refinery, etc.) rather than the human user's git identity.\n\nRelated: gt mail routing integration with Gas Town","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-20T21:46:33.646746-08:00","updated_at":"2025-12-20T21:59:25.771325-08:00","closed_at":"2025-12-20T21:59:25.771325-08:00","close_reason":"Not applicable - filed against stale bd v0.30.6"} -{"id":"bd-zgb9","title":"gt polecat done should auto-stop running session","description":"Currently 'gt polecat done' fails if session is running, requiring a separate 'gt session stop' first. 
This is unnecessary friction - done should just stop the session automatically since that's always what you want.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T04:11:23.899653-08:00","updated_at":"2025-12-23T04:12:13.029479-08:00","closed_at":"2025-12-23T04:12:13.029479-08:00","close_reason":"Moving to gastown - this is a gt issue not bd"} +{"id":"bd-kwjh.1","title":".beads-ephemeral/ storage backend","description":"Implement ephemeral storage layer for wisps.\n\n## Requirements\n- New storage location: .beads-ephemeral/issues.jsonl (sibling to .beads/)\n- Gitignored by default (add to .beads/.gitignore)\n- Same JSONL format as regular beads\n- Config option: ephemeral.directory (relative path)\n- ephemeral.enabled config flag\n\n## Storage Behavior\n- Ephemeral issues have `ephemeral: true` field\n- No sync to remote (local only)\n- No daemon tracking needed (transient)\n\n## Implementation\n- Add EphemeralStore in storage package\n- Initialize on demand when --ephemeral flag used\n- Share Issue struct, just different storage path","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-22T00:06:46.706026-08:00","updated_at":"2025-12-22T00:08:26.009875-08:00","dependencies":[{"issue_id":"bd-kwjh.1","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:06:46.706461-08:00","created_by":"daemon"}],"deleted_at":"2025-12-22T00:08:26.009875-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-0fvq","title":"bd doctor should recommend bd prime migration for existing repos","description":"bd doctor should detect old beads integration patterns and recommend migrating to bd prime approach.\n\n## Current behavior\n- bd doctor checks if Claude hooks are installed globally\n- Doesn't check project-level integration (AGENTS.md, CLAUDE.md)\n- Doesn't recommend migration for repos using old patterns\n\n## Desired behavior\nbd doctor should detect and suggest:\n\n1. 
**Old slash command pattern detected**\n - Check for /beads:* references in AGENTS.md, CLAUDE.md\n - Suggest: These slash commands are deprecated, use bd prime hooks instead\n \n2. **No agent documentation**\n - Check if AGENTS.md or CLAUDE.md exists\n - Suggest: Run 'bd onboard' or 'bd setup claude' to document workflow\n \n3. **Old MCP-only pattern**\n - Check for instructions to use MCP tools but no bd prime hooks\n - Suggest: Add bd prime hooks for better token efficiency\n\n4. **Migration path**\n - Show: 'Run bd setup claude to add SessionStart/PreCompact hooks'\n - Show: 'Update AGENTS.md to reference bd prime instead of slash commands'\n\n## Example output\n\n⚠ Warning: Old beads integration detected in CLAUDE.md\n Found: /beads:* slash command references (deprecated)\n Recommend: Migrate to bd prime hooks for better token efficiency\n Fix: Run 'bd setup claude' and update CLAUDE.md\n\nπŸ’‘ Tip: bd prime + hooks reduces token usage by 80-99% vs slash commands\n MCP mode: ~50 tokens vs ~10.5k for full MCP scan\n CLI mode: ~1-2k tokens with automatic context recovery\n\n## Benefits\n- Helps existing repos adopt new best practices\n- Clear migration path for users\n- Better token efficiency messaging","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-12T03:20:25.567748-08:00","updated_at":"2025-12-23T22:33:23.931274-08:00","closed_at":"2025-12-23T22:33:23.931274-08:00"} +{"id":"bd-3zzh","title":"Merge: bd-tvu3","description":"branch: polecat/Beader\ntarget: main\nsource_issue: bd-tvu3\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:36:55.016496-08:00","updated_at":"2025-12-23T19:12:08.347363-08:00","closed_at":"2025-12-23T19:12:08.347363-08:00"} +{"id":"bd-74w1","title":"Consolidate duplicate path-finding utilities (findJSONLPath, findBeadsDir, findGitRoot)","description":"Code health review found these functions defined in multiple places:\n\n- findJSONLPath() in autoflush.go:45-73 
and doctor/fix/migrate.go\n- findBeadsDir() in autoimport.go:197-239 (with git worktree handling)\n- findGitRoot() in autoimport.go:242-269 (Windows path conversion)\n\nThe beads package has public FindBeadsDir() and FindJSONLPath() APIs that should be used consistently.\n\nImpact: Bug fixes need to be applied in multiple places. Git worktree handling may not be replicated everywhere.\n\nFix: Consolidate all implementations to use the beads package APIs. Remove duplicates.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-16T18:17:16.694293-08:00","updated_at":"2025-12-22T21:13:46.83103-08:00","closed_at":"2025-12-22T21:13:46.83103-08:00"} +{"id":"bd-csnr","title":"activity --follow: Silent error handling","description":"In activity.go:175-179, when the daemon is down or errors occur during polling in --follow mode, errors are silently ignored:\n\n```go\nnewEvents, err := fetchMutations(lastPoll)\nif err != nil {\n // Daemon might be down, continue trying\n continue\n}\n```\n\nThis means:\n- Users won't know if the daemon is unreachable\n- Could appear frozen when actually failing\n- No indication of lost events\n\nShould at least show a warning after N consecutive failures, or show '...' 
indicator to show polling status.\n\nDiscovered during code review of bd-xo1o implementation.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T04:06:18.590743-08:00","updated_at":"2025-12-23T04:16:04.64978-08:00","closed_at":"2025-12-23T04:16:04.64978-08:00"} +{"id":"bd-qqc.13","title":"Upgrade beads-mcp to {{version}}","description":"Upgrade the MCP server via pip:\n\n```bash\npip install --upgrade beads-mcp\npip show beads-mcp | grep Version # Verify {{version}}\n```\n\nNote: Restart Claude Code or MCP session to use new version.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:08:04.318233-08:00","updated_at":"2025-12-18T23:09:05.77824-08:00","closed_at":"2025-12-18T23:09:05.77824-08:00","dependencies":[{"issue_id":"bd-qqc.13","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T23:08:04.318709-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.13","depends_on_id":"bd-qqc.11","type":"blocks","created_at":"2025-12-18T23:08:19.927825-08:00","created_by":"daemon"}]} +{"id":"bd-ymqn","title":"Code review: bd mol bond --ref and bd activity (bd-xo1o work)","description":"Review dave's recent commits for bd-xo1o (Dynamic Molecule Bonding):\n\n## Commits to Review\n- ee04b1ea: feat: add dynamic molecule bonding with --ref flag (bd-xo1o.1)\n- be520d90: feat: add bd activity command for real-time state feed (bd-xo1o.3)\n\n## Review Focus\n1. Code quality and correctness\n2. Error handling\n3. Edge cases\n4. Test coverage\n5. 
Documentation\n\n## Deliverables\n- File beads for any issues found\n- Note any concerns or suggestions\n- Verify the implementation matches the bd-xo1o epic requirements","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T03:47:55.217363-08:00","updated_at":"2025-12-23T04:11:00.226326-08:00","closed_at":"2025-12-23T04:11:00.226326-08:00"} +{"id":"bd-cbed9619.4","title":"Make DetectCollisions read-only (separate detection from modification)","description":"**Summary:** The project restructured the collision detection process in the database to separate read-only detection from state modification, eliminating race conditions and improving system reliability. This was achieved by introducing a two-phase approach: first detecting potential collisions, then applying resolution separately.\n\n**Key Decisions:**\n- Create read-only DetectCollisions method\n- Add RenameDetail to track potential issue renames\n- Implement atomic ApplyCollisionResolution function\n- Separate detection logic from database modification\n\n**Resolution:** The refactoring creates a more robust, composable collision handling mechanism that prevents partial failures and maintains database consistency during complex issue import scenarios.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T18:37:09.652326-07:00","updated_at":"2025-12-17T23:18:29.112637-08:00","dependencies":[{"issue_id":"bd-cbed9619.4","depends_on_id":"bd-cbed9619.5","type":"blocks","created_at":"2025-10-28T18:39:28.285653-07:00","created_by":"daemon"}],"deleted_at":"2025-12-17T23:18:29.112637-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-y4vz","title":"Work on beads-eub: Consolidated context tool for MCP serv...","description":"Work on beads-eub: Consolidated context tool for MCP server (GH#636). Merge set_context, where_am_i, init into single 'context' tool. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:56:58.527144-08:00","updated_at":"2025-12-20T00:49:51.929597-08:00","closed_at":"2025-12-19T23:31:11.906952-08:00"} +{"id":"bd-3bsz","title":"gt mail send: support reading message body from stdin","description":"Currently gt mail send -m requires the message as a command-line argument, which causes shell escaping issues with backticks, quotes, and special characters.\n\nAdd support for reading message body from stdin:\n- gt mail send addr -s 'Subject' --stdin # Read body from stdin\n- echo 'body' | gt mail send addr -s 'Subject' -m - # Convention: -m - means stdin\n\nThis would allow:\ncat \u003c\u003c'EOF' | gt mail send addr -s 'Subject' --stdin\nMessage with `backticks` and 'quotes' safely\nEOF\n\nWithout this, agents struggle to send handoff messages containing code snippets.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-23T03:21:39.496208-08:00","updated_at":"2025-12-23T12:19:44.443554-08:00","closed_at":"2025-12-23T12:19:44.443554-08:00"} +{"id":"bd-uutv","title":"Work on beads-rs0: Namepool configuration for themed pole...","description":"Work on beads-rs0: Namepool configuration for themed polecat names. 
See bd show beads-rs0 for full details.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T21:49:48.129778-08:00","updated_at":"2025-12-19T21:59:25.565894-08:00","closed_at":"2025-12-19T21:59:25.565894-08:00"} +{"id":"bd-d4jl","title":"Commit and push release","description":"git add -A \u0026\u0026 git commit -m 'chore: bump version to 0.32.1' \u0026\u0026 git push","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:21.928138-08:00","updated_at":"2025-12-20T21:57:12.81943-08:00","closed_at":"2025-12-20T21:57:12.81943-08:00","dependencies":[{"issue_id":"bd-d4jl","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:21.930015-08:00","created_by":"daemon"},{"issue_id":"bd-d4jl","depends_on_id":"bd-tj00","type":"blocks","created_at":"2025-12-20T21:53:29.884457-08:00","created_by":"daemon"}]} +{"id":"bd-ieyy","title":"bd close --continue: auto-advance to next molecule step","description":"Add --continue flag to bd close for seamless molecule step transitions.\n\n## Usage\n\nbd close \u003cstep-id\u003e --continue [--no-auto]\n\n## Behavior\n\n1. Closes the specified step\n2. Finds next ready step in same molecule (sibling/child)\n3. By default, marks it in_progress (--no-auto to skip)\n4. Outputs the transition\n\n## Output\n\n[done] Closed gt-abc.3: Implement feature\n\nNext ready in molecule:\n gt-abc.4: Write tests\n\n[arrow] Marked in_progress (use --no-auto to skip)\n\n## If no next step\n\n[done] Closed gt-abc.6: Exit decision\n\nMolecule gt-abc complete! 
All steps closed.\nConsider: bd mol squash gt-abc --summary '...'\n\n## Key behaviors\n- Detects parent molecule from closed step\n- Finds next unblocked sibling\n- Auto-claims by default (propulsion principle)\n- Graceful handling when molecule is complete\n\n## Gas Town integration\n- gt-lz13: Update templates with nav workflow\n- gt-um6q: Update docs with nav workflow","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-22T17:03:44.238243-08:00","updated_at":"2025-12-22T17:36:31.937727-08:00","closed_at":"2025-12-22T17:36:31.937727-08:00"} +{"id":"bd-2vh3.4","title":"Tier 3: AI-powered squash summarization","description":"## Design: Agent-Provided Summarization (Inversion of Control)\n\nbd is a tool FOR agents, not an agent itself. The calling agent provides\nthe summary; bd just stores it.\n\n### API\n\n```bash\n# Agent generates summary, passes to bd\nbd mol squash bd-xxx --summary \"Agent-generated summary here\"\n\n# Without --summary, falls back to basic concatenation\nbd mol squash bd-xxx\n```\n\n### Gas Town Integration Pattern\n\n```go\n// In polecat completion handler or witness\nraw := exec.Command(\"bd\", \"mol\", \"show\", molID, \"--json\").Output()\nsummary := callHaiku(buildSummaryPrompt(raw)) // agent's job\nexec.Command(\"bd\", \"mol\", \"squash\", molID, \"--summary\", summary).Run()\n```\n\n### Why This Design\n\n| Concern | bd's job | Agent's job |\n|---------|----------|-------------|\n| Store data | βœ… | |\n| Query data | βœ… | |\n| Generate summaries | | βœ… |\n| Call LLMs | | βœ… |\n| Manage API keys | | βœ… |\n\n### Implementation Status\n\n- [x] --summary flag added to bd mol squash\n- [x] Tests for agent-provided summary\n- [ ] Gas Town integration (separate task)\n\n### Acceptance Criteria\n\n- βœ… bd mol squash --summary uses provided text\n- βœ… Without --summary, falls back to concatenation\n- βœ… No LLM calls in bd 
itself","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T12:58:00.732749-08:00","updated_at":"2025-12-21T14:29:16.288713-08:00","closed_at":"2025-12-21T14:29:16.288713-08:00","dependencies":[{"issue_id":"bd-2vh3.4","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:58:00.733264-08:00","created_by":"stevey"},{"issue_id":"bd-2vh3.4","depends_on_id":"bd-2vh3.3","type":"blocks","created_at":"2025-12-21T12:58:22.698686-08:00","created_by":"stevey"}]} +{"id":"bd-ndye","title":"mergeDependencies uses union instead of 3-way merge","description":"## Critical Bug\n\nThe `mergeDependencies` function in internal/merge/merge.go performs a UNION of left and right dependencies instead of a proper 3-way merge. This causes removed dependencies to be resurrected.\n\n### Root Cause\n\n```go\n// Current code (lines 795-816):\nfunc mergeDependencies(left, right []Dependency) []Dependency {\n // Just unions left + right\n // NEVER REMOVES anything\n // Doesn't even look at base!\n}\n```\n\nAnd `mergeIssue` (line 579) doesn't pass `base`:\n```go\nresult.Dependencies = mergeDependencies(left.Dependencies, right.Dependencies)\n```\n\n### Impact\n\nIf:\n- Base has dependency D\n- Left removes D (intentional)\n- Right still has D (stale)\n\nCurrent: D is in result (resurrection!)\nCorrect: Left removed it, D should NOT be in result\n\nThis breaks Gas Town's workflow and data integrity. Closed means closed.\n\n### Fix\n\nChange `mergeDependencies` to take `base` and do proper 3-way merge:\n- If dep was in base and removed by left β†’ exclude (left wins)\n- If dep was in base and removed by right β†’ exclude (right wins)\n- If dep wasn't in base and added by either β†’ include\n- If dep was in base and both still have it β†’ include\n\nKey principle: **REMOVALS ARE AUTHORITATIVE**\n\n### Files to Change\n\n1. 
internal/merge/merge.go:\n - `mergeDependencies(left, right)` β†’ `mergeDependencies(base, left, right)`\n - `mergeIssue` line 579: pass `base.Dependencies`\n\n### Related\n\nThis also explains why `ProtectLocalExportIDs` in importer is defined but never used - the protection was never actually implemented.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-18T23:15:54.475872-08:00","updated_at":"2025-12-18T23:21:10.709571-08:00","closed_at":"2025-12-18T23:21:10.709571-08:00"} +{"id":"bd-whgv","title":"Merge: bd-401h","description":"branch: polecat/rictus\ntarget: main\nsource_issue: bd-401h\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:20:37.854953-08:00","updated_at":"2025-12-20T23:17:26.999477-08:00","closed_at":"2025-12-20T23:17:26.999477-08:00"} +{"id":"bd-pe4s","title":"JSON test issue","description":"Line 1\nLine 2\nLine 3","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T16:14:36.969074-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-pbh.15","title":"Monitor npm publish","description":"Watch the npm publish action:\nhttps://github.com/steveyegge/beads/actions/workflows/npm-publish.yml\n\nVerify at: https://www.npmjs.com/package/@anthropics/claude-code-beads-plugin/v/0.30.4\n\nCheck:\n```bash\nnpm view @anthropics/claude-code-beads-plugin@0.30.4 version\n```\n\n\n```verify\nnpm view @anthropics/claude-code-beads-plugin@0.30.4 version 2\u003e/dev/null | grep -q 
'0.30.4'\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T21:19:11.091806-08:00","updated_at":"2025-12-17T21:46:46.333213-08:00","closed_at":"2025-12-17T21:46:46.333213-08:00","dependencies":[{"issue_id":"bd-pbh.15","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.092205-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.15","depends_on_id":"bd-pbh.12","type":"blocks","created_at":"2025-12-17T21:19:11.301843-08:00","created_by":"daemon"}]} +{"id":"bd-r6a.3","title":"Create version-bump template as native Beads","description":"Migrate the version-bump workflow from YAML to a native Beads template:\n\n1. Create epic with template label: Release {{version}}\n2. Create child tasks for each step (update version files, changelog, commit, push, publish)\n3. Set up dependencies between tasks\n4. Add verification commands in task descriptions\n\nThis serves as both migration and validation of the new system.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T22:43:40.694931-08:00","updated_at":"2025-12-18T17:42:26.001149-08:00","closed_at":"2025-12-18T13:02:09.039457-08:00","dependencies":[{"issue_id":"bd-r6a.3","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:43:40.695392-08:00","created_by":"daemon"},{"issue_id":"bd-r6a.3","depends_on_id":"bd-r6a.2","type":"blocks","created_at":"2025-12-17T22:44:03.311902-08:00","created_by":"daemon"}]} +{"id":"bd-x5wg","title":"Create and push git tag v0.33.2","description":"Create the release tag and push it:\n\n```bash\ngit tag v0.33.2\ngit push origin v0.33.2\n```\n\nThis triggers the GoReleaser GitHub Action to build release binaries.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.76223-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} 
+{"id":"bd-aydr.9","title":"Add .beads-backup-* pattern to gitignore template","description":"Update the gitignore template in doctor package to include backup directories.\n\n## Change\nAdd `.beads-backup-*/` to the GitignoreTemplate in `cmd/bd/doctor/gitignore.go`\n\n## Why\nBackup directories created by `bd reset --backup` should not be committed to git.\nThey are local-only recovery tools.\n\n## File\n`cmd/bd/doctor/gitignore.go` - look for GitignoreTemplate constant","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-13T08:49:42.453483+11:00","updated_at":"2025-12-13T09:16:44.201889+11:00","closed_at":"2025-12-13T09:16:44.201889+11:00","dependencies":[{"issue_id":"bd-aydr.9","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:49:42.453886+11:00","created_by":"daemon"}]} +{"id":"bd-dju6","title":"Commit and push release","description":"git add -A \u0026\u0026 git commit \u0026\u0026 git push to trigger CI","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.065863-08:00","updated_at":"2025-12-21T13:53:49.957804-08:00","deleted_at":"2025-12-21T13:53:49.957804-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-kwro.9","title":"Cleanup: --ephemeral flag","description":"Update bd cleanup to handle ephemeral issues.\n\nNew flag:\n- bd cleanup --ephemeral - deletes all CLOSED issues with ephemeral=true\n\nBehavior:\n- Only deletes if status=closed AND ephemeral=true\n- Respects --dry-run flag\n- Reports count of deleted ephemeral issues\n\nThis allows swarm cleanup to remove transient messages without affecting permanent issues.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T03:02:28.563871-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-fx7v","title":"Improve 
test coverage for cmd/bd/doctor/fix (23.9% β†’ 50%)","description":"The doctor/fix package has only 23.9% test coverage. The doctor fix functionality is important for troubleshooting.\n\nCurrent coverage: 23.9%\nTarget coverage: 50%","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T20:43:05.67127-08:00","updated_at":"2025-12-23T22:32:34.337963-08:00","closed_at":"2025-12-23T22:32:34.337963-08:00"} +{"id":"bd-cb64c226.9","title":"Remove Cache-Related Tests","description":"Delete or update tests that assume multi-repo caching","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:44.511897-07:00","updated_at":"2025-12-17T23:18:29.110385-08:00","deleted_at":"2025-12-17T23:18:29.110385-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-psg","title":"Add tests for dependency management","description":"Key dependency functions like mergeBidirectionalTrees, GetDependencyTree, and DetectCycles have low or no coverage. These are essential for maintaining data integrity in the dependency graph.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:43.458548462-07:00","updated_at":"2025-12-19T09:54:57.018745301-07:00","closed_at":"2025-12-18T10:24:56.271508339-07:00","dependencies":[{"issue_id":"bd-psg","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:43.463910911-07:00","created_by":"matt"}]} +{"id":"bd-jgxi","title":"Auto-migrate database on CLI version bump","description":"When CLI is upgraded (e.g., 0.24.0 β†’ 0.24.1), database version becomes stale. Add auto-migration in PersistentPreRun or daemon startup. Check dbVersion != CLIVersion and run bd migrate automatically. 
Fixes recurring UX issue where bd doctor shows version mismatch after every CLI upgrade.","status":"closed","priority":0,"issue_type":"feature","created_at":"2025-11-21T23:16:09.004619-08:00","updated_at":"2025-12-17T23:13:40.535453-08:00","closed_at":"2025-12-17T17:15:43.605762-08:00","dependencies":[{"issue_id":"bd-jgxi","depends_on_id":"bd-tbz3","type":"parent-child","created_at":"2025-11-21T23:16:09.005513-08:00","created_by":"daemon"}]} +{"id":"bd-eijl","title":"bd ship command for publishing capabilities","description":"Add `bd ship \u003ccapability\u003e` command that:\n\n1. Finds issue with `export:\u003ccapability\u003e` label\n2. Validates issue is closed (or --force to override)\n3. Adds `provides:\u003ccapability\u003e` label\n4. Protects `provides:*` namespace (only bd ship can add these labels)\n\nExample:\n```bash\nbd ship mol-run-assignee\n# Output: Shipped mol-run-assignee (bd-xyz)\n```\n\nPart of cross-project dependency system.\nSee: gastown/docs/cross-project-deps.md","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-21T22:37:19.123024-08:00","updated_at":"2025-12-21T23:11:47.498859-08:00","closed_at":"2025-12-21T23:11:47.498859-08:00"} {"id":"bd-ziy5","title":"GH#409: bd init uses issues.jsonl but docs say beads.jsonl","description":"bd init creates config referencing issues.jsonl but README/docs reference beads.jsonl as canonical. Standardize naming. See GitHub issue #409.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:58.109954-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} -{"id":"bd-zmmy","title":"bd ready resolves external dependencies","description":"Extend bd ready to check external blocked_by references:\n\n1. Parse external:\u003cproject\u003e:\u003ccapability\u003e from blocked_by\n2. 
Look up project path from external_projects config\n3. Check if target project has provides:\u003ccapability\u003e label on a closed issue\n4. If not satisfied, issue is blocked\n\nExample output:\n```bash\nbd ready\n# gt-xyz: blocked by external:beads:mol-run-assignee (not provided)\n# gt-abc: ready\n```\n\nDepends on: bd-om4a (external: prefix), bd-66w1 (config)\n\nPart of cross-project dependency system.\nSee: gastown/docs/cross-project-deps.md","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-21T22:37:50.03794-08:00","updated_at":"2025-12-21T23:42:25.042402-08:00","closed_at":"2025-12-21T23:42:25.042402-08:00","close_reason":"Implemented external dep resolution in GetReadyWork - filters issues with unsatisfied external:project:capability deps","dependencies":[{"issue_id":"bd-zmmy","depends_on_id":"bd-om4a","type":"blocks","created_at":"2025-12-21T22:38:38.106657-08:00","created_by":"daemon"},{"issue_id":"bd-zmmy","depends_on_id":"bd-66w1","type":"blocks","created_at":"2025-12-21T22:38:38.175633-08:00","created_by":"daemon"}]} -{"id":"bd-zt59","title":"Deferred HOP schema additions (P2/P3)","description":"Deferred from bd-7pwh after review. 
Add when semantics are clearer and actually needed:\n\n- assignee_ref: Structured EntityRef alongside string assignee\n- work_type: 'mutex' vs 'open_competition' (everything is mutex in v0.1)\n- crystallizes: bool for work that compounds vs evaporates (can derive from issue_type)\n- cross_refs: URIs to beads in other repos (needs federation first)\n- skill_vector: []float32 embeddings placeholder (YAGNI)\n\nThese can be added later without breaking changes (all optional fields).","status":"deferred","priority":4,"issue_type":"task","created_at":"2025-12-22T17:54:20.02496-08:00","updated_at":"2025-12-23T12:27:02.445219-08:00"} +{"id":"bd-a62m","title":"Update version to 0.33.2 in version.go","description":"Edit cmd/bd/version.go line 17:\n\n```go\nVersion = \"0.33.2\"\n```\n\nVerify with: `grep 'Version =' cmd/bd/version.go`","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760384-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-rgyd","title":"Split internal/storage/sqlite/queries.go (1586 lines)","description":"Split internal/storage/sqlite/queries.go (1704 lines) into logical modules.\n\n## Current State\nqueries.go is 1704 lines with mixed responsibilities:\n- Issue CRUD operations\n- Search/filter operations\n- Delete operations (complex cascade logic)\n- Helper functions (parsing, formatting)\n\n## Proposed Split\n\n### 1. queries.go (keep ~400 lines) - Core CRUD\n```go\n// Core issue operations\nfunc (s *SQLiteStorage) CreateIssue(...)\nfunc (s *SQLiteStorage) GetIssue(...)\nfunc (s *SQLiteStorage) UpdateIssue(...)\nfunc (s *SQLiteStorage) CloseIssue(...)\n```\n\n### 2. 
queries_search.go (~300 lines) - Search/Filter\n```go\n// Search and filtering\nfunc (s *SQLiteStorage) SearchIssues(...)\nfunc (s *SQLiteStorage) GetIssueByExternalRef(...)\nfunc (s *SQLiteStorage) GetCloseReason(...)\nfunc (s *SQLiteStorage) GetCloseReasonsForIssues(...)\n```\n\n### 3. queries_delete.go (~400 lines) - Delete Operations\n```go\n// Delete operations with cascade logic\nfunc (s *SQLiteStorage) CreateTombstone(...)\nfunc (s *SQLiteStorage) DeleteIssue(...)\nfunc (s *SQLiteStorage) DeleteIssues(...)\nfunc (s *SQLiteStorage) resolveDeleteSet(...)\nfunc (s *SQLiteStorage) expandWithDependents(...)\nfunc (s *SQLiteStorage) validateNoDependents(...)\nfunc (s *SQLiteStorage) checkSingleIssueValidation(...)\nfunc (s *SQLiteStorage) trackOrphanedIssues(...)\nfunc (s *SQLiteStorage) collectOrphansForID(...)\nfunc (s *SQLiteStorage) populateDeleteStats(...)\nfunc (s *SQLiteStorage) executeDelete(...)\nfunc (s *SQLiteStorage) findAllDependentsRecursive(...)\n```\n\n### 4. queries_helpers.go (~100 lines) - Utilities\n```go\n// Helper functions (already at top of file)\nfunc parseNullableTimeString(...)\nfunc parseJSONStringArray(...)\nfunc formatJSONStringArray(...)\n```\n\n### 5. queries_rename.go (~100 lines) - ID/Prefix Operations\n```go\n// ID and prefix management\nfunc (s *SQLiteStorage) UpdateIssueID(...)\nfunc (s *SQLiteStorage) RenameDependencyPrefix(...)\nfunc (s *SQLiteStorage) RenameCounterPrefix(...)\nfunc (s *SQLiteStorage) ResetCounter(...)\n```\n\n## Implementation Steps\n\n1. **Create new files** with package declaration:\n ```go\n // queries_delete.go\n package sqlite\n \n import (...)\n ```\n\n2. **Move functions** - cut/paste, maintaining order within each file\n\n3. **Update imports** - each file needs its own imports\n\n4. **Run tests** after each file split:\n ```bash\n go test ./internal/storage/sqlite/...\n ```\n\n5. 
**Run linter** to catch any issues:\n ```bash\n golangci-lint run ./internal/storage/sqlite/...\n ```\n\n## File Organization\n```\ninternal/storage/sqlite/\nβ”œβ”€β”€ queries.go # Core CRUD (~400 lines)\nβ”œβ”€β”€ queries_search.go # Search/filter (~300 lines)\nβ”œβ”€β”€ queries_delete.go # Delete cascade (~400 lines)\nβ”œβ”€β”€ queries_helpers.go # Utilities (~100 lines)\n└── queries_rename.go # ID operations (~100 lines)\n```\n\n## Success Criteria\n- No file \u003e 500 lines\n- All tests pass\n- No functionality changes\n- Clear separation of concerns","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-16T18:17:23.85869-08:00","updated_at":"2025-12-23T13:40:51.62551-08:00","closed_at":"2025-12-23T13:40:51.62551-08:00","dependencies":[{"issue_id":"bd-rgyd","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.50733-08:00","created_by":"daemon"}]} +{"id":"bd-mql4","title":"getLocalSyncBranch silently ignores YAML parse errors","description":"In autoimport.go:170-172, YAML parsing errors are silently ignored. If a user has malformed YAML in config.yaml, sync-branch will just silently be empty with no feedback.\n\nRecommendation: Add debug logging since this function is only called during auto-import, and debugging silent failures is painful.\n\nAdd: debug.Logf(\"Warning: failed to parse config.yaml: %v\", err)","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-07T02:03:44.217728-08:00","updated_at":"2025-12-07T02:03:44.217728-08:00"} +{"id":"bd-7h7","title":"bd init should stop running daemon to avoid stale cache","description":"When running bd init, any running daemon continues with stale cached data, causing bd stats and other commands to show old counts.\n\nRepro:\n1. Have daemon running with 788 issues cached\n2. Clean JSONL to 128 issues, delete db, run bd init\n3. bd stats still shows 788 (daemon cache)\n4. 
Must manually run bd daemon --stop\n\nFix: bd init should automatically stop any running daemon before reinitializing.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T13:26:47.117226-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-aydr.3","title":"Add git operations for --hard reset","description":"Implement git integration for hard reset mode.\n\n## Operations Needed\n1. `git rm -rf .beads/*.jsonl` - remove data files from index\n2. `git commit -m 'beads: reset to clean state'` - commit removal\n3. After re-init: `git add .beads/` and commit fresh state\n\n## Edge Cases to Handle\n- Uncommitted changes in .beads/ - warn or error\n- Detached HEAD state - warn, maybe block\n- Git not initialized - skip git ops, warn\n- Git operations fail mid-way - clear error messaging\n\n## Interface\n```go\ntype GitState struct {\n IsRepo bool\n IsDirty bool // uncommitted changes in .beads/\n IsDetached bool // detached HEAD\n Branch string // current branch name\n}\n\nfunc CheckGitState(beadsDir string) (*GitState, error)\nfunc GitRemoveBeads(beadsDir string) error\nfunc GitCommitReset(message string) error\nfunc GitAddAndCommit(beadsDir, message string) error\n```\n\n## Location\n`internal/reset/git.go` - keep with reset package for now\n\nNote: Codebase has no central git package. 
internal/compact/git.go is compact-specific.\nFuture refactoring could extract shared git utilities, but YAGNI for now.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:52.798312+11:00","updated_at":"2025-12-13T10:13:32.611131+11:00","closed_at":"2025-12-13T09:17:40.785927+11:00","dependencies":[{"issue_id":"bd-aydr.3","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:52.798715+11:00","created_by":"daemon"}]} +{"id":"bd-qqc.6","title":"Create git tag v{{version}}","description":"Create the release tag:\n\n```bash\ngit tag v{{version}}\n```\n\nVerify: `git tag | grep {{version}}`","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T13:00:15.495086-08:00","updated_at":"2025-12-18T23:34:18.632308-08:00","closed_at":"2025-12-18T22:41:41.874099-08:00","dependencies":[{"issue_id":"bd-qqc.6","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T13:00:15.496036-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.6","depends_on_id":"bd-qqc.5","type":"blocks","created_at":"2025-12-18T13:01:07.478315-08:00","created_by":"stevey"}]} +{"id":"bd-gocx","title":"Run bump-version.sh 0.32.1","description":"Execute ./scripts/bump-version.sh 0.32.1 to update all version references","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:18.470174-08:00","updated_at":"2025-12-20T21:54:54.500836-08:00","closed_at":"2025-12-20T21:54:54.500836-08:00","dependencies":[{"issue_id":"bd-gocx","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:18.471793-08:00","created_by":"daemon"},{"issue_id":"bd-gocx","depends_on_id":"bd-x3j8","type":"blocks","created_at":"2025-12-20T21:53:29.688436-08:00","created_by":"daemon"}]} +{"id":"bd-r6a.4","title":"Add bd template list command","description":"Add a convenience command to list available templates:\n\nbd template list\n\nThis is equivalent to 'bd list --label=template' but more 
discoverable.\nCould also show variable placeholders found in each template.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T22:43:47.525316-08:00","updated_at":"2025-12-17T23:02:45.700582-08:00","closed_at":"2025-12-17T23:02:45.700582-08:00","dependencies":[{"issue_id":"bd-r6a.4","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:43:47.525743-08:00","created_by":"daemon"},{"issue_id":"bd-r6a.4","depends_on_id":"bd-r6a.2","type":"blocks","created_at":"2025-12-17T22:44:03.474353-08:00","created_by":"daemon"}]} +{"id":"bd-kwro.2","title":"Graph Link: replies_to for conversation threading","description":"Implement replies_to link type for message threading.\n\nNew command:\n- bd mail reply \u003cid\u003e -m 'Response' creates a message with replies_to set\n\nQuery support:\n- bd show \u003cid\u003e --thread shows full conversation thread\n- Thread traversal in storage layer\n\nStorage:\n- replies_to column in issues table\n- Index for efficient thread queries\n\nThis enables Reddit-style nested threads where messages reply to other messages.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:01:25.292728-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-23z9","title":"Upgrade beads-mcp to 0.33.2","description":"Upgrade the MCP server via pip:\n\n```bash\npip install --upgrade beads-mcp\npip show beads-mcp | grep Version # Verify 0.33.2\n```\n\nNote: Restart Claude Code or MCP session to use new version.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761057-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-f3ll","title":"Merge: 
bd-ot0w","description":"branch: polecat/dementus\ntarget: main\nsource_issue: bd-ot0w\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:20:33.495772-08:00","updated_at":"2025-12-20T23:17:27.000252-08:00","closed_at":"2025-12-20T23:17:27.000252-08:00"} +{"id":"bd-dyy","title":"Review PR #513: fix hooks install docs","description":"Review and merge PR #513 from aspiers. This PR fixes incorrect docs for how to install git hooks - updates README to use bd hooks install instead of removed install.sh. Simple 1-line change. URL: https://github.com/anthropics/beads/pull/513","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:15:14.838772+11:00","updated_at":"2025-12-13T07:07:19.718544-08:00","closed_at":"2025-12-13T07:07:19.718544-08:00"} +{"id":"bd-7yg","title":"Git merge driver uses invalid placeholders (%L, %R instead of %A, %B)","description":"## Problem\n\nThe beads git merge driver is configured with invalid Git placeholders:\n\n```\ngit config merge.beads.driver \"bd merge %A %O %L %R\"\n```\n\nGit doesn't recognize `%L` or `%R` as valid merge driver placeholders. The valid placeholders are:\n- `%O` = base (common ancestor)\n- `%A` = current version (ours)\n- `%B` = other version (theirs)\n\n## Impact\n\n- Affects ALL users when they have `.beads/beads.jsonl` merge conflicts\n- Automatic JSONL merge fails with error: \"error reading left file: failed to open file: open 7: no such file or directory\"\n- Users must manually resolve conflicts instead of getting automatic merge\n\n## Root Cause\n\nThe `bd init` command (or wherever the merge driver is configured) is using non-standard placeholders. 
When Git encounters `%L` and `%R`, it either passes them literally or interprets them incorrectly.\n\n## Fix\n\nUpdate the merge driver configuration to:\n```\ngit config merge.beads.driver \"bd merge %A %O %A %B\"\n```\n\nWhere:\n- 1st `%A` = output file (current file, will be overwritten)\n- `%O` = base (common ancestor)\n- 2nd `%A` = left/current version\n- `%B` = right/other version\n\n## Action Items\n\n1. Fix `bd init` (or equivalent setup command) to use correct placeholders\n2. Add migration/warning for existing users with misconfigured merge driver\n3. Update documentation with correct merge driver setup\n4. Consider adding validation when `bd init` is run","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-21T19:51:55.747608-05:00","updated_at":"2025-12-17T23:13:40.532368-08:00","closed_at":"2025-12-17T17:24:52.678668-08:00"} +{"id":"bd-i0rx","title":"Merge: bd-ao0s","description":"branch: polecat/rictus\ntarget: main\nsource_issue: bd-ao0s\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-20T01:13:42.716658-08:00","updated_at":"2025-12-20T23:17:26.993744-08:00","closed_at":"2025-12-20T23:17:26.993744-08:00"} +{"id":"bd-xj2e","title":"GH#522: Add --type flag to bd update command","description":"Add --type flag to bd update for changing issue type (task/epic/bug/feature). Storage layer already supports it. See GitHub issue #522.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T01:03:12.506583-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-iz5t","title":"Swarm: 13 beads backlog issues for polecat execution","description":"## Swarm Overview\n\n13 issues prepared for parallel polecat execution. 
All issues have been enhanced with concrete implementation guidance, file lists, and success criteria.\n\n## Issue List\n\n### Bug (1) - HIGH PRIORITY\n| ID | Priority | Title |\n|----|----------|-------|\n| bd-phtv | P1 | Pinned field overwritten by subsequent commands |\n\n### Test Coverage (3)\n| ID | Package | Target |\n|----|---------|--------|\n| bd-io8c | internal/syncbranch | 27% β†’ 70% |\n| bd-thgk | internal/compact | 17% β†’ 70% |\n| bd-tvu3 | internal/beads | 48% β†’ 70% |\n\n### Code Quality (3)\n| ID | Task |\n|----|------|\n| bd-qioh | FatalError pattern standardization |\n| bd-rgyd | Split queries.go (1704 lines β†’ 5 files) |\n| bd-u2sc.3 | Split cmd/bd files (sync/init/show/compact) |\n\n### Features (4)\n| ID | Task |\n|----|------|\n| bd-au0.5 | Search date/priority filters |\n| bd-ykd9 | Doctor --fix auto-repair |\n| bd-g4b4 | Close hooks system |\n| bd-likt | Gate daemon RPC |\n\n### Polish (2)\n| ID | Task |\n|----|------|\n| bd-4qfb | Doctor output formatting |\n| bd-u2sc.4 | slog structured logging |\n\n## Issue Details\n\nAll issues have been enhanced with:\n- Concrete file lists to modify\n- Code snippets and patterns\n- Success criteria\n- Test commands\n\nRun `bd show \u003cid\u003e` for full details on any issue.\n\n## Execution Notes\n\n- All issues are independent (no blockers between them)\n- bd-phtv (P1 bug) should get priority - affects bd pin functionality\n- Test coverage tasks are straightforward but time-consuming\n- File split tasks (bd-rgyd, bd-u2sc.3) are mechanical but important\n\n## Completed During Prep\n\n- bd-ucgz (P2 bug) - Fixed inline: external deps orphan check (commit f2db0a1d)\n- Moved 5 gastown issues out of beads backlog (gt-dh65, gt-ng6g, gt-fqcz, gt-gswn, gt-rw2z)\n- Deferred 4 premature/post-1.0 issues\n- Closed bd-udsi epic (core implementation 
complete)","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-23T12:43:58.427835-08:00","updated_at":"2025-12-23T20:26:50.629471-08:00","closed_at":"2025-12-23T20:26:50.629471-08:00"} +{"id":"bd-de6","title":"Fix FindBeadsDir to prioritize main repo .beads for worktrees","description":"The FindBeadsDir function should prioritize finding .beads in the main repository root when accessed from a worktree, rather than finding worktree-local .beads directories. This ensures proper sharing of the database across all worktrees.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-07T16:48:36.883117467-07:00","updated_at":"2025-12-23T22:33:23.795459-08:00","closed_at":"2025-12-23T22:33:23.795459-08:00"} +{"id":"bd-c2xs","title":"Exclude pinned issues from bd blocked","description":"Update bd blocked to exclude pinned issues. Pinned issues are context markers and should not appear in the blocked work list.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:44.684242-08:00","updated_at":"2025-12-21T11:29:42.179389-08:00","closed_at":"2025-12-21T11:29:42.179389-08:00","dependencies":[{"issue_id":"bd-c2xs","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.521323-08:00","created_by":"daemon"},{"issue_id":"bd-c2xs","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.736681-08:00","created_by":"daemon"}]} +{"id":"bd-drxs","title":"Make merge requests ephemeral wisps instead of permanent issues","description":"## Problem\n\nMerge requests (MRs) are currently created as regular beads issues (type: merge-request). This means they:\n- Sync to JSONL and propagate via git\n- Accumulate in the issue database indefinitely\n- Clutter `bd list` output with closed MRs\n- Create permanent records for inherently transient artifacts\n\nMRs are process artifacts, not work products. They exist briefly while code awaits merge, then their purpose is fulfilled. 
The git merge commit and GitHub PR (if applicable) provide the permanent audit trail - the beads MR is redundant.\n\n## Proposed Solution\n\nMake MRs ephemeral wisps that exist only during the merge process:\n\n1. **Create MRs as wisps**: When a polecat completes work and requests merge, create the MR in `.beads-wisp/` instead of `.beads/`\n\n2. **Refinery visibility**: This works because all clones within a rig share the same database:\n ```\n beads/ ← Rig root\n β”œβ”€β”€ .beads/ ← Permanent issues (synced to JSONL)\n β”œβ”€β”€ .beads-wisp/ ← Ephemeral wisps (NOT synced)\n β”œβ”€β”€ crew/dave/ ← Uses rig's shared DB\n β”œβ”€β”€ polecats/*/ ← Uses rig's shared DB\n └── refinery/ ← Uses rig's shared DB\n ```\n The refinery can see wisp MRs immediately - same SQLite database.\n\n3. **On merge completion**: Burn the wisp (delete without digest). The git merge commit IS the permanent record. No digest needed since:\n - Digest wouldn't be smaller than the MR itself (~200-300 bytes either way)\n - Git history provides complete audit trail\n - GitHub PR (if used) provides discussion/approval record\n\n4. **On merge rejection/abandonment**: Burn the wisp. Optionally notify the source polecat via mail.\n\n## Benefits\n\n- **Clean JSONL**: MRs never pollute the permanent issue history\n- **No accumulation**: Wisps are burned on completion, no cleanup needed\n- **Correct semantics**: Wisps are for \"operational ephemera\" - MRs fit perfectly\n- **Reduced sync churn**: Fewer JSONL updates, faster `bd sync`\n- **Cleaner queries**: `bd list` shows work items, not process artifacts\n\n## Implementation Notes\n\n### Where MRs are created\n\nCurrently MRs are created by the witness or polecat when work is ready for merge. This code needs to:\n- Set `wisp: true` on the MR issue\n- Or use a dedicated wisp creation path\n\n### Refinery changes\n\nThe refinery queries for pending MRs to process. 
It needs to:\n- Query wisp storage as well as (or instead of) permanent storage\n- Use `bd mol burn` or equivalent to delete processed MRs\n\n### What about cross-rig MRs?\n\nIf an MR needs to be visible outside the rig (e.g., external collaborators):\n- They would see the GitHub PR anyway\n- Or we could create a permanent \"merge completed\" notification issue\n- But this is likely unnecessary - MRs are internal coordination\n\n### Migration\n\nExisting MRs in permanent storage:\n- Can be cleaned up with `bd cleanup` or manual deletion\n- Or left to age out naturally\n- No migration of open MRs needed (they'll complete under old system\n\n## Alternatives Considered\n\n1. **Auto-cleanup of closed MRs**: Keep MRs as permanent issues but auto-delete after 24h. Simpler but still creates sync churn and temporary JSONL pollution.\n\n2. **MRs as mail only**: Polecat sends mail to refinery with merge details, no MR issue at all. Loses queryability (bd-801b [P2] [merge-request] closed - Merge: bd-bqcc\nbd-pvu0 [P2] [merge-request] closed - Merge: bd-4opy\nbd-i0rx [P2] [merge-request] closed - Merge: bd-ao0s\nbd-u0sb [P2] [merge-request] closed - Merge: bd-uqfn\nbd-8e0q [P2] [merge-request] closed - Merge: beads-ocs\nbd-hvng [P2] [merge-request] closed - Merge: bd-w193\nbd-4sfl [P2] [merge-request] closed - Merge: bd-14ie\nbd-sumr [P2] [merge-request] closed - Merge: bd-t4sb\nbd-3x9o [P2] [merge-request] closed - Merge: bd-by0d\nbd-whgv [P2] [merge-request] closed - Merge: bd-401h\nbd-f3ll [P2] [merge-request] closed - Merge: bd-ot0w\nbd-fmdy [P3] [merge-request] closed - Merge: bd-kzda).\n\n3. **Separate merge queue**: Refinery maintains internal state for pending merges, not in beads at all. 
Clean but requires new infrastructure.\n\nWisps are the cleanest solution - they already exist, have the right semantics, and require minimal changes.\n\n## Related\n\n- Wisp architecture: \n- Current MR creation: witness/refinery code paths\n- bd-pvu0, bd-801b: Example MRs currently in permanent storage\nEOF\n)","status":"tombstone","priority":0,"issue_type":"feature","created_at":"2025-12-23T01:39:25.4918-08:00","updated_at":"2025-12-23T01:58:23.550668-08:00","deleted_at":"2025-12-23T01:58:23.550668-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"feature"} +{"id":"bd-qqc.12","title":"Restart daemon with {{version}}","description":"Restart the bd daemon to pick up new version:\n\n```bash\nbd daemon --stop\nbd daemon --start\nbd daemon --health # Verify Version: {{version}}\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:08:04.155448-08:00","updated_at":"2025-12-18T23:09:05.777375-08:00","closed_at":"2025-12-18T23:09:05.777375-08:00","dependencies":[{"issue_id":"bd-qqc.12","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T23:08:04.155832-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.12","depends_on_id":"bd-qqc.11","type":"blocks","created_at":"2025-12-18T23:08:19.779897-08:00","created_by":"daemon"}]} +{"id":"bd-hhv3","title":"Test and document molecular chemistry commands","description":"## Context\n\nImplemented the molecular chemistry UX commands per the design docs:\n- gastown/mayor/rig/docs/molecular-chemistry.md\n- gastown/mayor/rig/docs/chemistry-design-changes.md\n\nCommit: cadf798b\n\n## New Commands to Test\n\n| Command | Purpose |\n|---------|---------|\n| `bd pour \u003cproto\u003e` | Instantiate proto as persistent mol |\n| `bd wisp create \u003cproto\u003e` | Instantiate proto as ephemeral wisp |\n| `bd hook [--agent]` | Inspect what's on an agent's hook |\n\n## Enhanced Commands to Test\n\n| Command | Changes |\n|---------|---------|\n| `bd mol spawn --pour` | New 
flag, `--persistent` deprecated |\n| `bd mol bond --pour` | Force liquid phase on wisp target |\n| `bd pin --for \u003cagent\u003e --start` | Chemistry workflow support |\n\n## Test Scenarios\n\n1. **bd pour**: Create persistent mol from a proto\n - Verify creates in .beads/ (not .beads-wisp/)\n - Verify variable substitution works\n - Verify --dry-run works\n\n2. **bd wisp create**: Create ephemeral wisp from proto\n - Verify creates in .beads-wisp/\n - Verify bd wisp list shows it\n - Verify bd mol squash works\n - Verify bd mol burn works\n\n3. **bd hook**: Inspect pinned work\n - Pin something, verify bd hook shows it\n - Test --agent flag\n - Test --json output\n\n4. **bd pin --for**: Assign work to agent\n - Verify sets pinned=true\n - Verify sets assignee\n - Verify --start sets status=in_progress\n\n5. **bd mol bond --pour**: Force liquid on wisp target\n - Bond a proto to a wisp with --pour\n - Verify spawned issues are in .beads/\n\n## Documentation\n\n- Update CLAUDE.md with new commands\n- Add examples to --help output (already done)\n- Consider adding to docs/CLI_REFERENCE.md\n\n## Code Review\n\n- Check for edge cases\n- Verify error messages are helpful\n- Ensure --json output is consistent","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T02:22:10.906646-08:00","updated_at":"2025-12-22T02:55:37.983703-08:00","closed_at":"2025-12-22T02:55:37.983703-08:00"} +{"id":"bd-313v","title":"rpc: Rich mutation events not emitted","description":"The activity command (activity.go) references rich mutation event types (MutationBonded, MutationSquashed, MutationBurned, MutationStatus) that include metadata like OldStatus, NewStatus, ParentID, and StepCount.\n\nHowever, the emitMutation() function in server_core.go:141 only accepts (eventType, issueID) and only populates Type, IssueID, and Timestamp. The additional metadata fields are never set.\n\nNeed to either:\n1. 
Add an emitRichMutation() function that accepts the additional metadata\n2. Update call sites (close, bond, squash, burn operations) to emit rich events\n\nWithout this fix, the activity feed will never show:\n- Status transitions (in_progress -\u003e closed)\n- Bonded events with step counts\n- Parent molecule relationships\n\nDiscovered during code review of bd-xo1o implementation.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-23T04:06:17.39523-08:00","updated_at":"2025-12-23T04:13:19.205249-08:00","closed_at":"2025-12-23T04:13:19.205249-08:00"} +{"id":"bd-d3e5","title":"Test issue 2","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-14T11:21:13.878680387-07:00","updated_at":"2025-12-14T11:21:13.878680387-07:00","closed_at":"2025-12-14T00:32:13.890274-08:00"} +{"id":"bd-h8ym","title":"Wait for CI to pass","description":"Monitor GitHub Actions - all checks must pass before release artifacts are built","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066792-08:00","updated_at":"2025-12-21T13:53:49.454536-08:00","deleted_at":"2025-12-21T13:53:49.454536-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-nmch","title":"Add EntityRef type for structured entity references","description":"Create EntityRef struct with Name, Platform, Org, ID fields. This is the foundation for HOP entity tracking. Can render as entity://hop/\u003cplatform\u003e/\u003corg\u003e/\u003cid\u003e when needed. 
Add to internal/types/types.go.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T17:53:25.104328-08:00","updated_at":"2025-12-22T17:58:00.014103-08:00","closed_at":"2025-12-22T17:58:00.014103-08:00","dependencies":[{"issue_id":"bd-nmch","depends_on_id":"bd-7pwh","type":"parent-child","created_at":"2025-12-22T17:53:43.325405-08:00","created_by":"daemon"}]} +{"id":"bd-rupw","title":"Run bump-version.sh 0.30.7","description":"Run ./scripts/bump-version.sh 0.30.7 to update version in all files","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:48.649647-08:00","updated_at":"2025-12-19T22:57:31.512956-08:00","closed_at":"2025-12-19T22:57:31.512956-08:00","dependencies":[{"issue_id":"bd-rupw","depends_on_id":"bd-8pyn","type":"parent-child","created_at":"2025-12-19T22:56:48.653475-08:00","created_by":"stevey"}]} +{"id":"bd-cbed9619.3","title":"Implement global N-way collision resolution algorithm","description":"**Summary:** Replaced pairwise collision resolution with a global N-way algorithm that deterministically resolves issue ID conflicts across multiple clones. 
The new approach groups collisions, deduplicates by content hash, and assigns sequential IDs to ensure consistent synchronization.\n\n**Key Decisions:**\n- Use content hash for global, stable sorting\n- Group collisions by base ID\n- Assign sequential IDs based on sorted unique versions\n- Eliminate order-dependent remapping logic\n\n**Resolution:** Implemented ResolveNWayCollisions function that guarantees deterministic issue ID assignment across multiple synchronization scenarios, solving the core challenge of maintaining consistency in distributed systems with potential conflicts.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T18:37:42.85616-07:00","updated_at":"2025-12-17T23:18:29.112335-08:00","dependencies":[{"issue_id":"bd-cbed9619.3","depends_on_id":"bd-cbed9619.5","type":"blocks","created_at":"2025-10-28T18:39:28.30886-07:00","created_by":"daemon"},{"issue_id":"bd-cbed9619.3","depends_on_id":"bd-cbed9619.4","type":"blocks","created_at":"2025-10-28T18:39:28.336312-07:00","created_by":"daemon"}],"deleted_at":"2025-12-17T23:18:29.112335-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-2wh","title":"Test pinned for stats","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-18T21:47:09.334108-08:00","updated_at":"2025-12-18T21:47:25.17917-08:00","deleted_at":"2025-12-18T21:47:25.17917-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-06px","title":"bd sync --from-main fails: unknown flag --no-git-history","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-17T14:32:02.998106-08:00","updated_at":"2025-12-17T23:13:40.531756-08:00","closed_at":"2025-12-17T17:21:48.506039-08:00"} +{"id":"bd-ldb0","title":"Rename ephemeral β†’ wisp throughout codebase","description":"## The Change\n\nRename 'ephemeral' to 'wisp' throughout the beads codebase.\n\n## Why\n\n**Ephemeral** is:\n- 4 syllables (too long)\n- 
Greek/academic (doesn't match bond/burn/squash)\n- Overused in tech (K8s, networking, storage)\n- Passive/descriptive\n\n**Wisp** is:\n- 1 syllable (matches bond/burn/squash)\n- Evocative - you can SEE a wisp\n- Steam engine metaphor - Gas Town is engines, steam wisps rise and dissipate\n- Will-o'-the-wisp - transient spirits that guide then vanish\n- Unique - nobody else uses it\n\n## The Steam Engine Metaphor\n\n```\nEngine does work β†’ generates steam\nSteam wisps rise β†’ execution trace\nSteam condenses β†’ digest (distillate)\nSteam dissipates β†’ cleaned up (burned)\n```\n\n## Full Vocabulary\n\n| Term | Meaning |\n|------|---------|\n| bond | Attach proto to work (creates wisps) |\n| wisp | Temporary execution step |\n| squash | Condense wisps into digest |\n| burn | Destroy wisps without record |\n| digest | Permanent condensed record |\n\n## Changes Required\n\n### Code\n- `Ephemeral bool` β†’ `Wisp bool` in types/issue.go\n- `--ephemeral` flag β†’ remove (wisp is default)\n- `--persistent` flag β†’ keep as opt-out\n- `bd cleanup --ephemeral` β†’ `bd cleanup --wisps`\n- Update all references in mol_*.go files\n\n### Docs\n- Update all documentation\n- Update CLAUDE.md examples\n- Update CLI help text\n\n### Database Migration\n- Add migration to rename field (or keep internal name, just change API)\n\n## Example Usage After\n\n```bash\nbd mol bond mol-polecat-work # Creates wisps (default)\nbd mol bond mol-xxx --persistent # Creates permanent issues\nbd mol squash bd-xxx # Condenses wisps β†’ digest\nbd cleanup --wisps # Clean old wisps\nbd list --wisps # Show wisp issues\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T14:44:41.576068-08:00","updated_at":"2025-12-22T00:32:31.153738-08:00","closed_at":"2025-12-22T00:32:31.153738-08:00"} +{"id":"bd-bwk2","title":"Centralize error handling patterns in storage layer","description":"80+ instances of inconsistent error handling across sqlite.go with mix of %w, %v, and no 
wrapping.\n\nLocation: internal/storage/sqlite/sqlite.go (throughout)\n\nProblem:\n- Some use fmt.Errorf(\"op failed: %w\", err) - correct wrapping\n- Some use fmt.Errorf(\"op failed: %v\", err) - loses error chain\n- Some return err directly - no context\n- Hard to debug production issues\n- Can't distinguish error types\n\nSolution: Create internal/storage/sqlite/errors.go:\n- Define sentinel errors (ErrNotFound, ErrInvalidID, etc.)\n- Create wrapDBError(op string, err error) helper\n- Convert sql.ErrNoRows to ErrNotFound\n- Always wrap with operation context\n\nImpact: Lost error context; inconsistent messages; hard to debug\n\nEffort: 5-7 hours","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-16T14:51:54.974909-08:00","updated_at":"2025-12-21T21:44:37.237175-08:00","closed_at":"2025-12-21T21:44:37.237175-08:00"} +{"id":"bd-g4b4","title":"bd close hooks: context check and notifications","description":"Add hook system to bd close for notifications and custom actions.\n\n## Scope (MVP)\n\nImplement **command hooks only** for bd close. Deferred: notify, webhook types.\n\n## Implementation\n\n### 1. Config Schema\n\nAdd to internal/configfile/config.go:\n\n```go\ntype HooksConfig struct {\n OnClose []HookEntry `yaml:\"on_close,omitempty\"`\n}\n\ntype HookEntry struct {\n Command string `yaml:\"command\"` // Shell command to run\n Name string `yaml:\"name,omitempty\"` // Optional display name\n}\n```\n\nAdd `Hooks HooksConfig` field to Config struct.\n\n### 2. 
Hook Execution\n\nCreate internal/hooks/close_hooks.go:\n\n```go\nfunc RunCloseHooks(ctx context.Context, cfg *configfile.Config, issue *types.Issue) error {\n for _, hook := range cfg.Hooks.OnClose {\n cmd := exec.CommandContext(ctx, \"sh\", \"-c\", hook.Command)\n cmd.Env = append(os.Environ(),\n \"BEAD_ID=\"+issue.ID,\n \"BEAD_TITLE=\"+issue.Title,\n \"BEAD_TYPE=\"+string(issue.IssueType),\n \"BEAD_PRIORITY=\"+strconv.Itoa(issue.Priority),\n \"BEAD_CLOSE_REASON=\"+issue.CloseReason,\n )\n cmd.Stdout = os.Stdout\n cmd.Stderr = os.Stderr\n if err := cmd.Run(); err \\!= nil {\n // Log warning but dont fail the close\n fmt.Fprintf(os.Stderr, \"Warning: close hook %q failed: %v\\n\", hook.Name, err)\n }\n }\n return nil\n}\n```\n\n### 3. Integration Point\n\nIn cmd/bd/close.go, after successful close:\n\n```go\n// Run close hooks\nif cfg := configfile.Load(); cfg \\!= nil {\n hooks.RunCloseHooks(ctx, cfg, closedIssue)\n}\n```\n\n### 4. Example Config\n\n```yaml\n# .beads/config.yaml\nhooks:\n on_close:\n - name: show-next\n command: bd ready --limit 1\n - name: context-check \n command: echo \"Issue $BEAD_ID closed. Check context if nearing limit.\"\n```\n\n## Environment Variables\n\n| Variable | Description |\n|----------|-------------|\n| BEAD_ID | Issue ID (e.g., bd-abc1) |\n| BEAD_TITLE | Issue title |\n| BEAD_TYPE | Issue type (task, bug, feature, etc.) |\n| BEAD_PRIORITY | Priority (0-4) |\n| BEAD_CLOSE_REASON | Close reason if provided |\n\n## Testing\n\nAdd test in internal/hooks/close_hooks_test.go:\n- Test hook execution with mock config\n- Test env vars are set correctly\n- Test hook failure doesnt block close\n\n## Files to Create/Modify\n\n1. **Create:** internal/hooks/close_hooks.go\n2. **Create:** internal/hooks/close_hooks_test.go \n3. **Modify:** internal/configfile/config.go (add HooksConfig)\n4. **Modify:** cmd/bd/close.go (call RunCloseHooks)\n5. 
**Modify:** docs/CONFIG.md (document hooks config)\n\n## Out of Scope (Future)\n\n- notify hook type (gt mail integration)\n- webhook type (HTTP POST)\n- on_create, on_update hooks\n- Hook timeout configuration\n- Parallel hook execution","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-12-22T17:03:56.183461-08:00","updated_at":"2025-12-23T13:38:15.898746-08:00","closed_at":"2025-12-23T13:38:15.898746-08:00","dependencies":[{"issue_id":"bd-g4b4","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.811793-08:00","created_by":"daemon"}]} +{"id":"bd-u2sc.2","title":"Migrate sort.Slice to slices.SortFunc","description":"Go 1.21+ provides slices.SortFunc which is cleaner and slightly faster than sort.Slice.\n\nFound 15+ instances of sort.Slice in:\n- cmd/bd/autoflush.go\n- cmd/bd/count.go\n- cmd/bd/daemon_sync.go\n- cmd/bd/doctor.go\n- cmd/bd/export.go\n- cmd/bd/import.go\n- cmd/bd/integrity.go\n- cmd/bd/jira.go\n- cmd/bd/list.go\n- cmd/bd/migrate_hash_ids.go\n- cmd/bd/rename_prefix.go\n- cmd/bd/show.go\n\nExample migration:\n```go\n// Before\nsort.Slice(issues, func(i, j int) bool {\n return issues[i].Priority \u003c issues[j].Priority\n})\n\n// After\nslices.SortFunc(issues, func(a, b *types.Issue) int {\n return cmp.Compare(a.Priority, b.Priority)\n})\n```\n\nBenefits:\n- Cleaner 3-way comparison\n- Slightly better performance\n- Modern idiomatic Go","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T14:26:55.573524-08:00","updated_at":"2025-12-22T15:10:12.639807-08:00","closed_at":"2025-12-22T15:10:12.639807-08:00","dependencies":[{"issue_id":"bd-u2sc.2","depends_on_id":"bd-u2sc","type":"parent-child","created_at":"2025-12-22T14:26:55.573978-08:00","created_by":"daemon"}]} +{"id":"bd-pbh.13","title":"Monitor GoReleaser CI job","description":"Watch the GoReleaser action:\nhttps://github.com/steveyegge/beads/actions/workflows/release.yml\n\nShould complete in ~10 minutes and create:\n- 
GitHub Release with binaries for all platforms\n- Checksums and signatures\n\nCheck status:\n```bash\ngh run list --workflow=release.yml -L 1\ngh run watch # to monitor live\n```\n\nVerify release exists:\n```bash\ngh release view v0.30.4\n```\n\n\n```verify\ngh release view v0.30.4 --json tagName -q .tagName | grep -q 'v0.30.4'\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T21:19:11.074476-08:00","updated_at":"2025-12-17T21:46:46.311506-08:00","closed_at":"2025-12-17T21:46:46.311506-08:00","dependencies":[{"issue_id":"bd-pbh.13","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.074833-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.13","depends_on_id":"bd-pbh.12","type":"blocks","created_at":"2025-12-17T21:19:11.279092-08:00","created_by":"daemon"}]} +{"id":"bd-tvu3","title":"Improve test coverage for internal/beads (48.1% β†’ 70%)","description":"Improve test coverage for internal/beads package from 48% to 70%.\n\n## Current State\n- Coverage: 48.4%\n- Files: beads.go, fingerprint.go\n- Tests: beads_test.go (moderate coverage)\n\n## Functions Needing Tests\n\n### beads.go (database discovery)\n- [ ] followRedirect - needs redirect file tests\n- [ ] findDatabaseInBeadsDir - needs various dir structures\n- [x] NewSQLiteStorage - likely covered\n- [ ] FindDatabasePath - needs BEADS_DB env var tests\n- [ ] hasBeadsProjectFiles - needs file existence tests\n- [ ] FindBeadsDir - needs directory traversal tests\n- [ ] FindJSONLPath - needs path derivation tests\n- [ ] findGitRoot - needs git repo tests\n- [ ] findDatabaseInTree - needs nested directory tests\n- [ ] FindAllDatabases - needs multi-database tests\n- [ ] FindWispDir - needs wisp directory tests\n- [ ] FindWispDatabasePath - needs wisp path tests\n- [ ] NewWispStorage - needs wisp storage tests\n- [ ] EnsureWispGitignore - needs gitignore creation tests\n- [ ] IsWispDatabase - needs path classification tests\n\n### fingerprint.go (repo 
identification)\n- [ ] ComputeRepoID - needs various remote URL tests\n- [ ] canonicalizeGitURL - needs URL normalization tests\n- [ ] GetCloneID - needs clone identification tests\n\n## Implementation Guide\n\n1. **Use temp directories:**\n ```go\n func TestFindBeadsDir(t *testing.T) {\n tmpDir := t.TempDir()\n beadsDir := filepath.Join(tmpDir, \".beads\")\n os.MkdirAll(beadsDir, 0755)\n \n // Create test files\n os.WriteFile(filepath.Join(beadsDir, \"beads.db\"), []byte{}, 0644)\n \n // Change to tmpDir and test\n oldWd, _ := os.Getwd()\n os.Chdir(tmpDir)\n defer os.Chdir(oldWd)\n \n result := FindBeadsDir()\n assert.Equal(t, beadsDir, result)\n }\n ```\n\n2. **Test scenarios:**\n - BEADS_DB environment variable set\n - .beads/ in current directory\n - .beads/ in parent directory\n - Redirect file pointing elsewhere\n - No beads directory found\n - Wisp directory alongside main beads\n\n3. **Git remote URL tests:**\n ```go\n tests := []struct{\n input string\n expected string\n }{\n {\"git@github.com:user/repo.git\", \"github.com/user/repo\"},\n {\"https://github.com/user/repo\", \"github.com/user/repo\"},\n {\"ssh://git@github.com/user/repo.git\", \"github.com/user/repo\"},\n }\n ```\n\n## Success Criteria\n- Coverage β‰₯ 70%\n- All FindXxx functions have tests\n- Environment variable handling tested\n- Edge cases (missing dirs, redirects) covered\n\n## Run Tests\n```bash\ngo test -v -cover ./internal/beads\ngo test -race ./internal/beads\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-13T20:42:59.739142-08:00","updated_at":"2025-12-23T13:36:17.885237-08:00","closed_at":"2025-12-23T13:36:17.885237-08:00","dependencies":[{"issue_id":"bd-tvu3","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.362967-08:00","created_by":"daemon"}]} +{"id":"bd-yqhh","title":"bd list --parent: filter by parent issue","description":"Add --parent flag to bd list to filter issues by parent.\n\nExample:\n```bash\nbd list 
--parent=gt-h5n --status=open\n```\n\nWould show all open children of gt-h5n.\n\nUseful for:\n- Checking epic progress\n- Finding swarmable work within an epic\n- Molecule step listing","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-23T01:51:26.830952-08:00","updated_at":"2025-12-23T02:10:12.909803-08:00","closed_at":"2025-12-23T02:10:12.909803-08:00"} +{"id":"bd-s1pz","title":"Merge: bd-u2sc.4","description":"branch: polecat/Logger\ntarget: main\nsource_issue: bd-u2sc.4\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T13:45:52.412757-08:00","updated_at":"2025-12-23T19:12:08.356689-08:00","closed_at":"2025-12-23T19:12:08.356689-08:00"} +{"id":"bd-kwjh.5","title":"bd wisp list command","description":"Add bd wisp list command to show ephemeral molecules.\n\n## Usage\n```bash\nbd wisp list # List all wisps in current context\nbd wisp list --json # JSON output\nbd wisp list --all # Include orphaned wisps\n```\n\n## Output\n- Shows in-progress ephemeral molecules\n- Columns: ID, Title, Started, Last Update, Status\n- Warns about orphaned wisps (old updated_at)\n\n## Implementation\n- New 'wisp' command group\n- Read from .beads-ephemeral/issues.jsonl\n- Filter to ephemeral:true issues","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T00:07:29.514936-08:00","updated_at":"2025-12-22T01:09:03.514376-08:00","closed_at":"2025-12-22T01:09:03.514376-08:00","dependencies":[{"issue_id":"bd-kwjh.5","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:29.515301-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.5","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:29.516134-08:00","created_by":"daemon"}]} +{"id":"bd-bha9","title":"Add missing updated_at index on issues table","description":"GetStaleIssues queries filter by updated_at but there's no index on this column.\n\n**Current query (ready.go:253-254):**\n```sql\nWHERE 
status != 'closed'\n AND datetime(updated_at) \u003c datetime('now', '-' || ? || ' days')\n```\n\n**Problem:** Full table scan when filtering stale issues.\n\n**Solution:** Add migration to create:\n```sql\nCREATE INDEX IF NOT EXISTS idx_issues_updated_at ON issues(updated_at);\n```\n\n**Note:** The datetime() function wrapper may prevent index usage. Consider also storing updated_at as INTEGER (unix timestamp) for better index efficiency, or test if SQLite can use the index despite the function.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T22:58:49.166051-08:00","updated_at":"2025-12-22T23:15:13.837078-08:00","closed_at":"2025-12-22T23:15:13.837078-08:00","dependencies":[{"issue_id":"bd-bha9","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:49.166949-08:00","created_by":"daemon"}]} +{"id":"bd-pbh","title":"Release v0.30.4","description":"## Version Bump Workflow\n\nCoordinating release from 0.30.3 to 0.30.4.\n\n### Components Updated\n- Go CLI (cmd/bd/version.go)\n- Claude Plugin (.claude-plugin/*.json)\n- MCP Server (integrations/beads-mcp/)\n- npm Package (npm-package/package.json)\n- Git hooks (cmd/bd/templates/hooks/)\n\n### Release Channels\n- GitHub Releases (GoReleaser)\n- PyPI (beads-mcp)\n- npm (@beads/cli)\n- Homebrew (homebrew-beads tap)\n","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-17T21:19:10.926133-08:00","updated_at":"2025-12-17T21:46:46.192948-08:00","closed_at":"2025-12-17T21:46:46.192948-08:00"} {"id":"bd-zw72","title":"Investigate incremental blocked_issues_cache updates at scale","description":"Current blocked_issues_cache strategy does full DELETE + INSERT on every dependency/status change.\n\n**Problem at scale:**\n- 10K issues: ~50ms rebuild (acceptable)\n- 100K issues: ~500ms rebuild (noticeable)\n- 1M issues: multi-second rebuilds (problematic)\n\n**Current implementation (blocked_cache.go:104-154):**\n- DELETE FROM blocked_issues_cache\n- INSERT 
with recursive CTE\n\n**Potential optimizations:**\n1. **Incremental updates:** Only add/remove affected issue IDs instead of full rebuild\n2. **Dirty tracking:** Skip rebuild if cache is already valid\n3. **Async rebuild:** Rebuild in background, serve stale cache briefly\n4. **Partial invalidation:** Only invalidate affected subtree\n\n**Decision needed:** Is this premature optimization? Current target is \u003c100K issues.\n\n**Benchmark:** Add benchmark for cache rebuild at 100K scale to measure actual impact.","status":"deferred","priority":3,"issue_type":"task","created_at":"2025-12-22T22:58:55.165718-08:00","updated_at":"2025-12-23T12:27:02.297059-08:00","dependencies":[{"issue_id":"bd-zw72","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:55.166427-08:00","created_by":"daemon"}]} -{"id":"bd-zwtq","title":"Run bd doctor at end of bd init to verify setup","description":"Run bd doctor diagnostics at end of bd init (after line 398 in init.go). If issues found, warn user immediately: '⚠ Setup incomplete. Run bd doctor --fix to complete setup.' 
Catches configuration problems before user encounters them in normal workflow.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-11-21T23:16:09.596778-08:00","updated_at":"2025-12-23T04:20:51.887338-08:00","closed_at":"2025-12-23T04:20:51.887338-08:00","close_reason":"Already implemented in commits ec4117d0 and 3a36d0b9 - hooks/merge driver install by default, doctor runs at end of init","dependencies":[{"issue_id":"bd-zwtq","depends_on_id":"bd-tbz3","type":"parent-child","created_at":"2025-11-21T23:16:09.597617-08:00","created_by":"daemon","metadata":"{}"}]} +{"id":"bd-pbh.5","title":"Update .claude-plugin/marketplace.json to 0.30.4","description":"Update version field in .claude-plugin/marketplace.json:\n```json\n\"version\": \"0.30.4\"\n```\n\n\n```verify\njq -e '.plugins[0].version == \"0.30.4\"' .claude-plugin/marketplace.json\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.985619-08:00","updated_at":"2025-12-17T21:46:46.239122-08:00","closed_at":"2025-12-17T21:46:46.239122-08:00","dependencies":[{"issue_id":"bd-pbh.5","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.985942-08:00","created_by":"daemon"}]} +{"id":"bd-8hy","title":"Kill running daemons","description":"Stop all bd daemons before release:\n\n```bash\npkill -f 'bd.*daemon' || true\nsleep 1\npgrep -lf 'bd.*daemon' # Should show nothing\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:58.255478-08:00","updated_at":"2025-12-18T22:43:55.394966-08:00","closed_at":"2025-12-18T22:43:55.394966-08:00","dependencies":[{"issue_id":"bd-8hy","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.23168-08:00","created_by":"daemon"}]} +{"id":"bd-vs9","title":"Fix unparam unused parameter in cmd/bd/doctor.go:541","description":"Linting issue: checkHooksQuick - path is unused (unparam) at cmd/bd/doctor.go:541:22. 
Error: func checkHooksQuick(path string) string {","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:17.02177046-07:00","updated_at":"2025-12-17T23:13:40.535743-08:00","closed_at":"2025-12-17T16:46:11.028332-08:00"} +{"id":"bd-gxq","title":"Simplify bd onboard to minimal AGENTS.md snippet pointing to bd prime","description":"## Context\nGH#604 raised concerns about bd onboard bloating AGENTS.md with ~100+ lines of static instructions that:\n- Load every session whether beads is being used or not\n- Get stale when bd upgrades\n- Waste tokens\n\n## Solution\nSimplify `bd onboard` to output a minimal snippet (~2 lines) that points to `bd prime`:\n\n```markdown\n## Issue Tracking\nThis project uses beads (`bd`) for issue tracking.\nRun `bd prime` for workflow context, or hooks auto-inject it.\n```\n\n## Rationale\n- `bd prime` is dynamic, concise (~80 lines), and always matches installed bd version\n- Hooks already auto-inject `bd prime` at session start when .beads/ detected\n- AGENTS.md only needs to mention beads exists, not contain full instructions\n\n## Implementation\n1. Update `cmd/bd/onboard.go` to output minimal snippet\n2. Keep `--output` flag for BD_GUIDE.md generation (may still be useful)\n3. Update help text to explain the new approach","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T11:42:38.604891-08:00","updated_at":"2025-12-18T11:47:28.020419-08:00","closed_at":"2025-12-18T11:47:28.020419-08:00"} +{"id":"bd-fy4q","title":"Phase 1.2 follow-up: Clarify format storage","description":"Phase 1.2 created the bdt executable structure but issues.toon is currently stored in JSONL format, not TOON format.\n\nThis is intentional for now:\n- Phase 1.2 (bd-jv4w): Just infrastructure - separate binary, separate directory\n- Phase 1.3 (bd-j0tr): Implement actual TOON encoding/writing\n\nFor now, keep as-is: filename '.toon' signals intent, content is JSONL (interim format). 
Phase 1.3 will switch to actual TOON.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-19T14:03:19.491040345-07:00","updated_at":"2025-12-19T14:03:19.491040345-07:00","dependencies":[{"issue_id":"bd-fy4q","depends_on_id":"bd-jv4w","type":"discovered-from","created_at":"2025-12-19T14:03:19.498933555-07:00","created_by":"daemon"}]} +{"id":"bd-ibl9","title":"Merge: bd-4qfb","description":"branch: polecat/Polish\ntarget: main\nsource_issue: bd-4qfb\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:37:57.255125-08:00","updated_at":"2025-12-23T19:12:08.352249-08:00","closed_at":"2025-12-23T19:12:08.352249-08:00"} +{"id":"bd-95k8","title":"Pinned field available in beads v0.37.0","description":"Hey max,\n\nHeads up on your mail overhaul work:\n\n1. **Pinned field is available** - beads v0.37.0 (released by dave earlier) includes the pinned field on issues. You'll want to add this to BeadsMessage in types.go.\n\n2. **Database migration** - Check if existing .beads databases need migration to support the pinned field. Run `bd doctor` to see if it flags anything.\n\n3. **Sorting task** - Once you have the pinned field, gt-ngu1 (pinned beads first in mail inbox) needs implementing. 
Since messages now come from `bd list --type=message`, you'll need to either:\n - Sort in listBeads() after fetching, or\n - Ensure bd list returns pinned items first (may already do this?)\n\nCheck what version of bd you're building against.\n\n-- Mayor","status":"closed","priority":2,"issue_type":"message","created_at":"2025-12-20T17:51:57.315956-08:00","updated_at":"2025-12-21T17:52:18.542169-08:00","closed_at":"2025-12-21T17:52:18.542169-08:00"} +{"id":"bd-pbh.17","title":"Install 0.30.4 Go binary locally","description":"Rebuild and install the Go binary:\n```bash\ngo install ./cmd/bd\n# OR\nmake install\n```\n\nVerify:\n```bash\nbd --version\n```\n\n\n```verify\nbd --version 2\u003e\u00261 | grep -q '0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.108597-08:00","updated_at":"2025-12-17T21:46:46.352702-08:00","closed_at":"2025-12-17T21:46:46.352702-08:00","dependencies":[{"issue_id":"bd-pbh.17","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.108917-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.17","depends_on_id":"bd-pbh.13","type":"blocks","created_at":"2025-12-17T21:19:11.322091-08:00","created_by":"daemon"}]} +{"id":"bd-bqcc","title":"Consolidate maintenance commands into bd doctor --fix","description":"Per rsnodgrass in GH#692:\n\u003e \"The biggest improvement to beads from an ergonomics perspective would be to prune down commands. We have a lot of 'maintenance' commands that probably should just be folded into 'bd doctor --fix' automatically.\"\n\nCurrent maintenance commands that could be consolidated:\n- clean - Clean up temporary git merge artifacts\n- cleanup - Delete closed issues and prune expired tombstones\n- compact - Compact old closed issues\n- detect-pollution - Detect and clean test issues\n- migrate-* (5 commands) - Various migration utilities\n- repair-deps - Fix orphaned dependency references\n- validate - Database health checks\n\nProposal:\n1. 
Make `bd doctor` the single entry point for health checks\n2. Add `bd doctor --fix` to auto-fix common issues\n3. Deprecate (but keep working) individual commands\n4. Add `bd doctor --all` for comprehensive maintenance\n\nThis would reduce cognitive load for users - they just need to remember 'bd doctor'.\n\nNote: This is higher impact but also higher risk - needs careful design to avoid breaking existing workflows.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-22T14:27:31.466556-08:00","updated_at":"2025-12-23T01:33:25.732363-08:00","closed_at":"2025-12-23T01:33:25.732363-08:00"} +{"id":"bd-8pyn","title":"Version Bump: 0.30.7","description":"Release checklist for version 0.30.7. This molecule ensures all release steps are completed properly.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-19T22:56:48.648694-08:00","updated_at":"2025-12-20T00:49:51.927518-08:00","closed_at":"2025-12-20T00:25:59.529183-08:00"} +{"id":"bd-x2bd","title":"Merge: bd-likt","description":"branch: polecat/Gater\ntarget: main\nsource_issue: bd-likt\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T13:46:27.091846-08:00","updated_at":"2025-12-23T19:12:08.355637-08:00","closed_at":"2025-12-23T19:12:08.355637-08:00"} +{"id":"bd-hzvz","title":"Update info.go versionChanges","description":"Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for 0.30.7","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:48.649359-08:00","updated_at":"2025-12-19T22:57:31.604229-08:00","closed_at":"2025-12-19T22:57:31.604229-08:00","dependencies":[{"issue_id":"bd-hzvz","depends_on_id":"bd-8pyn","type":"parent-child","created_at":"2025-12-19T22:56:48.652068-08:00","created_by":"stevey"},{"issue_id":"bd-hzvz","depends_on_id":"bd-2ep8","type":"blocks","created_at":"2025-12-19T22:56:48.652376-08:00","created_by":"stevey"}]} +{"id":"bd-fa2h","title":"🀝 HANDOFF: 
v0.31.0 released, molecules discussion","description":"Session completed 0.31.0 release and had important molecules discussion.\n\n## Completed\n- v0.31.0 released (deferred status, audit trail, directory labels, etc.)\n- Fixed lint issues, hook version markers, codesigning\n- All CI green, artifacts verified\n\n## Filed Issues\n- bd-usro: Rename template instantiate β†’ bd mol bond\n- bd-y8bj: Auto-detect identity for bd mail (P1 bug)\n- gt-975: Molecule execution support for polecats/crew\n- gt-976: Crew lifecycle support in Deacon\n\n## Key Insight\nMolecules are the future - TodoWrite is ephemeral, molecules are persistent institutional memory on the world chain. I tried to use TodoWrite for version bump and missed steps (codesigning, MCP verification). Molecules would have caught this.\n\n## Next Steps\n- bd mol bond implementation is priority\n- Max has gt-976 for crew lifecycle (enables automated refresh mid-molecule)\n\nCheck bd ready and gt-975/976 status.","status":"closed","priority":2,"issue_type":"message","created_at":"2025-12-20T17:23:09.889562-08:00","updated_at":"2025-12-21T17:52:18.467069-08:00","closed_at":"2025-12-21T17:52:18.467069-08:00"} +{"id":"bd-7z4","title":"Add tests for delete operations","description":"Core delete functionality including deleteViaDaemon, createTombstone, and deleteIssue functions have 0% coverage. These are critical for data integrity and need comprehensive test coverage.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:34.867680882-07:00","updated_at":"2025-12-18T07:00:34.867680882-07:00","dependencies":[{"issue_id":"bd-7z4","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:34.870254935-07:00","created_by":"matt"}]} +{"id":"bd-581b80b3","title":"bd find-duplicates - AI-powered duplicate detection","description":"Find semantically duplicate issues.\n\nApproaches:\n1. Mechanical: Exact title/description matching\n2. 
Embeddings: Cosine similarity (cheap, scalable)\n3. AI: LLM-based semantic comparison (expensive, accurate)\n\nUses embeddings by default for \u003e100 issues.\n\nFiles: cmd/bd/find_duplicates.go (new)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-29T20:49:49.126801-07:00","updated_at":"2025-12-17T22:58:34.563511-08:00","closed_at":"2025-12-17T22:58:34.563511-08:00"} +{"id":"bd-kyo","title":"Run tests and linting","description":"Run the full test suite and linter:\n\n```bash\nTMPDIR=/tmp go test -short ./...\ngolangci-lint run ./...\n```\n\nFix any failures. Linting warnings acceptable (see LINTING.md).","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:59.290588-08:00","updated_at":"2025-12-18T22:44:36.570262-08:00","closed_at":"2025-12-18T22:44:36.570262-08:00","dependencies":[{"issue_id":"bd-kyo","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.370234-08:00","created_by":"daemon"},{"issue_id":"bd-kyo","depends_on_id":"bd-8hy","type":"blocks","created_at":"2025-12-18T22:43:20.570742-08:00","created_by":"daemon"}]} +{"id":"bd-kwro.7","title":"Identity Configuration","description":"Implement identity system for sender field.\n\nConfiguration sources (in priority order):\n1. --identity flag on commands\n2. BEADS_IDENTITY environment variable\n3. .beads/config.json: {\"identity\": \"worker-name\"}\n4. 
Default: git user.name or hostname\n\nNew config file support:\n- .beads/config.json for per-repo settings\n- identity field for messaging\n\nHelper function:\n- GetIdentity() string - resolves identity from sources\n\nUpdate bd mail send to use GetIdentity() for sender field.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:02:17.603608-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-cbed9619.1","title":"Fix multi-round convergence for N-way collisions","description":"**Summary:** Multi-round collision resolution was identified as a critical issue preventing complete synchronization across distributed clones. The problem stemmed from incomplete final pulls that didn't fully propagate all changes between system instances.\n\n**Key Decisions:**\n- Implement multi-round sync mechanism\n- Ensure bounded convergence (≀N rounds)\n- Guarantee idempotent import without data loss\n\n**Resolution:** Developed a sync strategy that ensures all clones converge to the same complete set of issues, unblocking the bd-cbed9619 epic and improving distributed system reliability.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T21:22:21.486109-07:00","updated_at":"2025-12-17T23:18:29.111713-08:00","deleted_at":"2025-12-17T23:18:29.111713-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-0oqz","title":"Add GetMoleculeProgress RPC endpoint","description":"New RPC endpoint to get detailed progress for a specific molecule. Returns: moleculeID, title, assignee, and list of steps with their status (done/current/ready/blocked), start/close times. 
Used when user expands a worker in the activity feed TUI.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-23T16:26:38.137866-08:00","updated_at":"2025-12-23T18:27:49.033335-08:00","closed_at":"2025-12-23T18:27:49.033335-08:00"} +{"id":"bd-oryk","title":"Fix update-homebrew.sh awk script corrupts formula","description":"The awk script in scripts/update-homebrew.sh incorrectly removes platform conditionals (on_macos do, on_linux do, if Hardware::CPU.arm?, etc.) when updating SHA256 hashes. This corrupts the Homebrew formula.\n\nThe issue is the awk script uses 'next' to skip lines containing platform conditionals but never reconstructs them, resulting in a syntax-invalid formula.\n\nFound during v0.34.0 release - had to manually fix the formula.\n\nFix options:\n1. Rewrite awk script to properly preserve structure while updating sha256 lines only\n2. Use sed instead with targeted sha256 replacements\n3. Template approach - store formula template and fill in version/hashes","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-22T12:17:17.748792-08:00","updated_at":"2025-12-22T13:13:31.947353-08:00","closed_at":"2025-12-22T13:13:31.947353-08:00"} +{"id":"bd-ot0w","title":"Work on beads-tip: Fix broken Claude integration link in ...","description":"Work on beads-tip: Fix broken Claude integration link in bd doctor (GH#623). Update URL that doesn't exist. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:56:08.429157-08:00","updated_at":"2025-12-19T23:20:39.790305-08:00","closed_at":"2025-12-19T23:20:39.790305-08:00"} +{"id":"bd-b6xo","title":"Remove or fix ClearDirtyIssues() - race condition risk (bd-52)","description":"Code health review found internal/storage/sqlite/dirty.go still exposes old ClearDirtyIssues() method (lines 103-108) which clears ALL dirty issues without checking what was actually exported.\n\nData loss risk: If export fails after some issues written to JSONL but before ClearDirtyIssues called, changes to remaining dirty issues will be lost.\n\nThe safer ClearDirtyIssuesByID() (lines 113-132) exists and clears only exported issues.\n\nFix: Either remove old method or mark it deprecated and ensure no code paths use it.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-16T18:17:20.534625-08:00","updated_at":"2025-12-17T23:13:40.530703-08:00","closed_at":"2025-12-17T18:59:18.693791-08:00","dependencies":[{"issue_id":"bd-b6xo","depends_on_id":"bd-tggf","type":"blocks","created_at":"2025-12-16T18:19:05.633738-08:00","created_by":"daemon"}]} +{"id":"bd-y8bj","title":"Auto-detect identity from directory context for bd mail","description":"Currently bd mail inbox defaults to git user name, requiring --identity flag with exact format.\n\n## Problem\n- Mail sent to `gastown/crew/max`\n- Max runs `bd mail inbox` β†’ defaults to 'Steve Yegge' (git user)\n- Max must know to use `--identity 'gastown/crew/max'` with exact slashes\n\n## Proposed Fix\nAuto-detect identity from directory context when in a Gas Town workspace:\n- In `/Users/stevey/gt/gastown/crew/max`, infer identity = `gastown/crew/max`\n- Pattern: `\u003ctown\u003e/\u003crig\u003e/\u003crole\u003e/\u003cname\u003e` β†’ `\u003crig\u003e/\u003crole\u003e/\u003cname\u003e`\n\n## Additional Improvements\n1. 
Support GT_IDENTITY env var (set by gt crew at / session spawning)\n2. Support identity in .beads/config.yaml\n3. Normalize format: accept both slashes and dashes as equivalent\n\n## Context\nDiscovered during crew-to-crew work assignment. Max couldn't see mail despite correct nudge because identity defaulted wrong.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-20T17:22:53.938586-08:00","updated_at":"2025-12-20T18:12:58.472262-08:00","closed_at":"2025-12-20T17:58:51.034201-08:00"} +{"id":"bd-qqc.5","title":"Commit release v{{version}}","description":"Stage and commit the version bump:\n\n```bash\ngit add cmd/bd/version.go cmd/bd/info.go CHANGELOG.md\ngit commit -m \"release: v{{version}}\"\n```\n\nDo NOT push yet - tag first.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T13:00:04.097628-08:00","updated_at":"2025-12-18T23:34:18.630946-08:00","closed_at":"2025-12-18T22:41:41.864839-08:00","dependencies":[{"issue_id":"bd-qqc.5","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T13:00:04.098265-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.5","depends_on_id":"bd-qqc.4","type":"blocks","created_at":"2025-12-18T13:01:02.275008-08:00","created_by":"stevey"}]} +{"id":"bd-obep","title":"Spawn-time bonding: --attach flag","description":"Add --attach flag to bd mol spawn for on-the-fly composition.\n\nCOMMAND: bd mol spawn proto-feature --attach proto-docs --attach proto-testing\n\nBEHAVIOR:\n- Spawn the primary proto as normal\n- For each --attach: spawn that proto and wire to primary\n- Attachments become children of primary's root epic\n- Dependencies wired based on bond type (default: sequential)\n\nFLAGS:\n- --attach PROTO: Attach a proto (can repeat)\n- --attach-type TYPE: sequential (default) or parallel for all attachments\n- --after ISSUE: Attachment point for attached protos\n\nVARIABLE HANDLING:\n- All attached protos share variable namespace\n- Warn on variable name conflicts\n- All 
--var flags apply to all protos","notes":"DESIGN NOTE: This is syntactic sugar. Equivalent to:\n bd mol spawn proto-A\n bd mol bond $new_epic_id proto-B\n \nKeeping as separate task because it's a common UX pattern worth optimizing.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T00:59:06.178092-08:00","updated_at":"2025-12-21T10:42:50.554816-08:00","closed_at":"2025-12-21T10:42:50.554816-08:00","dependencies":[{"issue_id":"bd-obep","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.368491-08:00","created_by":"daemon"},{"issue_id":"bd-obep","depends_on_id":"bd-o91r","type":"blocks","created_at":"2025-12-21T00:59:51.733369-08:00","created_by":"daemon"}]} +{"id":"bd-uz8r","title":"Phase 2.3: TOON deletion tracking","description":"Implement deletion tracking in TOON format.\n\n## Overview\nPhase 2.2 switched storage to TOON format. Phase 2.3 adds deletion tracking in TOON format for propagating deletions across clones.\n\n## Required Work\n\n### 2.3.1 Deletion Tracking (TOON Format)\n- [ ] Implement deletions.toon file (tracking deleted issue records)\n- [ ] Add DeleteTracker struct to record deleted issue IDs and metadata\n- [ ] Update bdt delete command to record in deletions.toon\n- [ ] Design deletion record format (ID, timestamp, reason, hash)\n- [ ] Implement auto-prune of old deletion records (configurable TTL)\n\n### 2.3.2 Sync Propagation\n- [ ] Load deletions.toon during import\n- [ ] Remove deleted issues from local database when imported from remote\n- [ ] Handle edge cases (delete same issue in multiple clones)\n- [ ] Deletion ordering and conflict resolution\n\n### 2.3.3 Testing\n- [ ] Unit tests for deletion tracking\n- [ ] Integration tests for deletion propagation\n- [ ] Multi-clone deletion scenarios\n- [ ] TTL expiration tests\n\n## Success Criteria\n- deletions.toon stores deletion records in TOON format\n- Deletions propagate across clones via git sync\n- Old records auto-prune after 
TTL\n- All 70+ tests still passing\n- bdt delete command works seamlessly","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:37:23.722066816-07:00","updated_at":"2025-12-21T14:42:27.491932-08:00","closed_at":"2025-12-21T14:42:27.491932-08:00","dependencies":[{"issue_id":"bd-uz8r","depends_on_id":"bd-iic1","type":"discovered-from","created_at":"2025-12-19T14:37:23.726825771-07:00","created_by":"daemon"}]} +{"id":"bd-qqc.3","title":"Update CHANGELOG.md for {{version}}","description":"In CHANGELOG.md:\n\n1. Change `## [Unreleased]` section header to `## [{{version}}] - {{date}}`\n2. Add new empty `## [Unreleased]` section above it\n3. Review and clean up the changes list\n\nFormat:\n```markdown\n## [Unreleased]\n\n## [{{version}}] - {{date}}\n\n### Added\n- ...\n\n### Changed\n- ...\n\n### Fixed\n- ...\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:59:39.738561-08:00","updated_at":"2025-12-18T23:34:18.629213-08:00","closed_at":"2025-12-18T22:41:41.846609-08:00","dependencies":[{"issue_id":"bd-qqc.3","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T12:59:39.739138-08:00","created_by":"stevey"}]} +{"id":"bd-u0g9","title":"GH#405: Prefix parsing with hyphens treats first segment as prefix","description":"Prefix me-py-toolkit gets parsed as just me- when detecting mismatches. Fix prefix parsing to handle multi-hyphen prefixes. 
See GitHub issue #405.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:18.354066-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-xurv","title":"Restart daemon with 0.33.2","description":"Restart the bd daemon to pick up new version:\n\n```bash\nbd daemon --stop\nbd daemon --start\nbd daemon --health # Verify Version: 0.33.2\n```","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760884-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-kwjh.4","title":"bd mol squash handles wispβ†’digest","description":"Update bd mol squash to handle ephemeral molecules.\n\n## Behavior for Ephemeral Molecules\n1. Delete wisp from .beads-ephemeral/\n2. Create digest issue in .beads/ (permanent)\n3. 
Digest has type:digest and squashed_from field\n\n## Digest Format\n```json\n{\n \"id\": \"\u003cparent\u003e.digest-NNN\",\n \"type\": \"digest\",\n \"title\": \"\u003cproto\u003e cycle @ \u003ctimestamp\u003e\",\n \"description\": \"\u003csummary from --summary flag\u003e\",\n \"parent\": \"\u003cproto-id\u003e\",\n \"squashed_from\": \"\u003cwisp-id\u003e\"\n}\n```\n\n## Implementation\n- Detect if molecule is ephemeral (check storage location or flag)\n- Delete from ephemeral store\n- Create digest in permanent store\n- Return digest ID\n\n## Testing\n- Test squash of ephemeral mol creates digest\n- Test wisp is deleted after squash\n- Test digest is queryable","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T00:07:27.685116-08:00","updated_at":"2025-12-22T00:53:55.74082-08:00","closed_at":"2025-12-22T00:53:55.74082-08:00","dependencies":[{"issue_id":"bd-kwjh.4","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:27.686798-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.4","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:27.687773-08:00","created_by":"daemon"}]} +{"id":"bd-qqc.7","title":"Push release v{{version}} to remote","description":"Push the commit and tag:\n\n```bash\ngit push \u0026\u0026 git push --tags\n```\n\nVerify on GitHub that the tag appears in releases.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T13:00:26.933082-08:00","updated_at":"2025-12-18T23:34:18.630538-08:00","closed_at":"2025-12-18T22:41:41.882956-08:00","dependencies":[{"issue_id":"bd-qqc.7","depends_on_id":"bd-qqc.6","type":"blocks","created_at":"2025-12-18T13:01:12.711161-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.7","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T13:00:26.933687-08:00","created_by":"stevey"}]} +{"id":"bd-f2lb","title":"Update CHANGELOG.md with release notes","description":"Add meaningful release notes to CHANGELOG.md 
describing what changed in test-squash","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066065-08:00","updated_at":"2025-12-21T13:53:49.858742-08:00","deleted_at":"2025-12-21T13:53:49.858742-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-pbh.10","title":"Run check-versions.sh - all must pass","description":"Run the version consistency check:\n```bash\n./scripts/check-versions.sh\n```\n\nAll versions must match 0.30.4.\n\n\n```verify\n./scripts/check-versions.sh\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.047311-08:00","updated_at":"2025-12-17T21:46:46.28316-08:00","closed_at":"2025-12-17T21:46:46.28316-08:00","dependencies":[{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.8","type":"blocks","created_at":"2025-12-17T21:19:11.211479-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.9","type":"blocks","created_at":"2025-12-17T21:19:11.224059-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.047888-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.1","type":"blocks","created_at":"2025-12-17T21:19:11.159084-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.5","type":"blocks","created_at":"2025-12-17T21:19:11.177869-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.6","type":"blocks","created_at":"2025-12-17T21:19:11.187629-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.7","type":"blocks","created_at":"2025-12-17T21:19:11.199955-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.10","depends_on_id":"bd-pbh.4","type":"blocks","created_at":"2025-12-17T21:19:11.168248-08:00","created_by":"daemon"}]} +{"id":"bd-au0.7","title":"Audit and standardize JSON output across all commands","description":"Ensure consistent JSON 
format and error handling when --json flag is used.\n\n**Scope:**\n1. Verify all commands respect --json flag\n2. Standardize success response format\n3. Standardize error response format\n4. Document JSON schemas\n\n**Commands to audit:**\n- Core CRUD: create, update, delete, show, list, search βœ“\n- Queries: ready, blocked, stale, count, stats, status\n- Deps: dep add/remove/tree/cycles\n- Labels: label commands\n- Comments: comments add/list/delete\n- Epics: epic status/close-eligible\n- Export/import: already support --json βœ“\n\n**Testing:**\n- Success cases return valid JSON\n- Error cases return valid JSON (not plain text)\n- Consistent field naming (snake_case vs camelCase)\n- Array vs object wrapping consistency","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-21T21:07:35.304424-05:00","updated_at":"2025-12-23T21:22:13.69621-08:00","closed_at":"2025-12-23T20:43:04.849211-08:00","dependencies":[{"issue_id":"bd-au0.7","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:35.305663-05:00","created_by":"daemon"}]} +{"id":"bd-20j","title":"sync branch not match config","description":"./bd sync\nβ†’ Exporting pending changes to JSONL...\nβ†’ No changes to commit\nβ†’ Pulling from sync branch 'gh-386'...\nError pulling from sync branch: failed to create worktree: failed to create worktree parent directory: mkdir /var/home/matt/dev/beads/worktree-db-fail/.git: not a directory\nmatt@blufin-framation ~/d/b/worktree-db-fail (worktree-db-fail) [1]\u003e bd config list\n\nConfiguration:\n auto_compact_enabled = false\n compact_batch_size = 50\n compact_model = claude-3-5-haiku-20241022\n compact_parallel_workers = 5\n compact_tier1_days = 30\n compact_tier1_dep_levels = 2\n compact_tier2_commits = 100\n compact_tier2_days = 90\n compact_tier2_dep_levels = 5\n compaction_enabled = false\n issue_prefix = worktree-db-fail\n sync.branch = 
worktree-db-fail","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-08T06:49:04.449094018-07:00","updated_at":"2025-12-08T06:49:04.449094018-07:00"} +{"id":"bd-zmmy","title":"bd ready resolves external dependencies","description":"Extend bd ready to check external blocked_by references:\n\n1. Parse external:\u003cproject\u003e:\u003ccapability\u003e from blocked_by\n2. Look up project path from external_projects config\n3. Check if target project has provides:\u003ccapability\u003e label on a closed issue\n4. If not satisfied, issue is blocked\n\nExample output:\n```bash\nbd ready\n# gt-xyz: blocked by external:beads:mol-run-assignee (not provided)\n# gt-abc: ready\n```\n\nDepends on: bd-om4a (external: prefix), bd-66w1 (config)\n\nPart of cross-project dependency system.\nSee: gastown/docs/cross-project-deps.md","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-21T22:37:50.03794-08:00","updated_at":"2025-12-21T23:42:25.042402-08:00","closed_at":"2025-12-21T23:42:25.042402-08:00","dependencies":[{"issue_id":"bd-zmmy","depends_on_id":"bd-om4a","type":"blocks","created_at":"2025-12-21T22:38:38.106657-08:00","created_by":"daemon"},{"issue_id":"bd-zmmy","depends_on_id":"bd-66w1","type":"blocks","created_at":"2025-12-21T22:38:38.175633-08:00","created_by":"daemon"}]} +{"id":"bd-56x","title":"Review PR #514: fix plugin install docs","description":"Review and merge PR #514 from aspiers. This PR fixes incorrect docs for installing Claude Code plugin from source in docs/PLUGIN.md. Clarifies shell vs Claude Code commands and fixes the . vs ./beads argument issue. 
URL: https://github.com/anthropics/beads/pull/514","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:15:16.865354+11:00","updated_at":"2025-12-13T07:07:19.729213-08:00","closed_at":"2025-12-13T07:07:19.729213-08:00"} +{"id":"bd-yck","title":"Fix checkExistingBeadsData to be worktree-aware","description":"The checkExistingBeadsData function in cmd/bd/init.go checks for .beads in the current working directory, but for worktrees it should check the main repository root instead. This prevents proper worktree compatibility.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-07T16:48:32.082776345-07:00","updated_at":"2025-12-23T22:33:32.412338-08:00","closed_at":"2025-12-23T22:33:32.412338-08:00"} +{"id":"bd-au0.9","title":"Review and document rarely-used commands","description":"Document use cases or consider deprecation for infrequently-used commands.\n\n**Commands to review:**\n1. bd rename-prefix - How often is this used? Document use cases\n2. bd detect-pollution - Consider integrating into bd validate\n3. 
bd migrate-hash-ids - One-time migration, keep but document as legacy\n\n**For each command:**\n- Document typical use cases\n- Add examples to help text\n- Consider if it should be a subcommand instead\n- Add deprecation warning if appropriate\n\n**Not changing:**\n- duplicates βœ“ (useful for data quality)\n- repair-deps βœ“ (useful for fixing broken refs)\n- restore βœ“ (critical for compacted issues)\n- compact βœ“ (performance feature)\n\n**Deliverable:**\n- Updated help text\n- Documentation in ADVANCED.md\n- Deprecation plan if needed","status":"open","priority":3,"issue_type":"task","created_at":"2025-11-21T21:08:05.588275-05:00","updated_at":"2025-11-21T21:08:05.588275-05:00","dependencies":[{"issue_id":"bd-au0.9","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:08:05.59003-05:00","created_by":"daemon"}]} +{"id":"bd-xo1o","title":"Dynamic Molecule Bonding: Fanout patterns for patrol molecules","description":"## Vision\n\nEnable molecules to dynamically spawn child molecules at runtime based on discovered\nwork. This is the foundation for the \"Christmas Ornament\" pattern where a patrol\nmolecule grows arms per-polecat.\n\n## The Activity Feed Vision\n\nInstead of parsing agent logs, users see structured work state:\n\n```\n[14:32:08] + patrol-x7k.arm-ace bonded (5 steps)\n[14:32:08] + patrol-x7k.arm-nux bonded (5 steps)\n[14:32:09] β†’ patrol-x7k.arm-ace.capture in_progress\n[14:32:10] βœ“ patrol-x7k.arm-ace.capture completed\n[14:32:14] βœ“ patrol-x7k.arm-ace.decide completed (action: nudge-1)\n```\n\nThis requires beads to track molecule step state transitions in real-time.\n\n## Key Primitives Needed\n\n### 1. Dynamic Bond with Variables\n```bash\nbd mol bond mol-polecat-arm \u003cparent-wisp-id\u003e \\\n --var polecat_name=ace \\\n --var rig=gastown\n```\n\nCreates wisp children under the parent:\n- parent-id.arm-ace\n- parent-id.arm-ace.capture\n- parent-id.arm-ace.assess\n- etc.\n\n### 2. 
WaitsFor Directive\n```markdown\n## Step: aggregate\nCollect outcomes from all dynamically-bonded children.\nWaitsFor: all-children\nNeeds: survey-workers\n```\n\nThe `WaitsFor: all-children` directive makes this a fanout gate - it can't\nproceed until ALL dynamically-bonded children complete.\n\n### 3. Activity Feed Query\n```bash\nbd activity --follow # Real-time state stream\nbd activity --mol \u003cid\u003e # Activity for specific molecule\nbd activity --since 5m # Last 5 minutes\n```\n\n### 4. Parallel Step Detection\nSteps with no inter-dependencies should be flagged as parallelizable.\nWhen arms are bonded, their steps can run in parallel across arms.\n\n## Use Case: mol-witness-patrol\n\nThe Witness monitors N polecats where N varies at runtime:\n\n```\nsurvey-workers discovers: [ace, nux, toast]\nFor each polecat:\n bd mol bond mol-polecat-arm \u003cpatrol-id\u003e --var polecat_name=\u003cname\u003e\naggregate step waits for all arms to complete\n```\n\nThis creates the Christmas Ornament shape:\n- Trunk: preflight steps\n- Arms: per-polecat inspection molecules\n- Base: cleanup after all arms complete\n\n## Design Docs\n\nSee Gas Town docs:\n- docs/molecular-chemistry.md (updated with Christmas Ornament pattern)\n- docs/architecture.md (activity feed section)\n\n## Dependencies\n\nThis epic may depend on:\n- Wisp storage (.beads-wisp/) - already implemented\n- Variable substitution in molecules - may need enhancement","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-23T02:32:43.173305-08:00","updated_at":"2025-12-23T04:01:02.729388-08:00","closed_at":"2025-12-23T04:01:02.729388-08:00"} +{"id":"bd-j6lr","title":"GH#402: Add --parent flag documentation to bd onboard","description":"bd onboard output is missing --parent flag for epic subtasks. Agents guess wrong syntax (--deps parent:). 
See GitHub issue #402.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T01:03:56.594829-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-687g","title":"Code review: mol squash deletion bypasses tombstone system","description":"The deleteEphemeralChildren function in mol_squash.go uses DeleteIssue directly instead of the proper deletion flow. This bypasses tombstone creation, deletion tracking (deletions.jsonl), and dependency cleanup. Could cause issues with deletion propagation across clones.\n\nCurrent code uses d.DeleteIssue(ctx, id) but should probably use d.DeleteIssues(ctx, ids, false, true, false) for proper tombstone handling.\n\nAlternative: Document that ephemeral issues intentionally use hard delete since they are transient and should never propagate to other clones anyway.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T13:57:20.223345-08:00","updated_at":"2025-12-21T14:17:38.073899-08:00","closed_at":"2025-12-21T14:17:38.073899-08:00"} +{"id":"bd-likt","title":"Add daemon RPC support for gate commands","description":"Add daemon RPC support for gate commands.\n\n## Current State\nGate commands require --no-daemon flag because they use direct SQLite access:\n- Gate create needs to write await_type, await_id, timeout_ns, waiters fields\n- Gate wait needs to update waiters JSON array\n- Daemon RPC doesnt have methods for these operations\n\n## Implementation\n\n### 1. 
Add RPC methods to internal/rpc/protocol.go\n\n```go\n// Gate operations\ntype GateCreateArgs struct {\n Title string \\`json:\"title\"\\`\n AwaitType string \\`json:\"await_type\"\\`\n AwaitID string \\`json:\"await_id\"\\`\n Timeout time.Duration \\`json:\"timeout\"\\`\n Waiters []string \\`json:\"waiters\"\\`\n}\n\ntype GateCreateResult struct {\n Issue *types.Issue \\`json:\"issue\"\\`\n}\n\ntype GateListArgs struct {\n All bool \\`json:\"all\"\\` // Include closed gates\n}\n\ntype GateListResult struct {\n Gates []*types.Issue \\`json:\"gates\"\\`\n}\n\ntype GateWaitArgs struct {\n GateID string \\`json:\"gate_id\"\\`\n Waiters []string \\`json:\"waiters\"\\` // Additional waiters to add\n}\n\ntype GateWaitResult struct {\n Gate *types.Issue \\`json:\"gate\"\\`\n AddedCount int \\`json:\"added_count\"\\`\n}\n```\n\n### 2. Add handler methods to internal/daemon/rpc_handler.go\n\n```go\nfunc (h *RPCHandler) GateCreate(ctx context.Context, args *rpc.GateCreateArgs) (*rpc.GateCreateResult, error) {\n now := time.Now()\n gate := \u0026types.Issue{\n Title: args.Title,\n IssueType: types.TypeGate,\n Status: types.StatusOpen,\n Priority: 1,\n Assignee: \"deacon/\",\n Wisp: true,\n AwaitType: args.AwaitType,\n AwaitID: args.AwaitID,\n Timeout: args.Timeout,\n Waiters: args.Waiters,\n CreatedAt: now,\n UpdatedAt: now,\n }\n gate.ContentHash = gate.ComputeContentHash()\n \n if err := h.store.CreateIssue(ctx, gate, h.actor); err != nil {\n return nil, err\n }\n \n return \u0026rpc.GateCreateResult{Issue: gate}, nil\n}\n\nfunc (h *RPCHandler) GateList(ctx context.Context, args *rpc.GateListArgs) (*rpc.GateListResult, error) {\n gateType := types.TypeGate\n filter := types.IssueFilter{IssueType: \u0026gateType}\n if !args.All {\n openStatus := types.StatusOpen\n filter.Status = \u0026openStatus\n }\n \n gates, err := h.store.SearchIssues(ctx, \"\", filter)\n if err != nil {\n return nil, err\n }\n \n return \u0026rpc.GateListResult{Gates: gates}, nil\n}\n\nfunc (h 
*RPCHandler) GateWait(ctx context.Context, args *rpc.GateWaitArgs) (*rpc.GateWaitResult, error) {\n gate, err := h.store.GetIssue(ctx, args.GateID)\n if err != nil {\n return nil, err\n }\n if gate.IssueType != types.TypeGate {\n return nil, fmt.Errorf(\"%s is not a gate\", args.GateID)\n }\n \n // Merge waiters (dedupe)\n waiterSet := make(map[string]bool)\n for _, w := range gate.Waiters {\n waiterSet[w] = true\n }\n added := 0\n for _, w := range args.Waiters {\n if !waiterSet[w] {\n gate.Waiters = append(gate.Waiters, w)\n waiterSet[w] = true\n added++\n }\n }\n \n if added \u003e 0 {\n // Update via store\n updates := map[string]interface{}{\n \"waiters\": gate.Waiters,\n }\n if err := h.store.UpdateIssue(ctx, args.GateID, updates, h.actor); err != nil {\n return nil, err\n }\n }\n \n return \u0026rpc.GateWaitResult{Gate: gate, AddedCount: added}, nil\n}\n```\n\n### 3. Register methods in daemon\n\nIn internal/daemon/server.go, register the new methods:\n```go\nrpc.RegisterMethod(\"gate.create\", h.GateCreate)\nrpc.RegisterMethod(\"gate.list\", h.GateList)\nrpc.RegisterMethod(\"gate.wait\", h.GateWait)\n```\n\n### 4. Add client methods to internal/rpc/client.go\n\n```go\nfunc (c *Client) GateCreate(ctx context.Context, args *GateCreateArgs) (*GateCreateResult, error) {\n var result GateCreateResult\n err := c.Call(ctx, \"gate.create\", args, \u0026result)\n return \u0026result, err\n}\n\nfunc (c *Client) GateList(ctx context.Context, args *GateListArgs) (*GateListResult, error) {\n var result GateListResult\n err := c.Call(ctx, \"gate.list\", args, \u0026result)\n return \u0026result, err\n}\n\nfunc (c *Client) GateWait(ctx context.Context, args *GateWaitArgs) (*GateWaitResult, error) {\n var result GateWaitResult\n err := c.Call(ctx, \"gate.wait\", args, \u0026result)\n return \u0026result, err\n}\n```\n\n### 5. 
Update cmd/bd/gate.go to use daemon\n\n```go\n// In gateCreateCmd Run:\nif daemonClient != nil {\n result, err := daemonClient.GateCreate(ctx, \u0026rpc.GateCreateArgs{\n Title: title,\n AwaitType: awaitType,\n AwaitID: awaitID,\n Timeout: timeout,\n Waiters: notifyAddrs,\n })\n if err != nil {\n FatalError(\"gate create: %v\", err)\n }\n gate = result.Issue\n} else {\n // Existing direct store code\n}\n```\n\n## Files to Modify\n\n1. **internal/rpc/protocol.go** - Add Gate*Args/Result types\n2. **internal/daemon/rpc_handler.go** - Add handler methods\n3. **internal/daemon/server.go** - Register methods\n4. **internal/rpc/client.go** - Add client methods\n5. **cmd/bd/gate.go** - Use daemon client when available\n\n## Testing\n\n```bash\n# Start daemon\nbd daemon start\n\n# Test via daemon (should work without --no-daemon)\nbd gate create --await timer:5m --notify beads/dave\nbd gate list\nbd gate wait \u003cid\u003e --notify beads/alice\n\n# Verify daemon handled it\nbd daemons logs . | grep gate\n```\n\n## Success Criteria\n- All gate commands work without --no-daemon\n- Same behavior in daemon vs direct mode\n- Waiters array updates correctly via RPC\n- Tests pass for RPC gate operations","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T12:13:25.778412-08:00","updated_at":"2025-12-23T13:45:58.398604-08:00","closed_at":"2025-12-23T13:45:58.398604-08:00","dependencies":[{"issue_id":"bd-likt","depends_on_id":"bd-udsi","type":"discovered-from","created_at":"2025-12-23T12:13:36.174822-08:00","created_by":"daemon"},{"issue_id":"bd-likt","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.891992-08:00","created_by":"daemon"}]} +{"id":"bd-db72","title":"Upgrade local Homebrew installation","description":"Upgrade bd via Homebrew:\n\n```bash\nbrew update\nbrew upgrade bd\n/opt/homebrew/bin/bd version # Verify shows 
0.33.2\n```","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760552-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-u2sc.4","title":"Introduce slog for structured daemon logging","description":"Introduce slog for structured daemon logging.\n\n## Current State\nDaemon uses fmt.Fprintf for logging:\n```go\nfmt.Fprintf(os.Stderr, \"Warning: failed to detect user role: %v\\n\", err)\nfmt.Fprintf(logFile, \"[%s] %s\\n\", time.Now().Format(time.RFC3339), msg)\n```\n\nThis produces unstructured, hard-to-parse logs.\n\n## Target State\nUse Go 1.21+ slog for structured logging:\n```go\nslog.Warn(\"failed to detect user role\", \"error\", err)\nslog.Info(\"sync completed\", \"created\", 5, \"updated\", 3, \"duration_ms\", 150)\n```\n\n## Implementation\n\n### 1. Create logger setup (internal/daemon/logger.go)\n\n```go\npackage daemon\n\nimport (\n \"io\"\n \"log/slog\"\n \"os\"\n)\n\n// SetupLogger configures the daemon logger.\n// Returns a cleanup function to close the log file.\nfunc SetupLogger(logPath string, jsonFormat bool, level slog.Level) (func(), error) {\n var w io.Writer = os.Stderr\n var cleanup func()\n \n if logPath \\!= \"\" {\n f, err := os.OpenFile(logPath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0600)\n if err \\!= nil {\n return nil, err\n }\n w = io.MultiWriter(os.Stderr, f)\n cleanup = func() { f.Close() }\n }\n \n var handler slog.Handler\n opts := \u0026slog.HandlerOptions{Level: level}\n \n if jsonFormat {\n handler = slog.NewJSONHandler(w, opts)\n } else {\n handler = slog.NewTextHandler(w, opts)\n }\n \n slog.SetDefault(slog.New(handler))\n \n return cleanup, nil\n}\n```\n\n### 2. 
Add log level flag\n\nIn cmd/bd/daemon.go:\n```go\ndaemonCmd.Flags().String(\"log-level\", \"info\", \"Log level (debug, info, warn, error)\")\ndaemonCmd.Flags().Bool(\"log-json\", false, \"Output logs in JSON format\")\n```\n\n### 3. Replace fmt.Fprintf with slog calls\n\n**Pattern 1: Simple messages**\n```go\n// Before\nfmt.Fprintf(os.Stderr, \"Starting daemon on %s\\n\", socketPath)\n\n// After\nslog.Info(\"starting daemon\", \"socket\", socketPath)\n```\n\n**Pattern 2: Errors**\n```go\n// Before\nfmt.Fprintf(os.Stderr, \"Error: failed to connect: %v\\n\", err)\n\n// After\nslog.Error(\"failed to connect\", \"error\", err)\n```\n\n**Pattern 3: Debug info**\n```go\n// Before\nif debug {\n fmt.Fprintf(os.Stderr, \"Received request: %s\\n\", method)\n}\n\n// After\nslog.Debug(\"received request\", \"method\", method)\n```\n\n**Pattern 4: Structured data**\n```go\n// Before\nfmt.Fprintf(logFile, \"Import: %d created, %d updated\\n\", created, updated)\n\n// After\nslog.Info(\"import completed\", \n \"created\", created,\n \"updated\", updated,\n \"unchanged\", unchanged,\n \"duration_ms\", duration.Milliseconds())\n```\n\n### 4. Files to Update\n\n| File | Changes |\n|------|---------|\n| cmd/bd/daemon.go | Add log flags, call SetupLogger |\n| internal/daemon/server.go | Replace fmt with slog |\n| internal/daemon/rpc_handler.go | Replace fmt with slog |\n| internal/daemon/sync.go | Replace fmt with slog |\n| internal/daemon/autoflush.go | Replace fmt with slog |\n| internal/daemon/logger.go | New file |\n\n### 5. 
Log output examples\n\n**Text format (default):**\n```\ntime=2025-12-23T12:30:00Z level=INFO msg=\"daemon started\" socket=/tmp/bd.sock pid=12345\ntime=2025-12-23T12:30:01Z level=INFO msg=\"import completed\" created=5 updated=3 duration_ms=150\ntime=2025-12-23T12:30:05Z level=WARN msg=\"sync branch diverged\" local_ahead=2 remote_ahead=1\n```\n\n**JSON format (--log-json):**\n```json\n{\"time\":\"2025-12-23T12:30:00Z\",\"level\":\"INFO\",\"msg\":\"daemon started\",\"socket\":\"/tmp/bd.sock\",\"pid\":12345}\n{\"time\":\"2025-12-23T12:30:01Z\",\"level\":\"INFO\",\"msg\":\"import completed\",\"created\":5,\"updated\":3,\"duration_ms\":150}\n```\n\n## Migration Strategy\n\n1. **Add logger.go** with SetupLogger\n2. **Update daemon startup** to initialize slog\n3. **Convert one file at a time** (start with server.go)\n4. **Test after each file**\n5. **Remove old logging code** once all converted\n\n## Testing\n\n```bash\n# Start daemon with debug logging\nbd daemon start --log-level debug\n\n# Check logs\nbd daemons logs . | head -20\n\n# Test JSON output\nbd daemon start --log-json --log-level debug\nbd daemons logs . 
| jq .\n```\n\n## Success Criteria\n- All daemon logging uses slog\n- --log-level controls verbosity\n- --log-json produces machine-parseable output\n- Log entries have consistent structure\n- No fmt.Fprintf to stderr in daemon code","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T14:27:16.47144-08:00","updated_at":"2025-12-23T13:44:23.374935-08:00","closed_at":"2025-12-23T13:44:23.374935-08:00","dependencies":[{"issue_id":"bd-u2sc.4","depends_on_id":"bd-u2sc","type":"parent-child","created_at":"2025-12-22T14:27:16.471878-08:00","created_by":"daemon"},{"issue_id":"bd-u2sc.4","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:08.048156-08:00","created_by":"daemon"}]} +{"id":"bd-hlsw","title":"Add sync resilience guardrails for forced pushes and prefix mismatches","description":"Beads can get into unrecoverable sync states when remote forces pushes occur (e.g., rebases) combined with prefix mismatches from multi-worker scenarios. Add detection, prevention, and auto-recovery features to handle this gracefully.","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-14T10:40:14.872875259-07:00","updated_at":"2025-12-14T10:40:14.872875259-07:00"} +{"id":"bd-nurq","title":"Implement bd mol current command","description":"Show what molecule the agent should currently be working on. Referenced by gt-um6q, gt-lz13. 
Needed for molecule navigation workflow in templates.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-23T00:17:54.069983-08:00","updated_at":"2025-12-23T01:23:59.523404-08:00","closed_at":"2025-12-23T01:23:59.523404-08:00"} +{"id":"bd-kwjh","title":"Wisp storage: ephemeral molecule tracking","description":"Implement ephemeral molecule storage for patrol cycles.\n\n## Architecture\n\nWisps are ephemeral molecules stored in `.beads-wisps/` (gitignored).\nWhen squashed, they create digests in permanent `.beads/`.\n\n**Storage is per-rig, not per-role**: Witness and Refinery share mayor/rig's \n`.beads-wisps/` since they execute from that context.\n\n## Design Doc\nSee: gastown/mayor/rig/docs/wisp-architecture.md\n\n## Key Requirements\n\n1. **Ephemeral storage**: `.beads-wisps/` directory, gitignored\n2. **Bond with --wisp**: Creates in wisps instead of permanent\n3. **Squash**: Deletes wisp, creates digest in permanent beads\n4. **Burn**: Deletes wisp, no digest\n5. **Wisp commands**: `bd wisp list`, `bd wisp gc`\n\n## Storage Locations\n\n| Context | Location |\n|---------|----------|\n| Rig (Deacon, Witness, Refinery) | mayor/rig/.beads-wisps/ |\n| Polecat (if used) | polecats/\u003cname\u003e/.beads-wisps/ |\n\n## Children (to be created)\n- bd mol bond --wisp flag\n- .beads-wisps/ storage backend\n- bd mol squash handles wisp to permanent\n- bd wisp list command\n- bd wisp gc command (orphan cleanup)","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-21T23:34:47.188806-08:00","updated_at":"2025-12-22T01:12:53.965768-08:00","closed_at":"2025-12-22T01:12:53.965768-08:00"} +{"id":"bd-lk39","title":"Add composite index (issue_id, event_type) on events table","description":"GetCloseReason and GetCloseReasonsForIssues filter by both issue_id and event_type.\n\n**Query (queries.go:355-358):**\n```sql\nSELECT comment FROM events\nWHERE issue_id = ? 
AND event_type = ?\nORDER BY created_at DESC LIMIT 1\n```\n\n**Problem:** Currently uses idx_events_issue but must filter event_type in memory.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_events_issue_type ON events(issue_id, event_type);\n```\n\n**Priority:** Low - events table is typically small relative to issues.","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-22T22:58:54.070587-08:00","updated_at":"2025-12-22T23:15:13.841988-08:00","closed_at":"2025-12-22T23:15:13.841988-08:00","dependencies":[{"issue_id":"bd-lk39","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:54.071286-08:00","created_by":"daemon"}]} +{"id":"bd-a0cp","title":"Consider using types.Status in merge package for type safety","description":"The merge package uses string for status comparison (e.g., result.Status == closed, issue.Status == StatusTombstone). The types package defines Status as a type alias with validation. While the merge package needs its own Issue struct for JSONL flexibility, it could import and use types.Status for constants to get compile-time type checking. Current code: if left == closed || right == closed. Could be: if left == string(types.StatusClosed). This is low priority since string comparison works correctly. 
Files: internal/merge/merge.go:44, 488, 501-521","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-05T16:37:10.690424-08:00","updated_at":"2025-12-05T16:37:10.690424-08:00"} +{"id":"bd-lq2o","title":"Rebuild local binary","description":"Build and verify: go build -o bd ./cmd/bd \u0026\u0026 ./bd version","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.759506-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-d9mu","title":"Cross-rig external dependency support","description":"Support dependencies on issues in other rigs/repos.\n\n## Use Case\n\nGas Town issues often depend on Beads issues (and vice versa). Currently we use labels like `external:beads/bd-xxx` as documentation, but:\n- `bd blocked` doesn't recognize external deps\n- `bd ready` doesn't filter them out\n- No way to query cross-rig status\n\n## Proposed UX\n\n### Adding external deps\n```bash\n# New syntax for bd dep add\nbd dep add gt-a07f external:beads:bd-kwjh\n\n# Or maybe cleaner\nbd dep add gt-a07f --external beads:bd-kwjh\n```\n\n### Showing blocked status\n```bash\nbd blocked\n# β†’ gt-a07f blocked by external:beads:bd-kwjh (unverified)\n\n# With optional cross-rig query\nbd blocked --resolve-external\n# β†’ gt-a07f blocked by external:beads:bd-kwjh (closed) ← unblocked!\n```\n\n### Storage\nCould use:\n- Special dependency type: `type: external`\n- Label convention: `external:rig:id`\n- New field: `external_deps: [\"beads:bd-kwjh\"]`\n\n## Implementation Notes\n\nCross-rig queries would need:\n- Known rig locations (config or discovery)\n- Read-only beads access to external rigs\n- Caching to avoid repeated queries\n\nFor MVP, just recognizing external deps and marking them as 'unverified' blockers would be 
valuable.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-22T02:27:23.892706-08:00","updated_at":"2025-12-22T22:32:49.261551-08:00","closed_at":"2025-12-22T22:32:49.261551-08:00"} +{"id":"bd-n3v","title":"Error committing to sync branch: failed to create worktree","description":"\u003e bd sync --no-daemon\nβ†’ Exporting pending changes to JSONL...\nβ†’ Committing changes to sync branch 'beads-sync'...\nError committing to sync branch: failed to create worktree: failed to create worktree parent directory: mkdir /var/home/matt/dev/beads/fix-ci/.git: not a directory","notes":"**Problem Diagnosed**: The `bd sync` command was failing with \"mkdir /var/home/matt/dev/beads/fix-ci/.git: not a directory\" because it was being executed from the wrong directory.\n\n**Root Cause**: The command was run from `/var/home/matt/dev/beads` (where the `fix-ci` worktree exists) instead of the main repository directory `/var/home/matt/dev/beads/main`. Since `fix-ci` is a git worktree with a `.git` file (not directory), the worktree creation logic failed when trying to create `\u003ccurrent_dir\u003e/.git/beads-worktrees/\u003cbranch\u003e`.\n\n**Solution Verified**: Execute `bd sync` from the main repository directory:\n```bash\ncd main \u0026\u0026 bd sync --dry-run\n```\n","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-05T15:25:24.514998248-07:00","updated_at":"2025-12-05T15:42:32.910166956-07:00"} +{"id":"bd-r6a.2","title":"Implement subgraph cloning with variable substitution","description":"Core implementation of template instantiation:\n\n1. Add `bd template instantiate \u003ctemplate-id\u003e [--var key=value]...` command\n2. Implement subgraph loading:\n - Load template epic\n - Recursively load all children (and their children)\n - Load all dependencies between issues in the subgraph\n3. 
Implement variable substitution:\n - Scan titles and descriptions for `{{name}}` patterns\n - Replace with provided values\n - Error on missing required variables (or prompt interactively)\n4. Implement cloning:\n - Generate new IDs for all issues\n - Create cloned issues with substituted text\n - Remap and create dependencies\n5. Return the new epic ID\n\nConsider adding `--dry-run` flag to preview what would be created.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T22:43:25.179848-08:00","updated_at":"2025-12-17T23:02:29.034444-08:00","closed_at":"2025-12-17T23:02:29.034444-08:00","dependencies":[{"issue_id":"bd-r6a.2","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:43:25.180286-08:00","created_by":"daemon"},{"issue_id":"bd-r6a.2","depends_on_id":"bd-r6a.1","type":"blocks","created_at":"2025-12-17T22:44:03.15413-08:00","created_by":"daemon"}]} +{"id":"bd-qqc.8","title":"Create and push git tag v{{version}}","description":"Create the release tag and push it:\n\n```bash\ngit tag v{{version}}\ngit push origin v{{version}}\n```\n\nThis triggers the GoReleaser GitHub Action to build release binaries.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:34.659927-08:00","updated_at":"2025-12-18T22:47:14.054232-08:00","closed_at":"2025-12-18T22:47:14.054232-08:00","dependencies":[{"issue_id":"bd-qqc.8","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:42:34.660248-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.8","depends_on_id":"bd-vgi5","type":"blocks","created_at":"2025-12-18T22:43:21.209529-08:00","created_by":"daemon"}]} +{"id":"bd-umbf","title":"Design contributor namespace isolation for beads pollution prevention","description":"## Problem\n\nWhen contributors work on beads-the-project using beads-the-tool, their personal work-tracking issues leak into PRs. 
The .beads/issues.jsonl is intentionally tracked (it's the project's issue database), but contributors' local issues pollute the diff.\n\nThis is a recursion problem unique to self-hosting projects.\n\n## Possible Solutions to Explore\n\n1. **Contributor namespaces** - Each contributor gets a private prefix (e.g., `bd-steve-xxxx`) that's gitignored or filtered\n2. **Separate database** - Contributors use BEADS_DIR pointing elsewhere for personal tracking\n3. **Issue ownership/visibility flags** - Mark issues as \"local-only\" vs \"project\"\n4. **Prefix-based filtering** - Configure which prefixes are committed vs ignored\n\n## Design Considerations\n\n- Should be zero-friction for contributors (no manual setup)\n- Must not break existing workflows\n- Needs to work with sync/collaboration features\n- Consider: what if a \"personal\" issue graduates to \"project\" issue?\n\n## Expansion Needed\n\nThis is a placeholder. Needs detailed design exploration before implementation.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-13T18:00:29.638743-08:00","updated_at":"2025-12-13T18:00:41.345673-08:00"} +{"id":"bd-k88w","title":"Push version bump to GitHub","description":"git push origin main - triggers CI but no release yet.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.762574-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-6ns7","title":"test hook pin","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-23T04:39:16.619755-08:00","updated_at":"2025-12-23T04:51:29.436788-08:00","deleted_at":"2025-12-23T04:51:29.436788-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-s0qf","title":"GH#405: Fix prefix parsing with hyphens - multi-hyphen prefixes parsed incorrectly","description":"Fixed: 
ExtractIssuePrefix was falling back to first-hyphen for word-like suffixes, breaking multi-hyphen prefixes like 'hacker-news' and 'me-py-toolkit'.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:13:56.951359-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-aydr.6","title":"Add unit tests for reset package","description":"Comprehensive unit tests for internal/reset package.\n\n## Test Cases\n\n### ValidateState tests\n- .beads/ exists β†’ success\n- .beads/ missing β†’ appropriate error\n- git dirty state detection\n\n### CountImpact tests \n- Empty .beads/ β†’ zero counts\n- With issues β†’ correct count (open vs closed)\n- With tombstones β†’ correct count\n- Returns HasUncommitted correctly\n\n### Backup tests\n- Creates backup with correct timestamp format\n- Preserves all files and permissions\n- Returns correct path\n- Handles missing .beads/ gracefully\n- Errors on pre-existing backup dir\n\n### Git operation tests\n- CheckGitState detects dirty, detached, not-a-repo\n- GitRemoveBeads removes correct files\n- GitCommitReset creates commit with message\n- Operations skip gracefully when not in git repo\n\n### Reset tests (with mocks/temp dirs)\n- Soft reset removes files, calls init\n- Hard reset includes git operations\n- Dry run doesn't modify anything\n- SkipInit flag prevents re-initialization\n- Daemon killall is called\n- Backup is created when requested\n\n## Approach\n- Can start with interface definitions (TDD style)\n- Use testify for assertions\n- Create temp directories for isolation\n- Mock git operations where needed\n- Test completion depends on implementation tasks\n\n## File 
Location\n`internal/reset/reset_test.go`\n`internal/reset/backup_test.go`\n`internal/reset/git_test.go`","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:57.01739+11:00","updated_at":"2025-12-13T10:13:32.611698+11:00","closed_at":"2025-12-13T09:59:20.820314+11:00","dependencies":[{"issue_id":"bd-aydr.6","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:57.017813+11:00","created_by":"daemon"}]} +{"id":"bd-pbh.12","title":"Push commit and tag to origin","description":"```bash\ngit push origin main\ngit push origin v0.30.4\n```\n\nThis triggers GitHub Actions:\n- GoReleaser build\n- PyPI publish\n- npm publish\n\n\n```verify\ngit ls-remote origin refs/tags/v0.30.4 | grep -q 'v0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.066074-08:00","updated_at":"2025-12-17T21:46:46.301948-08:00","closed_at":"2025-12-17T21:46:46.301948-08:00","dependencies":[{"issue_id":"bd-pbh.12","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.066442-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.12","depends_on_id":"bd-pbh.11","type":"blocks","created_at":"2025-12-17T21:19:11.265986-08:00","created_by":"daemon"}]} +{"id":"bd-7m16","title":"GH#519: bd sync fails when sync.branch is currently checked-out branch","description":"bd sync tries to create worktree for sync.branch even when already on that branch. Should commit directly instead. 
See GitHub issue #519.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:36.613211-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-512v","title":"Verify release artifacts","description":"Check GitHub releases page - binaries for darwin/linux/windows should be available","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.067124-08:00","updated_at":"2025-12-21T13:53:49.35495-08:00","deleted_at":"2025-12-21T13:53:49.35495-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-754r","title":"Merge: bd-thgk","description":"branch: polecat/Compactor\ntarget: main\nsource_issue: bd-thgk\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:41:43.965771-08:00","updated_at":"2025-12-23T19:12:08.345449-08:00","closed_at":"2025-12-23T19:12:08.345449-08:00"} +{"id":"bd-2vh3","title":"Ephemeral issue cleanup and history compaction","description":"## Problem\n\nBeads history grows without bound. Every message, handoff, work assignment\nstays in issues.jsonl forever. 
Enterprise users will balk at \"git as database.\"\n\n## Solution: Two-Tier Cleanup\n\n### Tier 1: Ephemeral Cleanup (v1)\n\nbd cleanup --ephemeral --closed\n\n- Deletes closed issues where ephemeral=true from issues.jsonl\n- Safe: only removes explicitly marked ephemeral + closed\n- Preserves git history (commits still exist)\n- Run after swarm completion\n\n### Tier 2: History Compaction (v2)\n\nbd compact --squash\n\n- Rewrites issues.jsonl to remove tombstones\n- Optionally squashes git history (interactive rebase equivalent)\n- Preserves Merkle proofs for deleted items\n- Advanced: cold storage tiering\n\n## HOP Context\n\n| Layer | HOP Role | Persistence |\n|-------|----------|-------------|\n| Execution trace | None | Ephemeral |\n| Work scaffolding | None | Summarizable |\n| Work outcome | CV entry | Permanent |\n| Validation record | Stake proof | Permanent |\n\n\"Execution is ephemeral. Outcomes are permanent. You can't squash your CV.\"\n\n## Success Criteria\n\n- After cleanup --ephemeral: issues.jsonl only contains persistent work\n- Work outcomes preserved (CV entries)\n- Validation records preserved (stake proofs)\n- Execution scaffolding removed (transient coordination)","notes":"## Implementation Plan (REVISED after code review)\n\nSee history/EPHEMERAL_MOLECULES_DESIGN.md for comprehensive design + review.\n\n## Key Simplification\n\nAfter code review, Tier 1 is MUCH simpler than originally designed:\n\n- **Original**: Separate ephemeral repo with routing.ephemeral config\n- **Revised**: Just set Wisp: true in cloneSubgraph()\n\nThe wisp field and bd cleanup --wisp already exist\\!\n\n## Child Tasks (in dependency order)\n\n1. **bd-2vh3.2**: Tier 1 - Ephemeral spawning (SIMPLIFIED) [READY]\n - Just add Wisp: true to template.go:474\n - Add --persistent flag to opt out\n2. **bd-2vh3.3**: Tier 2 - Basic bd mol squash command\n3. **bd-2vh3.4**: Tier 3 - AI-powered squash summarization\n4. **bd-2vh3.5**: Tier 4 - Auto-squash on molecule completion\n5. 
**bd-2vh3.6**: Tier 5 - JSONL archive rotation (DEFERRED: post-1.0)\n\n## What Already Exists\n\n| Component | Location |\n|-----------|----------|\n| Ephemeral field | internal/types/types.go:45 |\n| bd cleanup --wisp | cmd/bd/cleanup.go:72 |\n| cloneSubgraph() | cmd/bd/template.go:456 |\n| loadTemplateSubgraph() | cmd/bd/template.go |\n\n## HOP Alignment\n\n'Execution is ephemeral. Outcomes are permanent. You can't squash your CV.'","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T21:02:20.101367-08:00","updated_at":"2025-12-21T17:50:02.958155-08:00","closed_at":"2025-12-21T17:50:02.958155-08:00"} +{"id":"bd-pbh.9","title":"Update hook templates to 0.30.4","description":"Update bd-hooks-version comment in all 4 hook templates:\n- cmd/bd/templates/hooks/pre-commit\n- cmd/bd/templates/hooks/post-merge\n- cmd/bd/templates/hooks/pre-push\n- cmd/bd/templates/hooks/post-checkout\n\nEach should have:\n```bash\n# bd-hooks-version: 0.30.4\n```\n\n\n```verify\ngrep -l 'bd-hooks-version: 0.30.4' cmd/bd/templates/hooks/* | wc -l | grep -q '4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.0248-08:00","updated_at":"2025-12-17T21:46:46.27561-08:00","closed_at":"2025-12-17T21:46:46.27561-08:00","dependencies":[{"issue_id":"bd-pbh.9","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.025124-08:00","created_by":"daemon"}]} +{"id":"bd-usro","title":"Rename 'template instantiate' to 'mol bond'","description":"Rename the template instantiation command to match molecule metaphor.\n\nCurrent: bd template instantiate \u003cid\u003e --var key=value\nTarget: bd mol bond \u003cid\u003e --var key=value\n\nChanges needed:\n- Add 'mol' command group (or extend existing)\n- Add 'bond' subcommand that wraps template instantiate logic\n- Keep 'template instantiate' as deprecated alias for backward compat\n- Update help text and docs to use molecule terminology\n\nThe 'bond' verb captures:\n1. 
Chemistry metaphor (molecules bond to form structures)\n2. Dependency linking (child issues bonded in a DAG)\n3. Short and active\n\nSee also: molecule execution model in Gas Town","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T16:56:37.582795-08:00","updated_at":"2025-12-20T23:22:43.567337-08:00","closed_at":"2025-12-20T23:22:43.567337-08:00"} +{"id":"bd-uqfn","title":"Work on beads-wkt: Output control parameters for MCP tool...","description":"Work on beads-wkt: Output control parameters for MCP tools (GH#622). Add brief, fields, max_description_length params to ready/list/show. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:57:10.675535-08:00","updated_at":"2025-12-20T00:49:51.929271-08:00","closed_at":"2025-12-19T23:28:25.362931-08:00"} +{"id":"bd-xo1o.2","title":"WaitsFor directive: Fanout gate for dynamic children","description":"Implement WaitsFor directive for molecules that spawn dynamic children.\n\n## Syntax\n```markdown\n## Step: aggregate\nCollect outcomes from all dynamically-bonded children.\nWaitsFor: all-children\nNeeds: survey-workers\n```\n\n## Behavior\n1. Parse WaitsFor directive during molecule step parsing\n2. Track which steps spawn dynamic children (the spawner)\n3. Gate step waits until ALL children of the spawner complete\n4. 
Works with bd ready - gate step not ready until children done\n\n## Gate Types\n- `WaitsFor: all-children` - Wait for all dynamic children\n- `WaitsFor: any-children` - Proceed when first child completes (future)\n- `WaitsFor: \u003cstep-ref\u003e.children` - Wait for specific spawner's children\n\n## Integration\n- bd ready should skip gate steps until children complete\n- bd show \u003cmol\u003e should display gate status and child count","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T02:33:14.946475-08:00","updated_at":"2025-12-23T04:00:09.443106-08:00","closed_at":"2025-12-23T04:00:09.443106-08:00","dependencies":[{"issue_id":"bd-xo1o.2","depends_on_id":"bd-xo1o","type":"parent-child","created_at":"2025-12-23T02:33:14.950008-08:00","created_by":"daemon"}]} +{"id":"bd-kwjh.3","title":"bd mol bond --ephemeral flag","description":"Add --ephemeral flag to bd mol bond command.\n\n## Behavior\n- `bd mol bond \u003cproto\u003e --ephemeral` creates molecule in .beads-ephemeral/\n- Without flag, creates in .beads/ (current behavior)\n- Ephemeral molecules have `ephemeral: true` in their issue record\n\n## Implementation\n- Add --ephemeral bool flag to mol bond command\n- Route to EphemeralStore when flag set\n- Set ephemeral:true on created issue\n\n## Testing\n- Test mol bond creates in correct location\n- Test ephemeral flag is persisted\n- Test regular mol bond still works","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T00:07:26.591728-08:00","updated_at":"2025-12-22T00:17:42.50719-08:00","closed_at":"2025-12-22T00:17:42.50719-08:00","dependencies":[{"issue_id":"bd-kwjh.3","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:26.592102-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.3","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:26.592866-08:00","created_by":"daemon"}]} +{"id":"bd-y8tn","title":"Test Molecule","description":"A test 
molecule","status":"closed","priority":2,"issue_type":"molecule","created_at":"2025-12-19T18:30:24.491279-08:00","updated_at":"2025-12-19T18:31:12.49898-08:00","closed_at":"2025-12-19T18:31:12.49898-08:00"} +{"id":"bd-aydr.7","title":"Add integration tests for bd reset command","description":"End-to-end integration tests for the reset command.\n\n## Test Scenarios\n\n### Basic reset\n1. Init beads, create some issues\n2. Run bd reset --force\n3. Verify .beads/ is fresh, issues gone\n\n### Hard reset\n1. Init beads, create issues, commit\n2. Run bd reset --hard --force \n3. Verify git history has reset commits\n\n### Backup functionality\n1. Init beads, create issues\n2. Run bd reset --backup --force\n3. Verify backup exists with correct contents\n4. Verify main .beads/ is reset\n\n### Dry run\n1. Init beads, create issues\n2. Run bd reset --dry-run\n3. Verify nothing changed\n\n### Confirmation prompt\n1. Init beads\n2. Run bd reset (no --force)\n3. Verify prompts for confirmation\n4. Test both y and n responses\n\n## Location\ntests/integration/reset_test.go or similar","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:58.479282+11:00","updated_at":"2025-12-13T06:24:29.561908-08:00","closed_at":"2025-12-13T10:15:59.221637+11:00","dependencies":[{"issue_id":"bd-aydr.7","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:58.479686+11:00","created_by":"daemon"},{"issue_id":"bd-aydr.7","depends_on_id":"bd-aydr.4","type":"blocks","created_at":"2025-12-13T08:45:11.15972+11:00","created_by":"daemon"}]} +{"id":"bd-rze6","title":"Digest: Release v0.34.0 @ 2025-12-22 12:16","description":"Released v0.34.0: wisp commands, chemistry UX, cross-project deps","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T12:16:53.033119-08:00","updated_at":"2025-12-22T12:16:53.033119-08:00","closed_at":"2025-12-22T12:16:53.033025-08:00"} +{"id":"bd-8g8","title":"Fix G304 potential file inclusion in 
cmd/bd/tips.go:259","description":"Linting issue: G304: Potential file inclusion via variable (gosec) at cmd/bd/tips.go:259:18. Error: if data, err := os.ReadFile(settingsPath); err == nil {","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:34:57.189730843-07:00","updated_at":"2025-12-17T23:13:40.534569-08:00","closed_at":"2025-12-17T16:46:11.029837-08:00"} +{"id":"bd-dwh","title":"Implement or remove ExpectExit/ExpectStdout verification fields","description":"The Verification struct in internal/types/workflow.go has ExpectExit and ExpectStdout fields that are never used by workflowVerifyCmd. Either implement the functionality or remove the dead fields.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-17T22:23:02.708627-08:00","updated_at":"2025-12-17T22:34:07.300348-08:00","closed_at":"2025-12-17T22:34:07.300348-08:00"} +{"id":"bd-cnwx","title":"Refactor mol.go: split 1200+ line file into subcommands","description":"## Problem\n\ncmd/bd/mol.go has grown to 1200+ lines with all molecule subcommands in one file.\n\n## Current State\n- mol.go: 1218 lines (bond, spawn, run, distill, catalog, show, etc.)\n- Hard to navigate, review, and maintain\n\n## Proposed Structure\nSplit into separate files by subcommand:\n```\ncmd/bd/\nβ”œβ”€β”€ mol.go # Root command, shared helpers\nβ”œβ”€β”€ mol_bond.go # bd mol bond\nβ”œβ”€β”€ mol_spawn.go # bd mol spawn \nβ”œβ”€β”€ mol_run.go # bd mol run\nβ”œβ”€β”€ mol_distill.go # bd mol distill\nβ”œβ”€β”€ mol_catalog.go # bd mol catalog\nβ”œβ”€β”€ mol_show.go # bd mol show\n└── mol_test.go # Tests (already separate)\n```\n\n## Benefits\n- Easier code review\n- Better separation of concerns\n- Simpler navigation\n- Each subcommand self-contained","status":"closed","priority":2,"issue_type":"chore","created_at":"2025-12-21T11:30:58.832192-08:00","updated_at":"2025-12-21T11:42:49.390824-08:00","closed_at":"2025-12-21T11:42:49.390824-08:00"} +{"id":"bd-e1085716","title":"bd validate - 
Comprehensive health check","description":"Run all validation checks in one command.\n\nChecks:\n- Duplicates\n- Orphaned dependencies\n- Test pollution\n- Git conflicts\n\nSupports --fix-all for auto-repair.\n\nDepends on bd-cbed9619.1, bd-0dcea000, bd-31aab707, bd-9826b69a.\n\nFiles: cmd/bd/validate.go (new)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-29T23:05:13.980679-07:00","updated_at":"2025-12-17T22:58:34.562008-08:00","closed_at":"2025-12-17T22:58:34.562008-08:00"} +{"id":"bd-jv4w","title":"Phase 1.2: Separate bdt executable - Initial structure","description":"Create minimal bdt command structure completely separate from bd. Must not share code, config, or database.\n\n## Subtasks\n1. Create cmd/bdt/ directory with main.go\n2. Implement bdt version, help, and status commands\n3. Configure separate database location: $HOME/.bdt/ (not $HOME/.beads/)\n4. Create separate issues file: issues.toon (not issues.jsonl)\n5. Update build system:\n - Makefile: Add bdt target\n - .goreleaser.yml: Add bdt binary config\n\n## Files to Create\n- cmd/bdt/main.go - Entry point\n- cmd/bdt/version.go - Version handling\n- cmd/bdt/help.go - Help text (separate from bd)\n\n## Success Criteria\n- `make build` produces both `bd` and `bdt` executables\n- `bdt version` shows distinct version output from bd\n- `bdt --help` shows distinct help text\n- bdt uses $HOME/.bdt/ directory (verify with `bdt info`)\n- bd and bdt completely isolated (no shared imports beyond stdlib)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T11:48:34.866877282-07:00","updated_at":"2025-12-19T12:59:11.389296015-07:00","closed_at":"2025-12-19T12:59:11.389296015-07:00"} +{"id":"bd-kwro.3","title":"Graph Link: relates_to for knowledge graph","description":"Implement relates_to link type for loose associations.\n\nNew command:\n- bd relate \u003cid1\u003e \u003cid2\u003e - creates bidirectional relates_to link\n\nQuery support:\n- bd show \u003cid\u003e 
--related shows related issues\n- bd list --related-to \u003cid\u003e\n\nStorage:\n- relates_to stored as JSON array of issue IDs\n- Consider: separate junction table for efficiency at scale?\n\nThis enables 'see also' connections without blocking or hierarchy.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:01:30.793115-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-czss","title":"Update CHANGELOG.md with release notes","description":"Add meaningful release notes to CHANGELOG.md describing what changed in {{version}}","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:55:59.909641-08:00","updated_at":"2025-12-20T17:59:26.262153-08:00","closed_at":"2025-12-20T01:23:51.407302-08:00","dependencies":[{"issue_id":"bd-czss","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:14.862724-08:00","created_by":"daemon"},{"issue_id":"bd-czss","depends_on_id":"bd-qkw9","type":"blocks","created_at":"2025-12-19T22:56:23.145894-08:00","created_by":"daemon"}]} +{"id":"bd-ipj7","title":"enhance 'bd status' to show recent activity","description":"It would be nice to be able to quickly view the last N changes in the database, to see what's recently been worked on. I'm imagining something like 'bd status activity'.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-21T11:08:50.996541974-07:00","updated_at":"2025-12-21T17:54:00.279039-08:00","closed_at":"2025-12-21T17:54:00.279039-08:00"} +{"id":"bd-bs5j","title":"Release v0.33.2","description":"Version bump workflow for beads release 0.33.2.\n\n## Variables\n- `0.33.2` - The new version number (e.g., 0.31.0)\n- `2025-12-21` - Release date (YYYY-MM-DD format)\n\n## Workflow Steps\n1. Kill running daemons\n2. Run tests and linting\n3. Bump version in all files (10 files total)\n4. 
Update cmd/bd/info.go with release notes\n5. Commit and push version bump\n6. Create and push git tag\n7. Update Homebrew formula\n8. Upgrade local Homebrew installation\n9. Verify installation\n\n## Files Updated by bump-version.sh\n- cmd/bd/version.go\n- .claude-plugin/plugin.json\n- .claude-plugin/marketplace.json\n- integrations/beads-mcp/pyproject.toml\n- integrations/beads-mcp/src/beads_mcp/__init__.py\n- README.md\n- npm-package/package.json\n- cmd/bd/templates/hooks/* (4 files)\n- CHANGELOG.md\n\n## Manual Step Required\n- cmd/bd/info.go - Add versionChanges entry with release notes","status":"tombstone","priority":1,"issue_type":"epic","created_at":"2025-12-21T16:10:13.759062-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"epic"} +{"id":"bd-ncwo","title":"Ghost resurrection: remote status:closed wins during git merge","description":"During bd sync, the 3-way git merge sometimes keeps remote's status:closed instead of local's status:tombstone. This causes ghost issues to resurrect even when tombstones exist.\n\nRoot cause: Git 3-way merge doesn't understand tombstone semantics. If base had closed, local changed to tombstone, and remote has closed, git might keep remote's version.\n\nThe early tombstone check in importer.go only prevents CREATION when tombstones exist in DB. But if applyDeletionsFromMerge hard-deletes the tombstones before import runs (because they're not in the merged result), the check doesn't help.\n\nPotential fixes:\n1. Make tombstones 'win' in the beads merge driver (internal/merge/merge.go)\n2. Don't hard-delete tombstones in applyDeletionsFromMerge if they're in the DB\n3. 
Export tombstones to a separate file that's not subject to merge\n\nGhost issues: bd-cb64c226.*, bd-cbed9619.*","status":"tombstone","priority":1,"issue_type":"bug","created_at":"2025-12-16T22:01:03.56423-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-of2p","title":"Improve version bump molecule with missing steps","description":"During v0.32.1 release, discovered missing steps in the release molecule:\n\n**Missing from molecule:**\n1. Rebuild ~/go/bin/bd (only did ~/.local/bin/bd)\n2. Install beads-mcp from local source: `uv tool install --reinstall ./integrations/beads-mcp`\n3. Restart daemons with `bd daemons killall`\n4. (Optional) Publish beads-mcp to PyPI\n\n**Current molecule steps (bd-6s61):**\n1. Update CHANGELOG.md\n2. Update info.go versionChanges \n3. Run bump-version.sh\n4. Run tests and linting\n5. Update local installation\n6. Commit and push release\n7. Wait for CI\n8. Verify release artifacts\n\n**Proposed additions:**\n- After \"Update local installation\": rebuild BOTH ~/.local/bin/bd AND ~/go/bin/bd\n- Add: \"Install beads-mcp from source\" step\n- Add: \"Restart daemons\" step\n- Add: \"Verify all versions match\" step that checks all artifacts\n\n**Also learned:**\n- Must run from mayor/rig to avoid git conflicts with bd sync (already documented in bump-version.sh)","notes":"CORRECTION: npm publishing IS automated and working!\n\n**Package naming:**\n- OLD: `beads` (npm) - deprecated, stuck at 0.2.1\n- CURRENT: `@beads/bd` (npm) - scoped package, auto-published by CI\n\n**How it works:**\n- CI uses OIDC trusted publishing (no token needed)\n- Workflow: .github/workflows/release.yml β†’ publish-npm job\n- Permissions: `id-token: write` enables GitHub OIDC\n- To install: `npm install -g @beads/bd` (not `npm install beads`)\n\n**All publishing is automated on tag push:**\n1. 
GitHub Release - goreleaser βœ“\n2. PyPI - publish-pypi job βœ“\n3. Homebrew - update-homebrew job βœ“\n4. npm (@beads/bd) - publish-npm job βœ“\n\n**Remaining molecule improvements (local steps only):**\n- Rebuild BOTH ~/.local/bin/bd AND ~/go/bin/bd\n- Install beads-mcp from source: `uv tool install --reinstall ./integrations/beads-mcp`\n- Restart daemons: `bd daemons killall`\n- Run from mayor/rig to avoid git conflicts with bd sync\n- Final verification step to check all local versions match","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-20T22:09:11.845787-08:00","updated_at":"2025-12-22T16:01:18.199132-08:00","closed_at":"2025-12-22T16:01:18.199132-08:00"} +{"id":"bd-aydr.1","title":"Implement core reset package (internal/reset)","description":"Create the core reset logic in internal/reset/ package.\n\n## Responsibilities\n- ResetOptions struct with all flag options\n- CountImpact() - count issues/tombstones that will be deleted\n- ValidateState() - check .beads/ exists, check git dirty state\n- ExecuteReset() - main reset logic (without CLI concerns)\n- Integrate with daemon killall\n\n## Interface Design\n```go\ntype ResetOptions struct {\n Hard bool // Include git operations (git rm, commit)\n Backup bool // Create backup before reset\n DryRun bool // Preview only, don't execute\n SkipInit bool // Don't re-initialize after reset\n}\n\ntype ResetResult struct {\n IssuesDeleted int\n TombstonesDeleted int\n BackupPath string // if backup was created\n DaemonsKilled int\n}\n\ntype ImpactSummary struct {\n IssueCount int\n OpenCount int\n ClosedCount int\n TombstoneCount int\n HasUncommitted bool // git dirty state\n}\n\nfunc Reset(opts ResetOptions) (*ResetResult, error)\nfunc CountImpact() (*ImpactSummary, error)\nfunc ValidateState() error\n```\n\n## IMPORTANT: CLI vs Core Separation\n- `Force` (skip confirmation) is NOT in ResetOptions - that's a CLI concern\n- Core always executes when called; CLI decides whether to prompt 
first\n- Keep CLI-agnostic: no prompts, no colored output, no user interaction\n- Return errors for CLI to handle with user-friendly messages\n- Unit testable in isolation\n\n## Dependencies\n- Uses daemon.KillAllDaemons() from internal/daemon/\n- Calls bd init logic after reset (unless SkipInit)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:44:50.145364+11:00","updated_at":"2025-12-13T10:13:32.610253+11:00","closed_at":"2025-12-13T09:20:06.184893+11:00","dependencies":[{"issue_id":"bd-aydr.1","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:50.145775+11:00","created_by":"daemon"}]} +{"id":"bd-lz49","title":"Add gate fields: await_type, await_id, timeout, waiters","description":"Add gate-specific fields to the Issue type.\n\n## New Fields\n- await_type: string - Type of condition (gh:run, gh:pr, timer, human, mail)\n- await_id: string - Identifier for the condition\n- timeout: duration - Max time to wait before escalation\n- waiters: []string - Mail addresses to notify when gate clears\n\n## Implementation\n- Add fields to Issue struct in internal/types/types.go\n- Update SQLite schema for new columns\n- Add JSONL serialization/deserialization\n- Update import/export logic","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T11:44:32.720196-08:00","updated_at":"2025-12-23T12:00:03.837691-08:00","closed_at":"2025-12-23T12:00:03.837691-08:00","dependencies":[{"issue_id":"bd-lz49","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.738823-08:00","created_by":"daemon"},{"issue_id":"bd-lz49","depends_on_id":"bd-2v0f","type":"blocks","created_at":"2025-12-23T11:44:56.269351-08:00","created_by":"daemon"}]} +{"id":"bd-dsp","title":"Test stdin body-file","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-17T17:27:32.098806-08:00","updated_at":"2025-12-17T17:28:33.832749-08:00","closed_at":"2025-12-17T17:28:33.832749-08:00"} 
+{"id":"bd-m8ro","title":"Improve test coverage for internal/rpc (47.5% β†’ 60%)","description":"The RPC package has only 47.5% test coverage. RPC is the communication layer for daemon operations.\n\nCurrent coverage: 47.5%\nTarget coverage: 60%","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T20:43:09.515299-08:00","updated_at":"2025-12-23T22:29:35.837758-08:00","closed_at":"2025-12-23T20:45:11.261393-08:00"} +{"id":"bd-379","title":"Implement `bd setup cursor` for Cursor IDE integration","description":"Create a `bd setup cursor` command that integrates Beads workflow into Cursor IDE via .cursorrules file. Unlike Claude Code (which has hooks), Cursor uses a static rules file to provide context to its AI.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-11-11T23:32:22.170083-08:00","updated_at":"2025-11-11T23:32:22.170083-08:00"} +{"id":"bd-dp4w","title":"Test message","description":"This is a test message body","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:11:58.467876-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} +{"id":"bd-siz1","title":"GH#532: bd sync circular error (suggests running bd sync)","description":"bd sync error message recommends running bd sync to fix the bd sync error. Fix error handling to provide useful guidance. See GitHub issue #532.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:04:00.543573-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-kzda","title":"Implement conditional bond type for mol bond","description":"The mol bond command accepts 'conditional' as a bond type but doesn't implement any conditional-specific behavior. 
It currently behaves identically to 'parallel'.\n\n**Expected behavior:**\nConditional bonds should mean 'B runs only if A fails' per the help text (mol.go:318).\n\n**Implementation needed:**\n- Add failure-condition dependency handling\n- Possibly new dependency type or status-based blocking\n- Update bondProtoProto, bondProtoMol, bondMolMol to handle conditional\n\n**Alternative:**\nRemove 'conditional' from valid bond types until implemented.\n\nThis is new functionality, not a regression.","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-12-21T10:23:01.966367-08:00","updated_at":"2025-12-23T01:33:25.734264-08:00","closed_at":"2025-12-23T01:33:25.734264-08:00"} +{"id":"bd-n4td","title":"Add warning when staleness check errors","description":"## Problem\n\nWhen ensureDatabaseFresh() calls CheckStaleness() and it errors (corrupted metadata, permission issues, etc.), we silently proceed with potentially stale data.\n\n**Location:** cmd/bd/staleness.go:27-32\n\n**Scenarios:**\n- Corrupted metadata table\n- Database locked by another process \n- Permission issues reading JSONL file\n- Invalid last_import_time format in DB\n\n## Current Code\n\n```go\nisStale, err := autoimport.CheckStaleness(ctx, store, dbPath)\nif err \\!= nil {\n // If we can't determine staleness, allow operation to proceed\n // (better to show potentially stale data than block user)\n return nil\n}\n```\n\n## Fix\n\n```go\nisStale, err := autoimport.CheckStaleness(ctx, store, dbPath)\nif err \\!= nil {\n fmt.Fprintf(os.Stderr, \"Warning: Could not verify database freshness: %v\\n\", err)\n fmt.Fprintf(os.Stderr, \"Proceeding anyway. 
Data may be stale.\\n\\n\")\n return nil\n}\n```\n\n## Impact\nMedium - users should know when staleness check fails\n\n## Effort\nEasy - 5 minutes","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-11-20T20:16:34.889997-05:00","updated_at":"2025-12-17T23:13:40.531031-08:00","closed_at":"2025-12-17T19:11:12.950618-08:00","dependencies":[{"issue_id":"bd-n4td","depends_on_id":"bd-2q6d","type":"blocks","created_at":"2025-11-20T20:18:20.154723-05:00","created_by":"stevey"}]} +{"id":"bd-s2t","title":"wish: a 'continue' or similar cmd/flag which means alter last issue","description":"so many times I create an issue and then have another thought: 'oh, before I did X and it crashed there was ZZZ happening' or 'actually that is P4 not P2'. It would be nice if when `bd {cmd}` is used without a {title} or {id} it just adds or updates the most recently touched issue.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-08T06:46:37.529160416-07:00","updated_at":"2025-12-08T06:46:37.529160416-07:00"} +{"id":"bd-7b7h","title":"bd sync --merge fails due to chicken-and-egg: .beads/ always dirty","description":"## Problem\n\nWhen sync.branch is configured (e.g., beads-sync), the bd sync workflow creates a chicken-and-egg problem:\n\n1. `bd sync` commits changes to beads-sync via worktree\n2. `bd sync` copies JSONL to main working dir via `copyJSONLToMainRepo()` (sync.go line 364, worktree.go line 678-685)\n3. The copy is NOT committed to main - it just updates the working tree\n4. `bd sync --merge` checks for clean working dir (sync.go line 1547-1548)\n5. 
`bd sync --merge` FAILS because .beads/issues.jsonl is uncommitted!\n\n## Impact\n\n- sync.branch workflow is fundamentally broken\n- Users cannot periodically merge beads-sync β†’ main\n- Main branch always shows as dirty\n- Creates confusion about git state\n\n## Root Cause\n\nsync.go:1547-1548:\n```go\nif len(strings.TrimSpace(string(statusOutput))) \u003e 0 {\n return fmt.Errorf(\"main branch has uncommitted changes, please commit or stash them first\")\n}\n```\n\nThis check blocks merge when ANY uncommitted changes exist, including the .beads/ changes that `bd sync` itself created.\n\n## Proposed Fix\n\nOption A: Exclude .beads/ from the clean check in `mergeSyncBranch`:\n```go\n// Check if there are non-beads uncommitted changes\nstatusCmd := exec.CommandContext(ctx, \"git\", \"status\", \"--porcelain\", \"--\", \":!.beads/\")\n```\n\nOption B: Auto-stash .beads/ changes before merge, restore after\n\nOption C: Change the workflow - do not copy JSONL to main working dir, instead always read from worktree\n\n## Files to Modify\n\n- cmd/bd/sync.go:1540-1549 (mergeSyncBranch function)\n- Possibly internal/syncbranch/worktree.go (copyJSONLToMainRepo)","notes":"## Fix Implemented\n\nModified cmd/bd/sync.go mergeSyncBranch function:\n\n1. **Exclude .beads/ from dirty check** (line 1543):\n Changed `git status --porcelain` to `git status --porcelain -- :!.beads/`\n This allows merge to proceed when only .beads/ has uncommitted changes.\n\n2. 
**Restore .beads/ to HEAD before merge** (lines 1553-1561):\n Added `git checkout HEAD -- .beads/` before merge to prevent\n \"Your local changes would be overwritten by merge\" errors.\n The .beads/ changes are redundant since they came FROM beads-sync.\n\n## Testing\n\n- All cmd/bd sync/merge tests pass\n- All internal/syncbranch tests pass\n- Manual verification needed for full workflow","status":"tombstone","priority":0,"issue_type":"bug","created_at":"2025-12-16T23:06:06.97703-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-987a","title":"bd mol run: panic slice bounds out of range in mol_run.go:130","description":"## Problem\nbd mol run panics after successfully creating the molecule:\n\n```\nβœ“ Molecule running: created 9 issues\n Root issue: gt-i4lo (pinned, in_progress)\n Assignee: stevey\n\nNext steps:\n bd ready # Find unblocked work in this molecule\npanic: runtime error: slice bounds out of range [:8] with length 7\n\ngoroutine 1 [running]:\nmain.runMolRun(0x1014fc0c0, {0x140001e0f80, 0x1, 0x10089daad?})\n /Users/stevey/gt/beads/crew/dave/cmd/bd/mol_run.go:130 +0xc38\n```\n\n## Reproduction\n```bash\nbd --no-daemon mol run gt-lwuu --var issue=gt-test123\n```\nWhere gt-lwuu is a mol-polecat-work proto with 8 child steps.\n\n## Impact\nThe molecule IS created successfully - the panic happens after creation when formatting the \"Next steps\" output.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T21:48:55.396018-08:00","updated_at":"2025-12-21T22:57:46.827469-08:00","closed_at":"2025-12-21T22:57:46.827469-08:00"} +{"id":"bd-wp5j","title":"Merge: bd-indn","description":"branch: polecat/rictus\ntarget: main\nsource_issue: bd-indn\nrig: 
beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:45:51.286598-08:00","updated_at":"2025-12-23T21:21:57.697826-08:00","closed_at":"2025-12-23T21:21:57.697826-08:00"} +{"id":"bd-fcl1","title":"Merge: bd-au0.5","description":"branch: polecat/Searcher\ntarget: main\nsource_issue: bd-au0.5\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:39:11.946667-08:00","updated_at":"2025-12-23T19:12:08.346454-08:00","closed_at":"2025-12-23T19:12:08.346454-08:00"} +{"id":"bd-is6m","title":"Add gate checking to Deacon patrol loop","description":"Integrate gate checking into Deacon's patrol cycle.\n\n## Patrol Integration\n```go\nfunc (d *Deacon) checkGates(ctx context.Context) {\n gates, _ := d.store.ListOpenGates(ctx)\n \n for _, gate := range gates {\n // Check timeout\n if time.Since(gate.CreatedAt) \u003e gate.Timeout {\n d.notifyWaiters(gate, \"timeout\")\n d.closeGate(gate, \"timed out\")\n continue\n }\n \n // Check condition\n if d.checkCondition(gate.AwaitType, gate.AwaitID) {\n d.notifyWaiters(gate, \"cleared\")\n d.closeGate(gate, \"condition met\")\n }\n }\n}\n```\n\n## Note\nThis task is in Gas Town (gt), not beads. 
May need to be moved there.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:36.839709-08:00","updated_at":"2025-12-23T12:19:44.204647-08:00","closed_at":"2025-12-23T12:19:44.204647-08:00","dependencies":[{"issue_id":"bd-is6m","depends_on_id":"bd-u66e","type":"blocks","created_at":"2025-12-23T11:44:56.428084-08:00","created_by":"daemon"},{"issue_id":"bd-is6m","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.909253-08:00","created_by":"daemon"}]} +{"id":"bd-5rj1","title":"Merge: bd-gqxd","description":"branch: polecat/furiosa\ntarget: main\nsource_issue: bd-gqxd\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T16:40:21.707706-08:00","updated_at":"2025-12-23T19:12:08.349245-08:00","closed_at":"2025-12-23T19:12:08.349245-08:00"} +{"id":"bd-687v","title":"Consider caching external dep resolution results","description":"Each call to GetReadyWork re-checks all external dependencies by:\n1. Querying for external deps in the local database\n2. Opening each external project's database\n3. Querying for closed issues with provides: labels\n\nFor workloads with many external deps or slow external databases, this adds latency on every bd ready call.\n\nPotential optimizations:\n- In-memory TTL cache for external dep status (e.g., 60 second TTL)\n- Store resolved status in a local cache table with timestamp\n- Batch resolution of common project/capability pairs\n\nThis is not urgent - current implementation is correct and performant for typical workloads. 
Only becomes an issue with many external deps across many projects.","status":"deferred","priority":3,"issue_type":"task","created_at":"2025-12-21T23:45:16.360877-08:00","updated_at":"2025-12-23T12:27:02.223409-08:00","dependencies":[{"issue_id":"bd-687v","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:16.361493-08:00","created_by":"daemon"}]} +{"id":"bd-dqu8","title":"Restart running daemons","description":"Kill and restart any running bd daemons to pick up new version: pkill -f 'bd daemon' \u0026\u0026 bd daemon --start","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T00:32:26.559311-08:00","updated_at":"2025-12-20T00:32:59.123766-08:00","closed_at":"2025-12-20T00:32:59.123766-08:00","dependencies":[{"issue_id":"bd-dqu8","depends_on_id":"bd-fgw3","type":"blocks","created_at":"2025-12-20T00:32:39.427846-08:00","created_by":"daemon"},{"issue_id":"bd-dqu8","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-20T00:32:39.36213-08:00","created_by":"daemon"}]} +{"id":"bd-5hrq","title":"bd doctor: detect issues referenced in commits but still open","description":"Add a doctor check that finds 'orphaned' issues - ones referenced in git commit messages (e.g., 'fix bug (bd-xxx)') but still marked as open in beads.\n\n**Detection logic:**\n1. Get all open issue IDs from beads\n2. Parse git log for issue ID references matching pattern \\(prefix-[a-z0-9.]+\\)\n3. Report issues that appear in commits but are still open\n\n**Output:**\n⚠ Warning: N issues referenced in commits but still open\n bd-xxx: 'Issue title' (commit abc123)\n bd-yyy: 'Issue title' (commit def456)\n \n These may be implemented but not closed. 
Run 'bd show \u003cid\u003e' to check.\n\n**Implementation:**\n- Add check to doctor/checks.go\n- Use git log parsing (already have git utilities)\n- Match against configured issue_prefix","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-21T21:48:08.473165-08:00","updated_at":"2025-12-21T21:55:37.795109-08:00","closed_at":"2025-12-21T21:55:37.795109-08:00"} +{"id":"bd-8x3w","title":"Add composite index (issue_id, type) on dependencies table","description":"GetBlockedIssues uses EXISTS clauses that filter by issue_id AND type together.\n\n**Query pattern (ready.go:427-429):**\n```sql\nEXISTS (\n SELECT 1 FROM dependencies d2\n WHERE d2.issue_id = i.id AND d2.type = 'blocks'\n)\n```\n\n**Problem:** Only idx_dependencies_issue exists. SQLite must filter type after index lookup.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_dependencies_issue_type ON dependencies(issue_id, type);\n```\n\n**Note:** This complements the existing idx_dependencies_depends_on_type for the reverse direction.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T22:58:52.876846-08:00","updated_at":"2025-12-22T23:15:13.840789-08:00","closed_at":"2025-12-22T23:15:13.840789-08:00","dependencies":[{"issue_id":"bd-8x3w","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:52.877536-08:00","created_by":"daemon"}]} +{"id":"bd-au0.8","title":"Improve clean vs cleanup command naming/documentation","description":"Clarify the difference between bd clean and bd cleanup to reduce user confusion.\n\n**Current state:**\n- bd clean: Remove temporary artifacts (.beads/bd.sock, logs, etc.)\n- bd cleanup: Delete old closed issues from database\n\n**Options:**\n1. Rename for clarity:\n - bd clean β†’ bd clean-temp\n - bd cleanup β†’ bd cleanup-issues\n \n2. Keep names but improve help text and documentation\n\n3. 
Add prominent warnings in help output\n\n**Preferred approach:** Option 2 (improve documentation)\n- Update short/long descriptions in commands\n- Add examples to help text\n- Update README.md\n- Add cross-references in help output\n\n**Files to modify:**\n- cmd/bd/clean.go\n- cmd/bd/cleanup.go\n- README.md or ADVANCED.md","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-21T21:07:49.960534-05:00","updated_at":"2025-11-21T21:07:49.960534-05:00","dependencies":[{"issue_id":"bd-au0.8","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:49.962743-05:00","created_by":"daemon"}]} +{"id":"bd-pbh.3","title":"Add 0.30.4 to info.go release notes","description":"Update cmd/bd/info.go versionChanges map with release notes for 0.30.4.\nInclude any workflow-impacting changes for --whats-new output.\n","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.966781-08:00","updated_at":"2025-12-17T21:46:46.222445-08:00","closed_at":"2025-12-17T21:46:46.222445-08:00","dependencies":[{"issue_id":"bd-pbh.3","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.967287-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.3","depends_on_id":"bd-pbh.2","type":"blocks","created_at":"2025-12-17T21:19:11.149584-08:00","created_by":"daemon"}]} +{"id":"bd-vks2","title":"bd dep tree doesn't display external dependencies","description":"GetDependencyTree (dependencies.go:464-624) uses a recursive CTE that JOINs with the issues table, which means external refs (external:project:capability) are invisible in the tree output.\n\nWhen an issue has an external blocking dependency, running 'bd dep tree \u003cid\u003e' won't show it.\n\nOptions:\n1. Query dependencies table separately for external refs and display them as leaf nodes\n2. Add a synthetic 'external' node type that shows the ref and resolution status\n3. 
Document that external deps aren't shown in tree view (use bd show for full deps)\n\nLower priority since bd show \u003cid\u003e displays all dependencies including external refs.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-21T23:45:27.121934-08:00","updated_at":"2025-12-22T22:30:19.083652-08:00","closed_at":"2025-12-22T22:30:19.083652-08:00","dependencies":[{"issue_id":"bd-vks2","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:27.122511-08:00","created_by":"daemon"}]} +{"id":"bd-2oo","title":"Edge Schema Consolidation: Unify all edges in dependencies table","description":"Consolidate all edge types into the dependency table per decision 004.\n\n## Changes\n- Add metadata column to dependencies table\n- Add thread_id column for conversation grouping\n- Remove redundant Issue fields: replies_to, relates_to, duplicate_of, superseded_by\n- Update all code to use dependencies API\n- Migration script for existing data\n- JSONL format change (breaking)\n\nReference: ~/gt/hop/decisions/004-edge-schema-consolidation.md","status":"closed","priority":0,"issue_type":"epic","created_at":"2025-12-18T02:01:48.785558-08:00","updated_at":"2025-12-18T02:49:10.61237-08:00","closed_at":"2025-12-18T02:49:10.61237-08:00"} +{"id":"bd-pbh.7","title":"Update beads_mcp/__init__.py to 0.30.4","description":"Update __version__ in integrations/beads-mcp/src/beads_mcp/__init__.py:\n```python\n__version__ = \"0.30.4\"\n```\n\n\n```verify\ngrep -q '__version__ = \"0.30.4\"' integrations/beads-mcp/src/beads_mcp/__init__.py\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.005334-08:00","updated_at":"2025-12-17T21:46:46.254885-08:00","closed_at":"2025-12-17T21:46:46.254885-08:00","dependencies":[{"issue_id":"bd-pbh.7","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.005699-08:00","created_by":"daemon"}]} +{"id":"bd-bgm","title":"Fix unparam unused parameter in 
cmd/bd/doctor.go:1879","description":"Linting issue: checkGitHooks - path is unused (unparam) at cmd/bd/doctor.go:1879:20. Error: func checkGitHooks(path string) doctorCheck {","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:25.270293252-07:00","updated_at":"2025-12-17T23:13:40.532991-08:00","closed_at":"2025-12-17T16:46:11.026693-08:00"} +{"id":"bd-xo1o.1","title":"bd mol bond: Dynamic bond with variable substitution","description":"Implement dynamic molecule bonding with runtime variable substitution.\n\n## Command\n```bash\nbd mol bond \u003cproto-id\u003e \u003cparent-wisp-id\u003e --var key=value --var key2=value2\n```\n\n## Behavior\n1. Parse proto molecule template\n2. Substitute {{key}} placeholders with provided values\n3. Create wisp children under the parent molecule\n4. Child IDs follow pattern: parent-id.child-ref (e.g., patrol-x7k.arm-ace)\n5. Nested children: parent-id.child-ref.step-ref (e.g., patrol-x7k.arm-ace.capture)\n\n## Variable Substitution\n- In step titles: \"Inspect {{polecat_name}}\" -\u003e \"Inspect ace\"\n- In descriptions: Full template substitution\n- In Needs directives: Allow referencing parent steps\n\n## Output\n```\nβœ“ Bonded mol-polecat-arm to patrol-x7k\n Created: patrol-x7k.arm-ace (5 steps)\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T02:33:13.878996-08:00","updated_at":"2025-12-23T03:38:03.54745-08:00","closed_at":"2025-12-23T03:38:03.54745-08:00","dependencies":[{"issue_id":"bd-xo1o.1","depends_on_id":"bd-xo1o","type":"parent-child","created_at":"2025-12-23T02:33:13.879419-08:00","created_by":"daemon"}]} +{"id":"bd-6sm6","title":"Improve test coverage for internal/export (37.1% β†’ 60%)","description":"The export package has only 37.1% test coverage. 
Export functionality needs good coverage to ensure data integrity.\n\nCurrent coverage: 37.1%\nTarget coverage: 60%","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T20:43:06.802277-08:00","updated_at":"2025-12-23T22:32:29.16846-08:00","closed_at":"2025-12-23T22:32:29.16846-08:00"} +{"id":"bd-f616","title":"Digest: Version Bump: test-squash","description":"## Molecule Execution Summary\n\n**Molecule**: Version Bump: test-squash\n**Steps**: 8\n\n**Completed**: 0/8\n\n---\n\n### Steps\n\n1. **[open]** Verify release artifacts\n Check GitHub releases page - binaries for darwin/linux/windows should be available\n\n2. **[open]** Commit and push release\n git add -A \u0026\u0026 git commit \u0026\u0026 git push to trigger CI\n\n3. **[open]** Update CHANGELOG.md with release notes\n Add meaningful release notes to CHANGELOG.md describing what changed in test-squash\n\n4. **[open]** Wait for CI to pass\n Monitor GitHub Actions - all checks must pass before release artifacts are built\n\n5. **[open]** Restart running daemons\n Kill and restart any running bd daemons to pick up new version: pkill -f 'bd daemon' \u0026\u0026 bd daemon --start\n\n6. **[open]** Update local installation\n Run install script or brew upgrade to get new version locally: curl -fsSL .../install.sh | bash\n\n7. **[open]** Run bump-version.sh test-squash\n Run ./scripts/bump-version.sh test-squash to update version in all files\n\n8. 
**[open]** Update info.go versionChanges\n Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for test-squash\n\n","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:53:18.471919-08:00","updated_at":"2025-12-21T13:53:35.256043-08:00","deleted_at":"2025-12-21T13:53:35.256043-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-7bbc4e6a","title":"Add MCP server functions for repair commands","description":"**Summary:** Added MCP server repair functions for agent dependency management, system validation, and pollution detection. Implemented across BdClientBase, BdCliClient, and daemon clients to enhance system diagnostics and self-healing capabilities.\n\n**Key Decisions:** \n- Expose repair_deps(), detect_pollution(), validate() via MCP server\n- Create abstract method stubs with fallback to CLI execution\n- Use @mcp.tool decorators for function registration\n\n**Resolution:** Successfully implemented comprehensive repair command infrastructure, enabling more robust system health monitoring and automated remediation with full CLI and daemon support.","notes":"Implemented all three MCP server functions:\n\n1. **repair_deps(fix=False)** - Find/fix orphaned dependencies\n2. **detect_pollution(clean=False)** - Detect/clean test issues \n3. 
**validate(checks=None, fix_all=False)** - Run comprehensive health checks\n\nChanges:\n- Added abstract methods to BdClientBase\n- Implemented in BdCliClient (CLI execution)\n- Added NotImplementedError stubs in BdDaemonClient (falls back to CLI)\n- Created wrapper functions in tools.py\n- Registered @mcp.tool decorators in server.py\n\nAll commands tested and working with --no-daemon flag.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T19:37:55.72639-07:00","updated_at":"2025-12-16T01:08:11.983953-08:00","closed_at":"2025-11-07T19:38:12.152437-08:00"} +{"id":"bd-dxtc","title":"Test daemon RPC delete handler","description":"Add tests for the daemon-side RPC delete handler that processes delete requests from clients.\n\n## What needs testing\n- Daemon's Delete RPC handler implementation\n- Processing delete requests from RPC clients\n- Cascade deletion at daemon level\n- Force deletion at daemon level\n- Dry-run mode validation\n- Error responses to clients\n- Dependency validation before deletion\n- Tombstone creation via daemon\n\n## Test scenarios\n1. Delete single issue via RPC\n2. Delete multiple issues via RPC\n3. Cascade deletion of dependents\n4. Force delete with orphaned dependents\n5. Dry-run returns what would be deleted without actual deletion\n6. Error: invalid issue IDs\n7. Error: insufficient permissions\n8. 
Error: dependency blocks deletion (without force/cascade)\n\n## Related\n- Parent epic: bd-kyll\n- Original issue: bd-7z4","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T13:08:33.532111042-07:00","updated_at":"2025-12-23T21:22:11.26397-08:00","closed_at":"2025-12-23T20:41:36.5164-08:00","dependencies":[{"issue_id":"bd-dxtc","depends_on_id":"bd-kyll","type":"parent-child","created_at":"2025-12-18T13:08:33.534367367-07:00","created_by":"mhwilkie"}]} +{"id":"bd-70an","title":"test pin","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T11:19:16.760214-08:00","updated_at":"2025-12-21T11:19:46.500688-08:00","closed_at":"2025-12-21T11:19:46.500688-08:00"} +{"id":"bd-c7y5","title":"Optimization: Tombstones still synced, adding overhead","description":"Tombstoned (deleted) issues are still processed during sync, adding overhead.\n\n## Evidence\n```\nImport complete: 0 created, 0 updated, 407 unchanged, 100 skipped\n```\nThose 100 skipped are tombstones - they're read from JSONL, parsed, then filtered out.\n\n## Current State (Beads repo)\n- 408 total issues\n- 99 tombstones (24% of database)\n- Every sync reads and skips these 99 entries\n\n## Impact\n- Sync time increases with tombstone count\n- JSONL file size grows indefinitely\n- Git history accumulates tombstone churn\n\n## Proposed Solutions\n\n### 1. JSONL Compaction (`bd compact`)\nPeriodically rewrite JSONL without tombstones:\n```bash\nbd compact # Removes tombstones, rewrites issues.jsonl\n```\nTrade-off: Loses delete history, but that's in git anyway.\n\n### 2. Tombstone TTL\nAuto-remove tombstones older than N days during sync:\n```go\nif issue.Deleted \u0026\u0026 time.Since(issue.UpdatedAt) \u003e 7*24*time.Hour {\n // Skip writing to new JSONL\n}\n```\n\n### 3. Archive File\nMove old closed issues to `issues-archive.jsonl`:\n- Not synced regularly\n- Available for historical queries\n- Main JSONL stays small\n\n### 4. 
Lazy Tombstone Handling \nDon't write tombstones to JSONL at all - just remove the line:\n- Simpler, but loses cross-clone delete propagation\n- Would need different delete propagation mechanism\n\n## Recommendation\nStart with `bd compact` command - simple, explicit, user-controlled.\n\n## Related\n- gt-tnss: Analysis - Beads database size and hygiene strategy\n- gt-ox67: Maintenance - Regular cleanup of closed MR/gate beads","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T14:41:16.925212-08:00","updated_at":"2025-12-23T21:22:26.662604-08:00","closed_at":"2025-12-23T20:44:19.99946-08:00"} +{"id":"bd-3sz0","title":"Auto-repair stale merge driver configs with invalid placeholders","description":"Old bd versions (\u003c0.24.0) installed merge driver with invalid placeholders %L %R instead of %A %B. Add detection to bd doctor --fix: check if git config merge.beads.driver contains %L or %R, auto-repair to 'bd merge %A %O %A %B'. One-time migration for users who initialized with old versions.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-11-21T23:16:10.762808-08:00","updated_at":"2025-11-21T23:16:28.892655-08:00","dependencies":[{"issue_id":"bd-3sz0","depends_on_id":"bd-tbz3","type":"parent-child","created_at":"2025-11-21T23:16:10.763612-08:00","created_by":"daemon"}]} +{"id":"bd-8ca7","title":"Merge: bd-au0.6","description":"branch: polecat/furiosa\ntarget: main\nsource_issue: bd-au0.6\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:42:30.870178-08:00","updated_at":"2025-12-23T21:21:57.695179-08:00","closed_at":"2025-12-23T21:21:57.695179-08:00"} +{"id":"bd-9cdc","title":"Update docs for import bug fix","description":"Update AGENTS.md, README.md, TROUBLESHOOTING.md with import.orphan_handling config documentation. Document resurrection behavior, tombstones, config modes. 
Add troubleshooting section for import failures with deleted parents.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-04T12:32:30.770415-08:00","updated_at":"2025-12-21T21:14:08.328627-08:00","closed_at":"2025-12-21T21:14:08.328627-08:00"} +{"id":"bd-fu83","title":"Fix daemon/direct mode inconsistency in relate and duplicate commands","description":"The relate.go and duplicate.go commands have inconsistent daemon/direct mode handling:\n\nWhen daemonClient is connected, they resolve IDs via RPC but then perform updates directly via store.UpdateIssue(), bypassing the daemon.\n\nAffected locations:\n- relate.go:125-139 (runRelate update)\n- relate.go:235-246 (runUnrelate update) \n- duplicate.go:120 (runDuplicate update)\n- duplicate.go:207 (runSupersede update)\n\nShould either use RPC for updates when daemon is running, or document why direct access is intentional.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-16T20:52:54.164189-08:00","updated_at":"2025-12-21T21:47:14.10222-08:00","closed_at":"2025-12-21T21:47:14.10222-08:00"} +{"id":"bd-8b0x","title":"Remove molecule.go (simple instantiation)","description":"molecule.go uses is_template field for simple single-issue cloning. This is too simple for what molecules should be - full DAG orchestration. The use case is covered by bd mol bond with a single-issue molecule. 
Delete molecule.go and its commands.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-20T23:52:15.041776-08:00","updated_at":"2025-12-21T00:04:32.335849-08:00","closed_at":"2025-12-21T00:04:32.335849-08:00","dependencies":[{"issue_id":"bd-8b0x","depends_on_id":"bd-ffjt","type":"blocks","created_at":"2025-12-20T23:52:25.807967-08:00","created_by":"daemon"}]} +{"id":"bd-u66e","title":"Implement bd gate create/show/list/close/wait commands","description":"Implement the gate CLI commands.\n\n## Commands\n```bash\n# Create gate (returns gate ID)\nbd gate create --await \u003ctype\u003e:\u003cid\u003e --timeout \u003cduration\u003e --notify \u003caddr\u003e\n\n# Check gate status\nbd gate show \u003cid\u003e\n\n# List open gates\nbd gate list\n\n# Close gate (usually done by Deacon)\nbd gate close \u003cid\u003e --reason \"completed\"\n\n# Add waiter to existing gate\nbd gate wait \u003cid\u003e --notify \u003caddr\u003e\n```\n\n## Implementation\n- Add cmd/bd/gate.go with subcommands\n- Gate create creates wisp issue of type gate\n- Gate list filters for open gates\n- Gate wait adds to waiters[] array\n- All support --json output","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T11:44:34.022464-08:00","updated_at":"2025-12-23T12:06:18.550673-08:00","closed_at":"2025-12-23T12:06:18.550673-08:00","dependencies":[{"issue_id":"bd-u66e","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.823353-08:00","created_by":"daemon"},{"issue_id":"bd-u66e","depends_on_id":"bd-lz49","type":"blocks","created_at":"2025-12-23T11:44:56.349662-08:00","created_by":"daemon"}]} +{"id":"bd-xsl9","title":"Remove legacy autoflush code paths","description":"## Problem\n\nThe autoflush system has dual code paths - an old timer-based approach and a new FlushManager. 
Both are actively used based on whether flushManager is nil.\n\n## Locations\n\n- main.go:78-81: isDirty, needsFullExport, flushTimer marked 'used by legacy code'\n- autoflush.go:291-369: Functions with 'Legacy path for backward compatibility with tests'\n\n## Current Behavior\n\n```go\n// In markDirtyAndScheduleFlush():\nif flushManager != nil {\n flushManager.MarkDirty(false)\n return\n}\n// Legacy path for backward compatibility with tests\n```\n\n## Proposed Fix\n\n1. Ensure flushManager is always initialized (even in tests)\n2. Remove the legacy timer-based code paths\n3. Remove isDirty, needsFullExport, flushTimer globals\n4. Update tests to use FlushManager\n\n## Risk\n\nLow - the FlushManager is the production path. Legacy code only runs when flushManager is nil (test scenarios).","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-21T15:49:30.83769-08:00","updated_at":"2025-12-23T01:54:59.09333-08:00","closed_at":"2025-12-23T01:54:59.09333-08:00"} +{"id":"bd-pbh.16","title":"Update Homebrew formula","description":"After GoReleaser completes, the Homebrew tap should be auto-updated.\n\nIf manual update needed:\n```bash\n./scripts/update-homebrew.sh v0.30.4\n```\n\nOr manually update steveyegge/homebrew-beads with new SHA256.\n\nVerify:\n```bash\nbrew update\nbrew info beads\n```\n","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T21:19:11.100213-08:00","updated_at":"2025-12-17T21:46:46.341942-08:00","closed_at":"2025-12-17T21:46:46.341942-08:00","dependencies":[{"issue_id":"bd-pbh.16","depends_on_id":"bd-pbh.13","type":"blocks","created_at":"2025-12-17T21:19:11.312625-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.16","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.100541-08:00","created_by":"daemon"}]} +{"id":"bd-adoe","title":"Add --hard flag to bd cleanup to permanently cull tombstones before cutoff date","description":"Currently tombstones persist for 30 days before 
cleanup prunes them. Need an official way to force-cull tombstones earlier than the default TTL, for scenarios like cleaning house after extended absence where resurrection from old clones is not a concern. Proposed: bd cleanup --hard --older-than N to bypass the 30-day tombstone TTL.","status":"tombstone","priority":2,"issue_type":"feature","created_at":"2025-12-16T01:17:31.064914-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"feature"} +{"id":"bd-y0fj","title":"Issue lifecycle hooks (on-close, on-complete)","description":"Add hooks that fire on issue state transitions, enabling automation like closing linked GitHub issues.\n\n## Problem\n\nWe have `external_ref` to link beads issues to external systems (GitHub, Linear, Jira), but no mechanism to trigger actions when issues close. Currently:\n\n```\nbd-u2sc (external_ref: gh-692) closes β†’ nothing happens\n```\n\n## Proposed Solution\n\n### Phase 1: Shell Hooks\n\nAdd `.beads-hooks/on-close.sh` (and similar lifecycle hooks):\n\n```bash\n# .beads-hooks/on-close.sh\n# Called by bd close with issue JSON on stdin\n#\\!/bin/bash\nissue=$(cat)\nexternal_ref=$(echo \"$issue\" | jq -r '.external_ref // empty')\nif [[ \"$external_ref\" == gh-* ]]; then\n number=\"${external_ref#gh-}\"\n gh issue close \"$number\" --repo steveyegge/beads \\\n --comment \"Completed via beads epic $(echo $issue | jq -r .id)\"\nfi\n```\n\n### Lifecycle Events\n\n| Event | Trigger | Use Cases |\n|-------|---------|-----------|\n| `on-close` | Issue closed | Close external refs, notify, archive |\n| `on-complete` | Epic children all done | Roll-up completion, close parent refs |\n| `on-status-change` | Any status transition | Sync to external systems |\n\n### Phase 2: Molecule Completion Handlers\n\nMolecules could define completion actions:\n\n```yaml\nname: github-issue-tracker\non_complete:\n - action: shell\n 
command: gh issue close {{external_ref}} --repo {{repo}}\n - action: mail\n to: mayor/\n subject: \"Epic {{id}} completed\"\n```\n\n### Phase 3: Gas Town Integration\n\nFor full Gas Town deployments:\n- Witness observes closures via beads events\n- Routes to integration agents via mail\n- Agents handle external system interactions\n\n## Implementation Notes\n\n- Hooks should be async (don't block bd close)\n- Pass full issue JSON to hook via stdin\n- Support hook timeout and failure handling\n- Consider `--no-hooks` flag for bulk operations\n\n## Related\n\n- `external_ref` field already exists (GH#142)\n- Cross-project deps: bd-h807, bd-d9mu\n- Git hooks: .beads-hooks/ pattern established\n\n## Use Cases\n\n1. **GitHub integration**: Close GH issues when beads epic completes\n2. **Linear sync**: Update Linear status when beads status changes \n3. **Notifications**: Send mail/Slack when high-priority issues close\n4. **Audit**: Log all closures to external system","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-22T14:46:04.846657-08:00","updated_at":"2025-12-22T14:50:40.35447-08:00","closed_at":"2025-12-22T14:50:40.35447-08:00"} +{"id":"bd-90v","title":"bd prime: AI context loading and Claude Code integration","description":"Implement `bd prime` command and Claude Code hooks for context recovery. 
Hooks work with BOTH MCP server and CLI approaches - they solve the context memory problem (keeping bd workflow fresh after compaction) not the tool access problem (MCP vs CLI).","status":"open","priority":2,"issue_type":"epic","created_at":"2025-11-11T23:31:12.119012-08:00","updated_at":"2025-11-12T00:11:07.743189-08:00"} +{"id":"bd-aq3s","title":"Merge: bd-u2sc.3","description":"branch: polecat/Modular\ntarget: main\nsource_issue: bd-u2sc.3\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T13:47:14.281479-08:00","updated_at":"2025-12-23T19:12:08.354548-08:00","closed_at":"2025-12-23T19:12:08.354548-08:00"} +{"id":"bd-5b6e","title":"Add tests for helper functions (GetDirtyIssueHash, GetAllDependencyRecords, export hashes)","description":"Several utility functions have 0% coverage:\n- GetDirtyIssueHash (dirty.go)\n- GetAllDependencyRecords (dependencies.go)\n- GetExportHash, SetExportHash, ClearAllExportHashes (hash.go)\n\nThese are lower priority but should have basic coverage.","status":"open","priority":4,"issue_type":"task","created_at":"2025-11-01T22:40:58.989976-07:00","updated_at":"2025-11-01T22:40:58.989976-07:00"} +{"id":"bd-cb64c226.12","title":"Remove Storage Cache from Server Struct","description":"Eliminate cache fields and use s.storage directly","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:25.474412-07:00","updated_at":"2025-12-17T23:18:29.111039-08:00","deleted_at":"2025-12-17T23:18:29.111039-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-tggf","title":"Code Health Review Dec 2025: Technical Debt Cleanup","description":"Epic grouping technical debt identified in the Dec 16, 2025 code health review.\n\n## Overall Health Grade: B (Solid foundation, needs cleanup)\n\n### P1 (High Priority):\n- bd-74w1: Consolidate duplicate path-finding utilities\n- bd-b6xo: Remove/fix ClearDirtyIssues() race condition\n- bd-b3og: Fix 
TestImportBugIntegration deadlock\n\n### P2 (Medium Priority):\n- bd-05a8: Split large files (doctor.go, sync.go)\n- bd-qioh: Standardize error handling patterns\n- bd-rgyd: Split queries.go (1586 lines)\n- bd-9g1z: Fix/remove TestFindJSONLPathDefault\n\n### P3 (Low Priority):\n- bd-ork0: Add comments to 30+ ignored errors\n- bd-4nqq: Remove dead test code in info_test.go\n- bd-dhza: Reduce global state in main.go\n\n## Key Areas:\n1. Code duplication in path utilities\n2. Large monolithic files (5 files \u003e1000 lines)\n3. Global state (25+ variables, 3 deprecated)\n4. Silent error suppression (30+ instances)\n5. Test gaps and dead test code\n6. Atomicity risks in batch operations","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-16T18:18:58.115507-08:00","updated_at":"2025-12-16T18:21:50.561709-08:00","dependencies":[{"issue_id":"bd-tggf","depends_on_id":"bd-9g1z","type":"blocks","created_at":"2025-12-22T21:00:21.571116-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-rgyd","type":"blocks","created_at":"2025-12-22T21:00:21.710912-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-dhza","type":"blocks","created_at":"2025-12-22T21:00:21.852-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-ork0","type":"blocks","created_at":"2025-12-22T21:00:21.930168-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-qioh","type":"blocks","created_at":"2025-12-22T21:00:21.640589-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-4nqq","type":"blocks","created_at":"2025-12-22T21:00:21.781914-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-74w1","type":"blocks","created_at":"2025-12-22T21:00:21.429274-08:00","created_by":"daemon"},{"issue_id":"bd-tggf","depends_on_id":"bd-05a8","type":"blocks","created_at":"2025-12-22T21:00:21.501589-08:00","created_by":"daemon"}]} +{"id":"bd-j0tr","title":"Phase 1.3: Basic TOON read/write 
operations","description":"Add basic TOON read/write operations to bdt executable. Implement create, list, and show commands that use the internal/toon package for encoding/decoding to TOON format.\n\n## Subtasks\n1. Implement bdt create command - Create issues and serialize to TOON format\n2. Implement bdt list command - Read issues.toon and display all issues\n3. Implement bdt show command - Display single issue by ID\n4. Add file I/O operations for issues.toon\n5. Integrate internal/toon package (EncodeTOON, DecodeJSON)\n6. Write tests for create, list, show operations\n\n## Files to Create/Modify\n- cmd/bdt/create.go - Create command\n- cmd/bdt/list.go - List command \n- cmd/bdt/show.go - Show command\n- cmd/bdt/storage.go - File I/O helper\n\n## Success Criteria\n- bdt create \"Issue title\" creates and saves to issues.toon\n- bdt list displays all issues in human-readable format\n- bdt list --json shows JSON output\n- bdt show \u003cid\u003e displays single issue\n- Issues round-trip correctly: create β†’ list β†’ show\n- All tests passing with \u003e80% coverage","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T12:59:54.270296918-07:00","updated_at":"2025-12-19T13:09:00.196045685-07:00","closed_at":"2025-12-19T13:09:00.196045685-07:00"} +{"id":"bd-iq19","title":"Distill: promote ad-hoc epic to proto","description":"Extract a reusable proto from an existing ad-hoc epic.\n\nCOMMAND: bd mol distill \u003cepic-id\u003e [--as \u003cproto-name\u003e]\n\nBEHAVIOR:\n- Clone the epic and all children as a new proto\n- Set is_template=true on all cloned issues\n- Replace concrete values with {{variable}} placeholders (interactive or --var flags)\n- Add to proto catalog\n\nFLAGS:\n- --as NAME: Custom proto ID (default: proto-\u003cepic-id\u003e)\n- --var field=placeholder: Replace value with variable placeholder\n- --interactive: Prompt for each field that looks parameterizable\n- --dry-run: Preview the proto structure\n\nEXAMPLE:\n bd mol 
distill bd-o5xe --as proto-feature-workflow \\\n --var title=feature_name \\\n --var assignee=worker\n\nUSE CASES:\n- Team develops good workflow organically, wants to reuse it\n- Capture tribal knowledge as executable templates\n- Create starting point for similar future work\n\nThe reverse of spawn: instead of proto β†’ molecule, it's molecule β†’ proto.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T01:05:07.953538-08:00","updated_at":"2025-12-21T10:31:56.814246-08:00","closed_at":"2025-12-21T10:31:56.814246-08:00","dependencies":[{"issue_id":"bd-iq19","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T01:05:16.495774-08:00","created_by":"daemon"},{"issue_id":"bd-iq19","depends_on_id":"bd-rnnr","type":"blocks","created_at":"2025-12-21T01:05:16.560404-08:00","created_by":"daemon"}]} +{"id":"bd-hw3w","title":"Update info.go versionChanges","description":"Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for {{version}}","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:01.016558-08:00","updated_at":"2025-12-20T17:59:26.262511-08:00","closed_at":"2025-12-20T01:23:50.3879-08:00","dependencies":[{"issue_id":"bd-hw3w","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:14.941855-08:00","created_by":"daemon"},{"issue_id":"bd-hw3w","depends_on_id":"bd-czss","type":"blocks","created_at":"2025-12-19T22:56:23.219257-08:00","created_by":"daemon"}]} +{"id":"bd-9usz","title":"Test suite hangs/never finishes","description":"Running 'go test ./... -count=1' hangs indefinitely. The full test suite never completes, making it difficult to verify changes. 
Need to investigate which tests are hanging and fix or add timeouts.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-16T21:56:27.80191-08:00","updated_at":"2025-12-23T21:22:25.810705-08:00","closed_at":"2025-12-23T20:40:41.786512-08:00"} +{"id":"bd-ahot","title":"HANDOFF: Molecule bonding - spawn done, bond next","description":"## Context\n\nContinuing work on bd-o5xe (Molecule bonding epic).\n\n## Completed This Session\n\n- bd-mh4w: Renamed bond to spawn in mol.go\n- bd-rnnr: Added BondRef data model to types.go\n\n## Now Unblocked\n\n1. bd-o91r: Polymorphic bond command [P1]\n2. bd-iw4z: Compound visualization [P2] \n3. bd-iq19: Distill command [P2]\n\n## Key Files\n\n- cmd/bd/mol.go\n- internal/types/types.go\n\n## Next Step\n\nStart with bd-o91r. Run bd show bd-o5xe for context.","status":"closed","priority":1,"issue_type":"message","created_at":"2025-12-21T01:32:13.940757-08:00","updated_at":"2025-12-21T11:24:30.171048-08:00","closed_at":"2025-12-21T11:24:30.171048-08:00"} +{"id":"bd-4hn","title":"wish: list \u0026 ready show issues as hierarchy tree","description":"`bd ready` and `bd list` just show a flat list, and it's up to the reader to parse which ones are dependent or sub-issues of others. It would be much easier to understand if they were shown in a tree format","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-08T06:38:24.016316945-07:00","updated_at":"2025-12-08T06:39:04.065882225-07:00"} +{"id":"bd-7pwh","title":"HOP-compatible schema additions","description":"Add optional fields to Beads schema to enable future HOP integration.\nAll fields are backwards-compatible (optional, omitted if empty).\n\n## Reference\nSee ~/gt/docs/hop/BEADS-SCHEMA-CHANGES.md for full specification.\n\n## P1 Changes (Must Have Before Launch)\n\n### 1. 
EntityRef type\nStructured entity reference that can become HOP URI:\n```go\ntype EntityRef struct {\n Name string // \"polecat/Nux\"\n Platform string // \"gastown\"\n Org string // \"steveyegge\" \n ID string // \"polecat-nux\"\n}\n```\n\n### 2. creator field\nEvery issue tracks who created it (EntityRef).\n\n### 3. assignee_ref field\nStructured form alongside existing string assignee.\n\n### 4. validations array\nTrack who validated work completion:\n```go\ntype Validation struct {\n Validator *EntityRef\n Outcome string // accepted, rejected, revision_requested\n Timestamp time.Time\n Score *float32 // Future\n}\n```\n\n## P2 Changes (Should Have)\n\n### 5. work_type field\n\"mutex\" (default) or \"open_competition\"\n\n### 6. crystallizes field\nBoolean - does this work compound (true) or evaporate (false)?\n\n### 7. cross_refs field\nArray of URIs to beads in other repos:\n- \"beads://github/anthropics/claude-code/bd-xyz\"\n\n## P3 Changes (Nice to Have)\n\n### 8. skill_vector placeholder\nReserved for future embeddings: []float32\n\n## Implementation Notes\n- All fields optional in JSONL serialization\n- Empty/null fields omit from output\n- No migration needed for existing data\n- CLI additions: --creator, --validated-by filters","notes":"Scope reduced after review. P1 only: EntityRef type, creator field, validations array. 
Deferred: assignee_ref, work_type, crystallizes, cross_refs, skill_vector (YAGNI - semantics unclear, can add later when needed).","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-22T02:42:39.267984-08:00","updated_at":"2025-12-22T20:09:09.211821-08:00","closed_at":"2025-12-22T20:09:09.211821-08:00"} +{"id":"bd-kwjh.6","title":"bd wisp gc command","description":"Add bd wisp gc command to garbage collect orphaned wisps.\n\n## Usage\n```bash\nbd wisp gc # Clean orphaned wisps\nbd wisp gc --dry-run # Show what would be cleaned\nbd wisp gc --age 1h # Custom orphan threshold (default: 1h)\n```\n\n## Orphan Detection\nA wisp is orphaned if:\n- process_id field exists AND process is dead\n- OR updated_at older than threshold AND not complete\n- AND molecule status is not complete/abandoned\n\n## Behavior\n- Delete orphaned wisps (no digest created)\n- Report count of cleaned wisps\n- --dry-run shows candidates without deleting\n\n## Implementation\n- Add 'gc' subcommand to wisp group\n- Process detection via os.FindProcess or /proc\n- Configurable age threshold","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T00:07:30.861155-08:00","updated_at":"2025-12-22T01:12:37.283991-08:00","closed_at":"2025-12-22T01:12:37.283991-08:00","dependencies":[{"issue_id":"bd-kwjh.6","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:30.863721-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.6","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:30.862681-08:00","created_by":"daemon"}]} +{"id":"bd-8v2","title":"Add {{version}} to versionChanges in info.go","description":"Add new entry at TOP of versionChanges in cmd/bd/info.go with release notes from CHANGELOG.md. 
Must do before bump-version.sh --commit.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:00.482846-08:00","updated_at":"2025-12-18T22:45:21.465817-08:00","closed_at":"2025-12-18T22:45:21.465817-08:00","dependencies":[{"issue_id":"bd-8v2","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.496649-08:00","created_by":"daemon"},{"issue_id":"bd-8v2","depends_on_id":"bd-kyo","type":"blocks","created_at":"2025-12-18T22:43:20.69619-08:00","created_by":"daemon"}]} +{"id":"bd-pbh.20","title":"Update git hooks","description":"Install the updated hooks:\n```bash\nbd hooks install\n```\n\nVerify hook version:\n```bash\ngrep 'bd-hooks-version' .git/hooks/pre-commit\n```\n\n\n```verify\ngrep -q 'bd-hooks-version: 0.30.4' .git/hooks/pre-commit 2\u003e/dev/null || echo 'Hooks may not be installed - verify manually'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.13198-08:00","updated_at":"2025-12-17T21:46:46.381519-08:00","closed_at":"2025-12-17T21:46:46.381519-08:00","dependencies":[{"issue_id":"bd-pbh.20","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.132306-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.20","depends_on_id":"bd-pbh.17","type":"blocks","created_at":"2025-12-17T21:19:11.352288-08:00","created_by":"daemon"}]} +{"id":"bd-3852","title":"Add orphan detection migration","description":"Create migration to detect orphaned children in existing databases. Query: SELECT id FROM issues WHERE id LIKE '%.%' AND substr(id, 1, instr(id || '.', '.') - 1) NOT IN (SELECT id FROM issues). 
Log results, let user decide action (delete orphans or convert to top-level).","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-04T12:32:30.727044-08:00","updated_at":"2025-12-21T21:00:05.041582-08:00","closed_at":"2025-12-21T21:00:05.041582-08:00"} +{"id":"bd-bxha","title":"Default to YES for git hooks and merge driver installation","description":"Currently bd init prompts user to install git hooks and merge driver, but setup is incomplete if user declines. Change to install by default unless --skip-hooks or --skip-merge-driver flags are passed. Better safe defaults. If installation fails, warn user and suggest bd doctor --fix.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-11-21T23:16:10.172238-08:00","updated_at":"2025-12-23T04:20:51.885765-08:00","closed_at":"2025-12-23T04:20:51.885765-08:00","dependencies":[{"issue_id":"bd-bxha","depends_on_id":"bd-tbz3","type":"parent-child","created_at":"2025-11-21T23:16:10.173034-08:00","created_by":"daemon"}]} +{"id":"bd-j3il","title":"Add bd reset command for clean slate restart","description":"Implement a command to reset beads to a clean starting state.\n\n**Context:** GitHub issue #479 - users sometimes get beads into an invalid state after updates, and there's no clean way to start fresh. 
The git backup/restore mechanism that protects against accidental deletion also makes it hard to intentionally reset.\n\n**Current workaround** (from maphew):\n```bash\nbd daemons killall\ngit rm .beads/*.jsonl\ngit commit -m 'remove old issues'\nrm .beads/*\nbd init\nbd onboard\n```\n\n**Desired:** A proper `bd reset` command that handles this cleanly and safely.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-13T08:41:34.956552+11:00","updated_at":"2025-12-13T08:43:49.970591+11:00","closed_at":"2025-12-13T08:43:49.970591+11:00"} +{"id":"bd-kwro.8","title":"Hooks System","description":"Implement hook system for extensibility.\n\nHook directory: .beads/hooks/\nHook files (executable scripts):\n- on_create - runs after bd create\n- on_update - runs after bd update \n- on_close - runs after bd close\n- on_message - runs after bd mail send\n\nHook invocation:\n- Pass issue ID as first argument\n- Pass event type as second argument\n- Pass JSON issue data on stdin\n- Run asynchronously (dont block command)\n\nExample hook (GGT notification):\n #!/bin/bash\n gt notify --event=$2 --issue=$1\n\nThis allows GGT to register notification handlers without Beads knowing about GGT.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:02:23.086393-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-cbed9619.2","title":"Implement content-first idempotent import","description":"**Summary:** Refactored issue import to be content-first and idempotent, ensuring consistent data synchronization across multiple import rounds by prioritizing content hash matching over ID-based updates.\n\n**Key Decisions:** \n- Implement content hash as primary matching mechanism\n- Create global collision resolution algorithm\n- Ensure importing same data multiple times results in 
no-op\n\n**Resolution:** The new import strategy guarantees predictable convergence across distributed systems, solving rename detection and collision handling while maintaining data integrity during multi-stage imports.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T18:38:25.671302-07:00","updated_at":"2025-12-17T23:18:29.112032-08:00","dependencies":[{"issue_id":"bd-cbed9619.2","depends_on_id":"bd-cbed9619.5","type":"blocks","created_at":"2025-10-28T18:39:28.360026-07:00","created_by":"daemon"},{"issue_id":"bd-cbed9619.2","depends_on_id":"bd-cbed9619.4","type":"blocks","created_at":"2025-10-28T18:39:28.383624-07:00","created_by":"daemon"},{"issue_id":"bd-cbed9619.2","depends_on_id":"bd-cbed9619.3","type":"blocks","created_at":"2025-10-28T18:39:28.407157-07:00","created_by":"daemon"}],"deleted_at":"2025-12-17T23:18:29.112032-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-wu62","title":"Gate: timer:1m","status":"open","priority":1,"issue_type":"gate","created_at":"2025-12-23T13:42:57.169229-08:00","updated_at":"2025-12-23T13:42:57.169229-08:00"} +{"id":"bd-pbh.19","title":"Install 0.30.4 MCP server locally","description":"Upgrade the MCP server (after PyPI publish):\n```bash\npip install --upgrade beads-mcp\n# OR if using uv:\nuv tool upgrade beads-mcp\n```\n\nVerify:\n```bash\npip show beads-mcp | grep Version\n```\n\n\n```verify\npip show beads-mcp 2\u003e/dev/null | grep -q 'Version: 0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.124496-08:00","updated_at":"2025-12-17T21:46:46.372989-08:00","closed_at":"2025-12-17T21:46:46.372989-08:00","dependencies":[{"issue_id":"bd-pbh.19","depends_on_id":"bd-pbh.14","type":"blocks","created_at":"2025-12-17T21:19:11.343558-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.19","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.124829-08:00","created_by":"daemon"}]} 
+{"id":"bd-pbh.8","title":"Update npm-package/package.json to 0.30.4","description":"Update version field in npm-package/package.json:\n```json\n\"version\": \"0.30.4\"\n```\n\n\n```verify\njq -e '.version == \"0.30.4\"' npm-package/package.json\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.014905-08:00","updated_at":"2025-12-17T21:46:46.268821-08:00","closed_at":"2025-12-17T21:46:46.268821-08:00","dependencies":[{"issue_id":"bd-pbh.8","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.01529-08:00","created_by":"daemon"}]} +{"id":"bd-h8q","title":"Add tests for validation functions","description":"Validation functions like ParseIssueType have 0% coverage. These are critical for ensuring data quality and preventing invalid data from entering the system.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:01:02.843488344-07:00","updated_at":"2025-12-18T07:03:53.561016965-07:00","closed_at":"2025-12-18T07:03:53.561016965-07:00","dependencies":[{"issue_id":"bd-h8q","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:01:02.846419747-07:00","created_by":"matt"}]} +{"id":"bd-mv6h","title":"Add test coverage for external dep edge cases","description":"During code review of bd-zmmy, identified missing test coverage:\n\n1. RemoveDependency with external ref target (will fail - see bd-a3sj)\n2. GetBlockedIssues with mix of local and external blockers\n3. GetDependencyTree with external deps\n4. AddDependency cycle detection with external refs (should be skipped?)\n5. External dep resolution with WAL mode database\n6. External dep resolution when target project has no .beads directory\n7. 
External dep resolution with invalid external: format variations\n\nPriority 2 because bd-a3sj is a real bug that tests would catch.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T23:45:37.50093-08:00","updated_at":"2025-12-22T22:32:09.515096-08:00","closed_at":"2025-12-22T22:32:09.515096-08:00","dependencies":[{"issue_id":"bd-mv6h","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:37.501495-08:00","created_by":"daemon"}]} +{"id":"bd-m964","title":"Consider FTS5 for text search at scale","description":"SearchIssues uses LIKE patterns for text search which can't use indexes.\n\n**Current query (queries.go:1475-1477):**\n```sql\n(title LIKE ? OR description LIKE ? OR id LIKE ?)\n```\n\n**Problem:** Full table scan on every text search. At 100K+ issues, this becomes slow.\n\n**SQLite FTS5 solution:**\n```sql\nCREATE VIRTUAL TABLE issues_fts USING fts5(\n id, title, description, design, notes,\n content='issues',\n content_rowid='rowid'\n);\n\n-- Triggers to keep FTS in sync\nCREATE TRIGGER issues_ai AFTER INSERT ON issues BEGIN\n INSERT INTO issues_fts(rowid, id, title, description, design, notes)\n VALUES (new.rowid, new.id, new.title, new.description, new.design, new.notes);\nEND;\n-- (similar for UPDATE, DELETE)\n```\n\n**Trade-offs:**\n- Database size increase (~30-50% for text content)\n- Additional write overhead (trigger execution)\n- Better search capabilities (ranking, phrase search)\n\n**Decision needed:** Is full-text search a priority feature? 
Current LIKE search may be acceptable for most use cases.\n\n**Benchmark first:** Measure SearchIssues at 100K scale before implementing.","status":"open","priority":4,"issue_type":"feature","created_at":"2025-12-22T22:58:56.466121-08:00","updated_at":"2025-12-22T22:58:56.466121-08:00","dependencies":[{"issue_id":"bd-m964","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:56.466764-08:00","created_by":"daemon"}]} +{"id":"bd-fom","title":"Remove all deletions.jsonl code except migration","description":"There's deletions manifest code spread across the entire codebase that should have been removed after tombstone migration:\n\nFiles with deletions code (non-migration):\n- internal/deletions/ - entire package\n- cmd/bd/sync.go - 25+ references, auto-compact, sanitize\n- cmd/bd/delete.go - dual-writes to deletions.jsonl\n- internal/importer/importer.go - checks deletions manifest\n- internal/syncbranch/worktree.go - merges deletions.jsonl\n- cmd/bd/doctor/fix/sync.go - cleanupDeletionsManifest\n- cmd/bd/doctor/fix/deletions.go - HydrateDeletionsManifest\n- cmd/bd/integrity.go - checks deletions for data loss\n- cmd/bd/deleted.go - entire command\n- cmd/bd/compact.go - pruneDeletionsManifest\n- cmd/bd/doctor.go - checkDeletionsManifest\n- Plus many more\n\nAction: Aggressively remove all non-migration deletions code. 
Tombstones are the only deletion mechanism now.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T13:29:04.960863-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-396j","title":"GetBlockedIssues shows external deps as blocking even when satisfied","description":"GetBlockedIssues (ready.go:385-493) shows external:* refs in the blocked_by list but doesn't check if they're actually satisfied using CheckExternalDep.\n\nThis can be confusing - an issue shows as blocked by external:project:capability even if that capability has been shipped (closed issue with provides: label exists).\n\nOptions:\n1. Call CheckExternalDep for each external ref and filter satisfied ones from blocked_by\n2. Add a note in output indicating external deps need lazy resolution\n3. Document this is expected behavior (bd blocked shows all deps, bd ready shows resolved state)\n\nRelated: GetReadyWork correctly filters by external deps, but GetBlockedIssues doesn't.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T23:45:05.286304-08:00","updated_at":"2025-12-22T21:48:38.086451-08:00","closed_at":"2025-12-22T21:48:38.086451-08:00","dependencies":[{"issue_id":"bd-396j","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:05.286971-08:00","created_by":"daemon"}]} +{"id":"bd-cbed9619.5","title":"Add content-addressable identity to Issue type","description":"**Summary:** Added content-addressable identity to Issue type by implementing a ContentHash field that generates a unique SHA256 fingerprint based on semantic issue content. 
This resolves issue identification challenges when multiple system instances create issues with identical IDs but different contents.\n\n**Key Decisions:**\n- Use SHA256 for content hashing\n- Hash excludes ID and timestamps\n- Compute hash automatically at creation/import time\n- Add database column for hash storage\n\n**Resolution:** Successfully implemented a deterministic content hashing mechanism that enables reliable issue identification across distributed systems, improving data integrity and collision detection.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-28T18:36:44.914967-07:00","updated_at":"2025-12-17T23:18:29.112933-08:00","deleted_at":"2025-12-17T23:18:29.112933-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-589x","title":"HANDOFF: Version 0.30.7 release in progress","description":"## Context\nDoing a 0.30.7 patch release with bug fixes.\n\n## What's done\n- Fixed #657: bd graph nil pointer crash (graph.go:102)\n- Fixed #652: Windows npm installer file lock (postinstall.js)\n- Updated CHANGELOG.md and info.go\n- Pushed to main, CI running (run 20390861825)\n- Created version-bump molecule template (bd-6s61) and instantiated for 0.30.7 (bd-8pyn)\n\n## In progress\nMolecule bd-8pyn has 3 remaining tasks:\n - bd-dxo7: Wait for CI to pass\n - bd-7l70: Verify release artifacts \n - bd-5c91: Update local installation\n\n## Check CI\n gh run list --repo steveyegge/beads --limit 1\n gh run view 20390861825 --repo steveyegge/beads\n\n## New feature filed\nbd-n777: Timer beads for scheduled agent callbacks\nDesign for Deacon-managed timers that can interrupt agents via tmux\n\n## Resume commands\n bd --no-daemon show bd-8pyn\n gh run list --repo steveyegge/beads --limit 1","status":"closed","priority":2,"issue_type":"message","created_at":"2025-12-19T23:06:14.902334-08:00","updated_at":"2025-12-20T00:49:51.927111-08:00","closed_at":"2025-12-20T00:25:59.596546-08:00"} 
+{"id":"bd-thgk","title":"Improve test coverage for internal/compact (18.2% β†’ 70%)","description":"Improve test coverage for internal/compact package from 17% to 70%.\n\n## Current State\n- Coverage: 17.3%\n- Files: compactor.go, git.go, haiku.go\n- Tests: compactor_test.go (minimal tests)\n\n## Functions Needing Tests\n\n### compactor.go (core compaction)\n- [ ] New - needs config validation tests\n- [ ] CompactTier1 - needs single issue compaction tests\n- [ ] CompactTier1Batch - needs batch processing tests\n- [ ] compactSingleWithResult - internal, test via public API\n\n### git.go\n- [ ] GetCurrentCommitHash - needs git repo fixture tests\n\n### haiku.go (AI summarization) - MOCK REQUIRED\n- [ ] NewHaikuClient - needs API key validation tests\n- [ ] SummarizeTier1 - needs mock API response tests\n- [ ] callWithRetry - needs retry logic tests\n- [ ] isRetryable - needs error classification tests\n- [ ] renderTier1Prompt - needs template rendering tests\n\n## Implementation Guide\n\n1. **Mock the Anthropic API:**\n ```go\n // Create mock HTTP server\n server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n json.NewEncoder(w).Encode(map[string]interface{}{\n \"content\": []map[string]string{{\"text\": \"Summarized content\"}},\n })\n }))\n defer server.Close()\n \n // Point client at mock\n client.baseURL = server.URL\n ```\n\n2. **Test scenarios:**\n - Successful compaction with AI summary\n - API failure with retry\n - Rate limit handling\n - Empty issue handling\n - Large issue truncation\n\n3. 
**Use test database:**\n ```go\n store, cleanup := testutil.NewTestStore(t)\n defer cleanup()\n ```\n\n## Success Criteria\n- Coverage β‰₯ 70%\n- AI calls properly mocked (no real API calls in tests)\n- Retry logic verified\n- Error paths covered\n\n## Run Tests\n```bash\ngo test -v -cover ./internal/compact\ngo test -race ./internal/compact\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-13T20:42:58.455767-08:00","updated_at":"2025-12-23T13:41:10.80832-08:00","closed_at":"2025-12-23T13:41:10.80832-08:00","dependencies":[{"issue_id":"bd-thgk","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.287377-08:00","created_by":"daemon"}]} +{"id":"bd-yx22","title":"Merge: bd-d28c","description":"branch: polecat/testcat\ntarget: main\nsource_issue: bd-d28c\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T21:33:15.490412-08:00","updated_at":"2025-12-23T21:36:38.584933-08:00","closed_at":"2025-12-23T21:36:38.584933-08:00"} +{"id":"bd-2oo.4","title":"Create migration script for edge field to dependency conversion","description":"Migration must:\n1. Read existing JSONL with old fields\n2. Convert field values to dependency records\n3. Write updated JSONL without old fields\n4. Handle edge cases (missing refs, duplicates)\n\nRun via: bd migrate or automatic on bd prime","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:01.760277-08:00","updated_at":"2025-12-18T02:49:10.602446-08:00","closed_at":"2025-12-18T02:49:10.602446-08:00","dependencies":[{"issue_id":"bd-2oo.4","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:01.760694-08:00","created_by":"daemon"}]} +{"id":"bd-iq7n","title":"Audit and fix JSONL filename mismatches across all repo clones","description":"## Problem\n\nMultiple clones of repos are configured with different JSONL filenames (issues.jsonl vs beads.jsonl), causing:\n1. 
JSONL files to be resurrected after deletion (one clone pushes issues.jsonl, another pushes beads.jsonl)\n2. Agents unable to see issues filed by other agents after sync\n3. Merge conflicts and data inconsistencies\n\n## Root Cause\n\nWhen repos were \"bd doctored\" or initialized at different times, some got issues.jsonl (old default) and others got beads.jsonl (Beads repo specific). These clones push their respective files, creating duplicates.\n\n## Task\n\nScan all repo clones under ~/src/ (1-2 levels deep) and standardize their JSONL configuration.\n\n### Step 1: Find all beads-enabled repos\n\n```bash\n# Find all directories named 'beads' at levels 1-2 under ~/src/\nfind ~/src -maxdepth 2 -type d -name beads\n```\n\n### Step 2: For each repo found, check configuration\n\nFor each directory from Step 1, check:\n- Does `.beads/metadata.json` exist?\n- What is the `jsonl_export` value?\n- What JSONL files actually exist in `.beads/`?\n- Are there multiple JSONL files (problem!)?\n\n### Step 3: Create audit report\n\nGenerate a report showing:\n```\nRepo Path | Config | Actual Files | Status\n----------------------------------- | ------------- | ---------------------- | --------\n~/src/beads | beads.jsonl | beads.jsonl | OK\n~/src/dave/beads | issues.jsonl | issues.jsonl | MISMATCH\n~/src/emma/beads | issues.jsonl | issues.jsonl, beads.jsonl | DUPLICATE!\n```\n\n### Step 4: Determine canonical name for each repo\n\nFor repos that are the SAME git repository (check `git remote -v`):\n- Group them together\n- Determine which JSONL filename should be canonical (majority wins, or beads.jsonl for the beads repo itself)\n- List which clones need to be updated\n\n### Step 5: Generate fix script\n\nCreate a script that for each mismatched clone:\n1. Updates `.beads/metadata.json` to use the canonical name\n2. If JSONL file needs renaming: `git mv .beads/old.jsonl .beads/new.jsonl`\n3. Removes any duplicate JSONL files: `git rm .beads/duplicate.jsonl`\n4. 
Commits the change\n5. Syncs: `bd sync`\n\n### Expected Output\n\n1. Audit report showing all repos and their config status\n2. List of repos grouped by git remote (same repository)\n3. Fix script or manual instructions for standardizing each repo\n4. Verification that after fixes, all clones of the same repo use the same JSONL filename\n\n### Edge Cases\n\n- Handle repos without metadata.json (use default discovery)\n- Handle repos with no git remote (standalone/local)\n- Handle repos that are not git repositories\n- Don't modify repos with uncommitted changes (warn instead)\n\n### Success Criteria\n\n- All clones of the same git repository use the same JSONL filename\n- No duplicate JSONL files in any repo\n- All configurations documented in metadata.json\n- bd doctor passes on all repos","status":"closed","priority":0,"issue_type":"task","created_at":"2025-11-21T23:58:35.044762-08:00","updated_at":"2025-12-17T23:13:40.531403-08:00","closed_at":"2025-12-17T16:50:59.510972-08:00"} +{"id":"bd-iw4z","title":"Compound visualization in bd mol show","description":"Enhance bd mol show to display compound structure.\n\nENHANCEMENTS:\n- Show constituent protos and how they're bonded\n- Display bond type (sequential/parallel) between components\n- Indicate attachment points\n- Show combined variable requirements across all protos\n\nEXAMPLE OUTPUT:\n\n Compound: proto-feature-with-tests\n Bonded from:\n └─ proto-feature (root)\n └─ proto-testing (sequential, after completion)\n \n Variables: {{name}}, {{version}}, {{test_suite}}\n \n Structure:\n proto-feature-with-tests\n β”œβ”€ Design feature {{name}}\n β”œβ”€ Implement core\n β”œβ”€ Write unit tests ← from proto-testing\n └─ Run test suite {{test_suite}} ← from 
proto-testing","status":"deferred","priority":2,"issue_type":"task","created_at":"2025-12-21T00:59:26.71318-08:00","updated_at":"2025-12-21T11:12:44.012871-08:00","dependencies":[{"issue_id":"bd-iw4z","depends_on_id":"bd-rnnr","type":"blocks","created_at":"2025-12-21T00:59:51.891643-08:00","created_by":"daemon"},{"issue_id":"bd-iw4z","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.500865-08:00","created_by":"daemon"}]} +{"id":"bd-4ri","title":"Fix TestFallbackToDirectModeEnablesFlush deadlock causing 10min test timeout","description":"## Problem\n\nTestFallbackToDirectModeEnablesFlush in direct_mode_test.go deadlocks for 9m59s before timing out, causing the entire test suite to take 10+ minutes instead of \u003c10 seconds.\n\n## Root Cause\n\nDatabase lock contention between test cleanup and flushToJSONL():\n- Test cleanup (line 36) tries to close DB via defer\n- flushToJSONL() (line 132) is still accessing DB\n- Results in deadlock: database/sql.(*DB).Close() waits for mutex while GetJSONLFileHash() holds it\n\n## Stack Trace Evidence\n\n```\ngoroutine 512 [sync.Mutex.Lock, 9 minutes]:\ndatabase/sql.(*DB).Close(0x14000643790)\n .../database/sql/sql.go:927 +0x84\ngithub.com/steveyegge/beads/cmd/bd.TestFallbackToDirectModeEnablesFlush.func1()\n .../direct_mode_test.go:36 +0xf4\n\nWhile goroutine running flushToJSONL() holds DB connection via GetJSONLFileHash()\n```\n\n## Impact\n\n- Test suite: 10+ minutes β†’ should be \u003c10 seconds\n- ALL other tests pass in ~4 seconds\n- This ONE test accounts for 99.9% of test runtime\n\n## Related\n\nThis is the EXACT same issue documented in MAIN_TEST_REFACTOR_NOTES.md for why main_test.go refactoring was deferred - global state manipulation + DB cleanup = deadlock.\n\n## Fix Approaches\n\n1. **Add proper cleanup sequencing** - stop flush goroutines BEFORE closing DB\n2. **Use test-specific DB lifecycle** - ensure flush completes before cleanup\n3. 
**Mock the flush mechanism** - avoid real DB for testing this code path \n4. **Add explicit timeout handling** - fail fast with clear error instead of hanging\n\n## Files\n\n- cmd/bd/direct_mode_test.go:36-132\n- cmd/bd/autoflush.go:353 (validateJSONLIntegrity)\n- cmd/bd/autoflush.go:508 (flushToJSONLWithState)\n\n## Acceptance\n\n- Test passes without timeout\n- Test suite completes in \u003c10 seconds\n- No deadlock between cleanup and flush operations","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-21T20:09:00.794372-05:00","updated_at":"2025-12-17T23:13:40.533279-08:00","closed_at":"2025-12-17T17:25:07.626617-08:00"} +{"id":"bd-2vh3.2","title":"Tier 1: Ephemeral repo routing","description":"Simplified: Make mol spawn set ephemeral=true on spawned issues.\n\n## The Fix\n\nModify cloneSubgraph() in template.go to set Ephemeral: true:\n\n```go\n// template.go:474\nnewIssue := \u0026types.Issue{\n Title: substituteVariables(oldIssue.Title, vars),\n // ... existing fields ...\n Ephemeral: true, // ADD THIS LINE\n}\n```\n\n## Optional: Add --persistent flag\n\nAdd flag to bd mol spawn for when you want spawned issues to persist:\n\n```bash\nbd mol spawn mol-code-review --var pr=123 # ephemeral (default)\nbd mol spawn mol-code-review --var pr=123 --persistent # not ephemeral\n```\n\n## Why This Is Simpler Than Original Design\n\nOriginal design proposed separate ephemeral repo routing. 
After code review:\n\n- Ephemeral field already exists in schema\n- bd cleanup --ephemeral already works\n- No new config needed\n- No multi-repo complexity\n\n## Acceptance Criteria\n\n- bd mol spawn creates issues with ephemeral=true\n- bd cleanup --ephemeral -f deletes them after closing\n- --persistent flag opts out of ephemeral\n- Existing molecules continue to work","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T12:57:36.661604-08:00","updated_at":"2025-12-21T13:43:22.990244-08:00","closed_at":"2025-12-21T13:43:22.990244-08:00","dependencies":[{"issue_id":"bd-2vh3.2","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:57:36.662118-08:00","created_by":"stevey"}]} +{"id":"bd-14ie","title":"Work on beads-2vn: Add simple built-in beads viewer (GH#6...","description":"Work on beads-2vn: Add simple built-in beads viewer (GH#654). Add bd list --pretty with --watch flag, tree view with priority/status symbols. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:56:47.305831-08:00","updated_at":"2025-12-19T23:28:32.429492-08:00","closed_at":"2025-12-19T23:23:13.928323-08:00"} +{"id":"bd-2ep8","title":"Update CHANGELOG.md with release notes","description":"Add meaningful release notes to CHANGELOG.md describing what changed in 0.30.7","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:48.649053-08:00","updated_at":"2025-12-19T22:57:31.69559-08:00","closed_at":"2025-12-19T22:57:31.69559-08:00","dependencies":[{"issue_id":"bd-2ep8","depends_on_id":"bd-8pyn","type":"parent-child","created_at":"2025-12-19T22:56:48.650816-08:00","created_by":"stevey"},{"issue_id":"bd-2ep8","depends_on_id":"bd-rupw","type":"blocks","created_at":"2025-12-19T22:56:48.651136-08:00","created_by":"stevey"}]} +{"id":"bd-7h5","title":"Add pinned field to issue schema","description":"Add boolean 'pinned' field to the issue 
schema. When true, the issue is marked as a persistent context marker that should not be treated as a work item.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:26.767247-08:00","updated_at":"2025-12-19T00:08:59.854605-08:00","closed_at":"2025-12-19T00:08:59.854605-08:00","dependencies":[{"issue_id":"bd-7h5","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:55.98635-08:00","created_by":"daemon"}]} +{"id":"bd-phwd","title":"Add timeout message for long-running git push operations","description":"When git push hangs waiting for credential/browser auth, show a periodic message to the user instead of appearing frozen. Add timeout messaging after N seconds of inactivity during git operations.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T11:44:57.318984535-07:00","updated_at":"2025-12-21T11:46:05.218023559-07:00","closed_at":"2025-12-21T11:46:05.218023559-07:00"} +{"id":"bd-sh4c","title":"Improve test coverage for cmd/bd/setup (28.4% β†’ 50%)","description":"The setup package has only 28.4% test coverage. Setup commands are critical for first-time user experience.\n\nCurrent coverage: 28.4%\nTarget coverage: 50%","status":"in_progress","priority":2,"issue_type":"task","created_at":"2025-12-13T20:43:04.409346-08:00","updated_at":"2025-12-23T22:31:50.472109-08:00"} +{"id":"bd-pn0t","title":"Add 0.33.2 to versionChanges in info.go","description":"Add new entry at TOP of versionChanges in cmd/bd/info.go with release notes from CHANGELOG.md. 
Must do before bump-version.sh --commit.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.760056-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-sal9","title":"bd mol current: soft cursor showing current/next step","description":"Add bd mol current command for molecule navigation orientation.\n\n## Usage\n\nbd mol current [mol-id]\n\nIf mol-id given, show status for that molecule.\nIf not given, infer from in_progress issues assigned to current agent.\n\n## Output\n\nYou're working on molecule gt-abc (Feature X)\n\n [done] gt-abc.1: Design\n [done] gt-abc.2: Scaffold \n [done] gt-abc.3: Implement\n [current] gt-abc.4: Write tests [in_progress] \u003c- YOU ARE HERE\n [pending] gt-abc.5: Documentation\n [pending] gt-abc.6: Exit decision\n\nProgress: 3/6 steps complete\n\n## Key behaviors\n- Shows full molecule structure with status indicators\n- Highlights current in_progress step\n- If no in_progress, highlights first ready step\n- Works without explicit cursor tracking (inferred from state)\n\n## Implementation notes\n- Query children of mol-id\n- Sort by dependency order\n- Find first in_progress or first ready\n- Format with status indicators\n\n## Gas Town integration\n- gt-lz13: Update templates with nav workflow\n- gt-um6q: Update docs with nav workflow","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-22T17:03:30.245964-08:00","updated_at":"2025-12-22T17:36:31.936007-08:00","closed_at":"2025-12-22T17:36:31.936007-08:00"} +{"id":"bd-dhza","title":"Reduce global state in cmd/bd/main.go (25+ variables)","description":"Code health review found main.go has 25+ global variables (lines 57-112):\n\n- dbPath, actor, store, jsonOutput, daemonClient, noDaemon\n- rootCtx, rootCancel, autoFlushEnabled\n- isDirty (marked 'USED BY LEGACY CODE')\n- 
needsFullExport (marked 'USED BY LEGACY CODE')\n- flushTimer (marked 'DEPRECATED')\n- flushMutex, storeMutex, storeActive\n- flushFailureCount, lastFlushError, flushManager\n- skipFinalFlush, autoImportEnabled\n- versionUpgradeDetected, previousVersion, upgradeAcknowledged\n\nImpact:\n- Hard to test individual commands\n- Race conditions possible\n- State leakage between commands\n\nFix: Move toward dependency injection. Remove deprecated variables. Consider cmd/bd/internal package.","notes":"Investigation found flushTimer, isDirty, needsFullExport are actively used by both legacy autoflush.go and new flush_manager.go. Requires coordinated refactor to migrate all callers to FlushManager first. Estimated: significant effort.","status":"in_progress","priority":3,"issue_type":"task","created_at":"2025-12-16T18:17:29.643293-08:00","updated_at":"2025-12-23T22:29:35.811067-08:00"} +{"id":"bd-au0.10","title":"Add global verbosity flags (--verbose, --quiet)","description":"Add consistent verbosity controls across all commands.\n\n**Current state:**\n- bd init has --quiet flag\n- No other commands have verbosity controls\n- Debug output controlled by BD_VERBOSE env var\n\n**Proposal:**\nAdd persistent flags:\n- --verbose / -v: Enable debug output\n- --quiet / -q: Suppress non-essential output\n\n**Implementation:**\n- Add to rootCmd.PersistentFlags()\n- Replace BD_VERBOSE checks with flag checks\n- Standardize output levels:\n * Quiet: Errors only\n * Normal: Errors + success messages\n * Verbose: Errors + success + debug info\n\n**Files to modify:**\n- cmd/bd/main.go (add flags)\n- internal/debug/debug.go (respect flags)\n- Update all commands to respect quiet mode\n\n**Testing:**\n- Verify --verbose shows debug output\n- Verify --quiet suppresses normal output\n- Ensure errors always show regardless of 
mode","status":"open","priority":3,"issue_type":"task","created_at":"2025-11-21T21:08:21.600209-05:00","updated_at":"2025-11-21T21:08:21.600209-05:00","dependencies":[{"issue_id":"bd-au0.10","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:08:21.602557-05:00","created_by":"daemon"}]} +{"id":"bd-118d","title":"Commit release v0.33.2","description":"Stage and commit the version bump:\n\n```bash\ngit add cmd/bd/version.go cmd/bd/info.go CHANGELOG.md\ngit commit -m \"release: v0.33.2\"\n```\n\nDo NOT push yet - tag first.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761725-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-pbh.1","title":"Update cmd/bd/version.go to 0.30.4","description":"Update the Version constant in cmd/bd/version.go:\n```go\nVersion = \"0.30.4\"\n```\n\n\n```verify\ngrep -q 'Version = \"0.30.4\"' cmd/bd/version.go\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.9462-08:00","updated_at":"2025-12-17T21:46:46.20387-08:00","closed_at":"2025-12-17T21:46:46.20387-08:00","dependencies":[{"issue_id":"bd-pbh.1","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.946633-08:00","created_by":"daemon"}]} +{"id":"bd-9l0h","title":"Run tests and linting","description":"go test -short ./... 
\u0026\u0026 golangci-lint run ./...","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:19.527602-08:00","updated_at":"2025-12-20T21:55:29.660914-08:00","closed_at":"2025-12-20T21:55:29.660914-08:00","dependencies":[{"issue_id":"bd-9l0h","depends_on_id":"bd-gocx","type":"blocks","created_at":"2025-12-20T21:53:29.753682-08:00","created_by":"daemon"},{"issue_id":"bd-9l0h","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:19.529203-08:00","created_by":"daemon"}]} +{"id":"bd-hlyr","title":"Merge: bd-m8ro","description":"branch: polecat/max\ntarget: main\nsource_issue: bd-m8ro\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:45:40.218445-08:00","updated_at":"2025-12-23T21:21:57.69886-08:00","closed_at":"2025-12-23T21:21:57.69886-08:00"} +{"id":"bd-kwro.11","title":"Documentation for messaging and graph links","description":"Document all new features.\n\nFiles to update:\n- README.md - brief mention of messaging capability\n- AGENTS.md - update for AI agents using bd mail\n- docs/messaging.md (new) - full messaging reference\n- docs/graph-links.md (new) - graph link reference\n- CHANGELOG.md - v0.30.2 release notes\n\nTopics to cover:\n- Mail commands with examples\n- Graph link types and use cases\n- Identity configuration\n- Hooks setup for notifications\n- Migration notes","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T03:02:39.548518-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-7tuu","title":"Commit and push release","description":"git add -A \u0026\u0026 git commit \u0026\u0026 git push to trigger 
CI","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:02.053382-08:00","updated_at":"2025-12-20T01:23:52.484043-08:00","closed_at":"2025-12-20T01:23:52.484043-08:00","dependencies":[{"issue_id":"bd-7tuu","depends_on_id":"bd-hw3w","type":"blocks","created_at":"2025-12-19T22:56:23.291591-08:00","created_by":"daemon"},{"issue_id":"bd-7tuu","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:15.021087-08:00","created_by":"daemon"}]} +{"id":"bd-a15d","title":"Add test files for internal/storage","description":"The internal/storage package has no test files at all. This package provides the storage interface abstraction.\n\nCurrent coverage: N/A (no test files)\nTarget: Add basic interface tests","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-13T20:43:11.363017-08:00","updated_at":"2025-12-13T21:01:20.925779-08:00"} +{"id":"bd-pbh.6","title":"Update integrations/beads-mcp/pyproject.toml to 0.30.4","description":"Update version in pyproject.toml:\n```toml\nversion = \"0.30.4\"\n```\n\n\n```verify\ngrep -q 'version = \"0.30.4\"' integrations/beads-mcp/pyproject.toml\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:10.994004-08:00","updated_at":"2025-12-17T21:46:46.246574-08:00","closed_at":"2025-12-17T21:46:46.246574-08:00","dependencies":[{"issue_id":"bd-pbh.6","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:10.994376-08:00","created_by":"daemon"}]} +{"id":"bd-lfak","title":"bd preflight: PR readiness checks for contributors","description":"## Vision\n\nEncode project-specific institutional knowledge into executable checks. 
CONTRIBUTING.md is documentation that's read once and forgotten; `bd preflight` is documentation that runs at exactly the right moment.\n\n## Problem Statement\n\nContributors face a \"last mile\" problem - they do the work but stumble on project-specific gotchas at PR time:\n- Nix vendorHash gets stale when go.sum changes\n- Beads artifacts leak into PRs (see bd-umbf for namespace solution)\n- Version mismatches between version.go and default.nix\n- Tests/lint not run locally before pushing\n- Other project-specific checks that only surface when CI fails\n\nThese are too obscure to remember, exist in docs nobody reads end-to-end, and waste CI round-trips.\n\n## Why beads?\n\nBeads already has a foothold in the contributor workflow. It knows:\n- Git state (staged files, branch, dirty status)\n- Project structure\n- The specific issue being worked on\n- Project-specific configuration\n\n## Proposed Interface\n\n### Tier 1: Checklist Mode (v1)\n\n $ bd preflight\n PR Readiness Checklist:\n\n [ ] Tests pass: go test -short ./...\n [ ] Lint passes: golangci-lint run ./...\n [ ] No beads pollution: check .beads/issues.jsonl diff\n [ ] Nix hash current: go.sum unchanged or vendorHash updated\n [ ] Version sync: version.go matches default.nix\n\n Run 'bd preflight --check' to validate automatically.\n\n### Tier 2: Check Mode (v2)\n\n $ bd preflight --check\n βœ“ Tests pass\n βœ“ Lint passes\n ⚠ Beads pollution: 3 issues in diff - are these project issues or personal?\n βœ— Nix hash stale: go.sum changed, vendorHash needs update\n Fix: sha256-KRR6dXzsSw8OmEHGBEVDBOoIgfoZ2p0541T9ayjGHlI=\n βœ“ Version sync\n\n 1 error, 1 warning. Run 'bd preflight --fix' to auto-fix where possible.\n\n### Tier 3: Fix Mode (v3)\n\n $ bd preflight --fix\n βœ“ Updated vendorHash in default.nix\n ⚠ Cannot auto-fix beads pollution - manual review needed\n\n## Checks to Implement\n\n| Check | Description | Auto-fixable |\n|-------|-------------|--------------|\n| tests | Run go test -short ./... 
| No |\n| lint | Run golangci-lint | Partial (gofmt) |\n| beads-pollution | Detect personal issues in diff | No (see bd-umbf) |\n| nix-hash | Detect stale vendorHash | Yes (if nix available) |\n| version-sync | version.go matches default.nix | Yes |\n| no-debug | No TODO/FIXME/console.log | Warn only |\n| clean-stage | No unintended files staged | Warn only |\n\n## Future: Configuration\n\nMake checks configurable per-project via .beads/preflight.yaml:\n\n preflight:\n checks:\n - name: tests\n run: go test -short ./...\n required: true\n - name: no-secrets\n pattern: \"**/*.env\"\n staged: deny\n - name: custom-check\n run: ./scripts/validate.sh\n\nThis lets any project using beads define their own preflight checks.\n\n## Implementation Phases\n\n### Phase 1: Static Checklist\n- Implement bd preflight with hardcoded checklist for beads\n- No execution, just prints what to check\n- Update CONTRIBUTING.md to reference it\n\n### Phase 2: Automated Checks\n- Implement bd preflight --check\n- Run tests, lint, detect stale hashes\n- Clear pass/fail/warn output\n\n### Phase 3: Auto-fix\n- Implement bd preflight --fix\n- Fix vendorHash, version sync\n- Integrate with bd-umbf solution for pollution\n\n### Phase 4: Configuration\n- .beads/preflight.yaml support\n- Make it useful for other projects using beads\n- Plugin/hook system for custom checks\n\n## Dependencies\n\n- bd-umbf: Namespace isolation for beads pollution (blocking for full solution)\n\n## Success Metrics\n\n- Fewer CI failures on first PR push\n- Reduced \"fix nix hash\" commits\n- Contributors report preflight caught issues before CI","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-13T18:01:39.587078-08:00","updated_at":"2025-12-13T18:01:39.587078-08:00","dependencies":[{"issue_id":"bd-lfak","depends_on_id":"bd-umbf","type":"blocks","created_at":"2025-12-13T18:01:46.059901-08:00","created_by":"daemon"}]} +{"id":"bd-ola6","title":"Implement transaction retry logic for 
SQLITE_BUSY","description":"BEGIN IMMEDIATE fails immediately on SQLITE_BUSY instead of retrying with exponential backoff.\n\nLocation: internal/storage/sqlite/sqlite.go:223-225\n\nProblem:\n- Under concurrent write load, BEGIN IMMEDIATE can fail with SQLITE_BUSY\n- Current implementation fails immediately instead of retrying\n- Results in spurious failures under normal concurrent usage\n\nSolution: Implement exponential backoff retry:\n- Retry up to N times (e.g., 5)\n- Backoff: 10ms, 20ms, 40ms, 80ms, 160ms\n- Check for context cancellation between retries\n- Only retry on SQLITE_BUSY/database locked errors\n\nImpact: Spurious failures under concurrent write load\n\nEffort: 3 hours","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-11-16T14:51:31.247147-08:00","updated_at":"2025-12-21T21:39:23.071036-08:00","closed_at":"2025-12-21T21:39:23.071036-08:00"} +{"id":"bd-p5za","title":"mol-christmas-launch: 3-day execution plan","description":"Christmas Launch Molecule - execute phases in order, survive restarts.\n\nPIN THIS BEAD. Check progress each session start.\n\n## Step: phase0-beads-foundation\nFix blocking issues before swarming:\n1. Verify gastown beads schema works: bd list --status=open\n2. Ensure bd mol bond exists (check bd-usro)\n3. Verify bd-2vh3 (squash) is filed\n\n## Step: phase1-polecat-loop\nSerial work on polecat execution:\n1. gt-9nf: Fresh polecats only\n2. gt-975: Molecule execution support\n3. gt-8v8: Refuse uncommitted work\nThen swarm: gt-e1y, gt-f8v, gt-eu9\nNeeds: phase0-beads-foundation\n\n## Step: phase2-refinery\nSerial work on refinery autonomy:\n1. gt-5gkd: Refinery CLAUDE.md\n2. gt-bj6f: Refinery context in gt prime\n3. gt-0qki: Refinery-Witness protocol\nNeeds: phase1-polecat-loop\n\n## Step: phase3-deacon\nHealth monitoring infrastructure:\n1. gt-5af.4: Simplify daemon\n2. gt-5af.7: Crew session patterns\n3. 
gt-976: Crew lifecycle\nNeeds: phase2-refinery\n\n## Step: phase4-code-review\nSelf-improvement flywheel:\n1. Define mol-code-review (gt-fjvo)\n2. Test on open MRs\n3. Integrate with Refinery\nNeeds: phase3-deacon\n\n## Step: phase5-polish\nDemo readiness:\n1. gt-b2hj: Find orphaned work\n2. Doctor checks\n3. Clean up open MRs\nNeeds: phase4-code-review\n\n## Step: verify-flywheel\nSuccess criteria:\n- gt spawn works with molecules\n- Refinery processes MRs autonomously\n- mol-code-review runs on a PR\n- bd cleanup --ephemeral works\nNeeds: phase5-polish","status":"closed","priority":0,"issue_type":"epic","created_at":"2025-12-20T21:20:02.462889-08:00","updated_at":"2025-12-21T17:23:25.471749-08:00","closed_at":"2025-12-21T17:23:25.471749-08:00"} +{"id":"bd-otli","title":"Wait for CI to pass","description":"Monitor GitHub Actions - all checks must pass before release artifacts are built","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:03.022281-08:00","updated_at":"2025-12-20T00:49:51.928591-08:00","closed_at":"2025-12-20T00:25:52.635223-08:00","dependencies":[{"issue_id":"bd-otli","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:15.097564-08:00","created_by":"daemon"},{"issue_id":"bd-otli","depends_on_id":"bd-7tuu","type":"blocks","created_at":"2025-12-19T22:56:23.360436-08:00","created_by":"daemon"}]} +{"id":"bd-by0d","title":"Work on beads-ldv: Fix bd graph crashes with nil pointer ...","description":"Work on beads-ldv: Fix bd graph crashes with nil pointer dereference (GH#657). Fix nil pointer in computeDependencyCounts at graph.go:428. 
When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:55:27.829359-08:00","updated_at":"2025-12-19T23:28:32.428314-08:00","closed_at":"2025-12-19T23:20:49.038441-08:00"} +{"id":"bd-4nqq","title":"Remove dead test code in info_test.go","description":"Code health review found cmd/bd/info_test.go has two tests permanently skipped:\n\n- TestInfoCommand\n- TestInfoCommandNoDaemon\n\nBoth skip with: 'Manual test - bd info command is working, see manual testing'\n\nThese are essentially dead code. Either automate them or remove them entirely.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-16T18:17:27.554019-08:00","updated_at":"2025-12-22T21:01:24.524963-08:00","closed_at":"2025-12-22T21:01:24.524963-08:00"} +{"id":"bd-bivq","title":"Merge: bd-9usz","description":"branch: polecat/slit\ntarget: main\nsource_issue: bd-9usz\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:42:19.995419-08:00","updated_at":"2025-12-23T21:21:57.700579-08:00","closed_at":"2025-12-23T21:21:57.700579-08:00"} +{"id":"bd-sj5y","title":"Daemon should be singleton and aggressively kill stale instances","description":"Found 2 bd daemons running (PIDs 76868, 77515) during shutdown. The daemon should:\n\n1. Be a singleton - only one instance per rig allowed\n2. On startup, check for existing daemon and kill it before starting\n3. Use a PID file or lock file to enforce this\n\nCurrently stale daemons can accumulate, causing confusion and resource waste.","notes":"**Investigation 2025-12-21:**\n\nThe singleton mechanism is already implemented and working correctly:\n\n1. **daemon.lock** uses flock (exclusive non-blocking) to prevent duplicate daemons\n2. **bd.sock.startlock** coordinates concurrent auto-starts via O_CREATE|O_EXCL\n3. 
**Registry** tracks all daemons globally in ~/.beads/registry.json\n\nTesting shows:\n- Trying to start a second daemon gives: 'Error: daemon already running (PID X)'\n- Multiple daemons for *different* rigs is expected/correct behavior\n\nThe original report ('Found 2 bd daemons running PIDs 76868, 77515') was likely:\n1. Two daemons for different rigs (expected), OR\n2. An edge case that's since been fixed\n\nConsider closing as RESOLVED or clarifying the original scenario.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T01:29:14.778949-08:00","updated_at":"2025-12-21T11:27:34.302585-08:00","closed_at":"2025-12-21T11:27:34.302585-08:00"} +{"id":"bd-abjw","title":"Consider consolidating config.yaml parsing into shared utility","description":"Multiple places parse config.yaml with custom structs:\n\n1. **autoimport.go:148** - `localConfig{SyncBranch}`\n2. **main.go:310** - strings.Contains for no-db (fragile, see bd-r6k2)\n3. **doctor.go:863** - strings.Contains for no-db (fragile, see bd-r6k2)\n4. 
**internal/config/config.go** - Uses viper (but caches at startup, problematic for tests)\n\nConsider creating a shared utility in `internal/configfile/` or extending the viper config:\n\n```go\n// internal/configfile/yaml.go\ntype YAMLConfig struct {\n SyncBranch string `yaml:\"sync-branch\"`\n NoDb bool `yaml:\"no-db\"`\n IssuePrefix string `yaml:\"issue-prefix\"`\n Author string `yaml:\"author\"`\n}\n\nfunc LoadYAML(beadsDir string) (*YAMLConfig, error) {\n // Parse config.yaml with proper YAML library\n}\n```\n\nBenefits:\n- Single source of truth for config.yaml structure\n- Proper YAML parsing everywhere\n- Easier to add new config fields\n\nTrade-off: May add complexity for simple one-off reads.","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-07T02:03:26.067311-08:00","updated_at":"2025-12-07T02:03:26.067311-08:00"} +{"id":"bd-oy6c","title":"Bump version in all files","description":"Run ./scripts/bump-version.sh 0.33.2 to update 10 version files. Then run with --commit after info.go is updated.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.759706-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-nqyp","title":"mol-beads-release","description":"Release checklist for beads version {{version}}.\n\nThis molecule ensures all release steps are completed properly.\nVariable: {{version}} - target version (e.g., 0.35.0)\n\n## Step: update-release-notes\nUpdate cmd/bd/info.go with release notes for {{version}}.\n\nAdd a new VersionChange entry at the top of versionChanges slice:\n```go\n{\n Version: \"{{version}}\",\n Date: \"YYYY-MM-DD\",\n Changes: []string{\n \"NEW: Feature description\",\n \"FIX: Bug fix description\",\n \"IMPROVED: Enhancement description\",\n },\n},\n```\n\nRun `git log --oneline v\u003cprevious\u003e..HEAD` to see what changed.\n\n## 
Step: update-changelog\nUpdate CHANGELOG.md with detailed release notes.\n\nAdd a new section after [Unreleased]:\n```markdown\n## [{{version}}] - YYYY-MM-DD\n\n### Added\n- **Feature name** (issue-id) - Description\n\n### Changed\n- **Change description** (issue-id)\n\n### Fixed\n- **Bug fix** (issue-id) - Description\n```\n\nSort by importance, not chronologically.\nNeeds: update-release-notes\n\n## Step: bump-version\nRun the version bump script.\n\n```bash\n./scripts/bump-version.sh {{version}}\n```\n\nThis updates version in all files:\n- cmd/bd/version.go\n- .claude-plugin/*.json\n- integrations/beads-mcp/pyproject.toml\n- npm-package/package.json\n- Hook templates\n\nNeeds: update-changelog\n\n## Step: run-tests\nRun tests and verify lint passes.\n\n```bash\ngo test -short ./...\n```\n\nCI will run full lint, but fix any obvious issues first.\nNeeds: bump-version\n\n## Step: commit-release\nCommit the release changes.\n\n```bash\ngit add -A\ngit commit -m \"chore: bump version to v{{version}}\"\n```\n\nNeeds: run-tests\n\n## Step: push-and-tag\nPush commit and create release tag.\n\n```bash\ngit push origin main\ngit tag v{{version}}\ngit push origin v{{version}}\n```\n\nThis triggers GitHub Actions release workflow.\nNeeds: commit-release\n\n## Step: wait-for-ci\nWait for GitHub Actions to complete.\n\nMonitor: https://github.com/steveyegge/beads/actions\n\nCI will:\n- Build binaries via GoReleaser\n- Create GitHub Release with assets\n- Publish to npm (@beads/bd)\n- Publish to PyPI (beads-mcp)\n- Update Homebrew tap\n\nWait until all jobs succeed (~5-10 min).\nNeeds: push-and-tag\n\n## Step: verify-release\nVerify the release is complete.\n\n```bash\n# Check GitHub release\ngh release view v{{version}}\n\n# Check Homebrew\nbrew update \u0026\u0026 brew info steveyegge/beads/bd\n\n# Check npm\nnpm view @beads/bd version\n\n# Check PyPI\npip index versions beads-mcp\n```\n\nNeeds: wait-for-ci\n\n## Step: update-local\nUpdate local 
installations.\n\n```bash\n# Upgrade Homebrew\nbrew upgrade steveyegge/beads/bd\n\n# Or install from source\n./scripts/bump-version.sh {{version}} --install\n\n# Install MCP locally\npip install -e integrations/beads-mcp\n\n# Restart daemons\npkill -f \"bd daemon\" || true\n```\n\nVerify: `bd --version` shows {{version}}\nNeeds: verify-release\n\n## Step: manual-publish\n(Optional) Manual publish if CI failed.\n\n```bash\n# npm (requires npm login)\n./scripts/bump-version.sh {{version}} --publish-npm\n\n# PyPI (requires TWINE credentials)\n./scripts/bump-version.sh {{version}} --publish-pypi\n\n# Or both\n./scripts/bump-version.sh {{version}} --publish-all\n```\n\nOnly needed if CI publishing failed.\nNeeds: wait-for-ci","status":"open","priority":2,"issue_type":"molecule","created_at":"2025-12-23T11:29:39.087936-08:00","updated_at":"2025-12-23T11:29:39.087936-08:00"} +{"id":"bd-r2n1","title":"Add integration tests for RPC server and event loops","description":"After adding basic unit tests for daemon utilities, the complex daemon functions still need integration tests:\n\nCore daemon lifecycle:\n- startRPCServer: Initializes and starts RPC server with proper error handling\n- runEventLoop: Polling-based sync loop with parent monitoring and signal handling\n- runDaemonLoop: Main daemon initialization and setup\n\nHealth checking:\n- isDaemonHealthy: Checks daemon responsiveness and health metrics\n- checkDaemonHealth: Periodic health verification\n\nThese require more complex test infrastructure (mock RPC, test contexts, signal handling) and should be tackled after the unit test foundation is in place.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:28:56.022996362-07:00","updated_at":"2025-12-18T12:44:32.167862713-07:00","closed_at":"2025-12-18T12:44:32.167862713-07:00","dependencies":[{"issue_id":"bd-r2n1","depends_on_id":"bd-4or","type":"discovered-from","created_at":"2025-12-18T12:28:56.045893852-07:00","created_by":"mhwilkie"}]} 
+{"id":"bd-9g1z","title":"Fix or remove TestFindJSONLPathDefault (issue #356)","description":"Code health review found .test-skip permanently skips TestFindJSONLPathDefault.\n\nThe test references issue #356 about wrong JSONL filename expectations (issues.jsonl vs beads.jsonl).\n\nTest file: internal/beads/beads_test.go\n\nThe underlying migration from beads.jsonl to issues.jsonl may be complete, so either:\n1. Fix the test expectations\n2. Remove the test if no longer needed\n3. Document why it remains skipped","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-16T18:17:31.33975-08:00","updated_at":"2025-12-22T21:24:50.357688-08:00","closed_at":"2025-12-22T21:24:50.357688-08:00"} +{"id":"bd-66w1","title":"Add external_projects to config schema","description":"Add external_projects mapping to .beads/config.yaml:\n\n```yaml\nexternal_projects:\n beads: ../beads\n gastown: ../gastown\n other: /absolute/path/to/project\n```\n\nUsed by bd ready and other commands to resolve external: references.\n\nPart of cross-project dependency system.\nSee: gastown/docs/cross-project-deps.md","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-21T22:37:39.245017-08:00","updated_at":"2025-12-21T23:03:19.81448-08:00","closed_at":"2025-12-21T23:03:19.81448-08:00"} +{"id":"bd-d148","title":"GH#483: Pre-commit hook fails unnecessarily when .beads removed","description":"Pre-commit hook fails on bd sync when .beads directory exists but user is on branch without beads. Should exit gracefully. See GitHub issue #483.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:40.049785-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-akcq","title":"Design molecule step hooks","description":"Hooks that fire between molecule steps. 
When a bead in a molecule closes, trigger hook that can spawn agent attention to prompts/requests. This enables reactive orchestration - the molecule drives, hooks respond. Gas Town feature built on Beads data plane.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-20T23:52:18.63487-08:00","updated_at":"2025-12-21T17:53:19.284064-08:00","closed_at":"2025-12-21T17:53:19.284064-08:00","dependencies":[{"issue_id":"bd-akcq","depends_on_id":"bd-icnf","type":"blocks","created_at":"2025-12-20T23:52:25.935274-08:00","created_by":"daemon"}]} +{"id":"bd-x3hi","title":"Support redirect files in .beads/ directory","description":"Gas Town creates polecat worktrees with .beads/redirect files that point to a shared beads database. The bd CLI should:\n\n1. When finding a .beads/ directory, check if it contains a 'redirect' file\n2. If redirect exists, read the relative path and use that as the beads directory\n3. This allows multiple git worktrees to share a single beads database\n\nExample:\n- polecats/alpha/.beads/redirect contains '../../mayor/rig/.beads'\n- bd commands from alpha should use mayor/rig/.beads\n\nCurrently bd ignores redirect files and either uses the local .beads/ or walks up to find a parent .beads/.\n\nRelated: gt-nriy (test message that can't be retrieved due to missing redirect support)","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T21:46:23.415172-08:00","updated_at":"2025-12-20T21:59:25.759664-08:00","closed_at":"2025-12-20T21:59:25.759664-08:00"} +{"id":"bd-28db","title":"Add 'bd status' command for issue database overview","description":"Implement a bd status command that provides a quick snapshot of the issue database state, similar to how git status shows working tree state.\n\nExpected output: Show summary including counts by state (open, in-progress, blocked, closed), recent activity (last 7 days), and quick overview without needing multiple queries.\n\nExample output showing issue counts, 
recent activity stats, and pointer to bd list for details.\n\nProposed options: --all (show all issues), --assigned (show issues assigned to current user), --json (JSON format output)\n\nUse cases: Quick project health check, onboarding for new contributors, integration with shell prompts or CI/CD, daily standup reference","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-02T17:25:59.203549-08:00","updated_at":"2025-12-21T17:54:00.205191-08:00","closed_at":"2025-12-21T17:54:00.205191-08:00"} +{"id":"bd-qqc.11","title":"Update go install bd to {{version}}","description":"Rebuild and install bd to ~/go/bin:\n\n```bash\ngo install ./cmd/bd\n~/go/bin/bd version # Verify shows {{version}}\n```\n\nNote: If ~/go/bin is in PATH before /opt/homebrew/bin, this is the version that runs by default.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:07:55.838013-08:00","updated_at":"2025-12-18T23:09:05.775582-08:00","closed_at":"2025-12-18T23:09:05.775582-08:00","dependencies":[{"issue_id":"bd-qqc.11","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T23:07:55.838432-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.11","depends_on_id":"bd-qqc.10","type":"blocks","created_at":"2025-12-18T23:08:19.629947-08:00","created_by":"daemon"}]} +{"id":"bd-o4qy","title":"Improve CheckStaleness error handling","description":"## Problem\n\nCheckStaleness returns 'false' (not stale) for multiple error conditions instead of returning errors. This masks problems.\n\n**Location:** internal/autoimport/autoimport.go:253-285\n\n## Edge Cases That Return False\n\n1. **Invalid last_import_time format** (line 259-262)\n2. **No JSONL file found** (line 267-277) \n3. 
**JSONL stat fails** (line 279-282)\n\n## Fix\n\nReturn errors for abnormal conditions:\n\n```go\nlastImportTime, err := time.Parse(time.RFC3339, lastImportStr)\nif err != nil {\n return false, fmt.Errorf(\"corrupted last_import_time: %w\", err)\n}\n\nif jsonlPath == \"\" {\n return false, fmt.Errorf(\"no JSONL file found\")\n}\n\nstat, err := os.Stat(jsonlPath)\nif err != nil {\n return false, fmt.Errorf(\"cannot stat JSONL: %w\", err)\n}\n```\n\n## Impact\nMedium - edge cases are rare but should be handled\n\n## Effort \n30 minutes - requires updating callers in RPC server","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-11-20T20:17:27.606219-05:00","updated_at":"2025-12-17T23:13:40.536905-08:00","closed_at":"2025-12-17T19:11:12.965289-08:00","dependencies":[{"issue_id":"bd-o4qy","depends_on_id":"bd-2q6d","type":"blocks","created_at":"2025-11-20T20:18:26.81065-05:00","created_by":"stevey"}]} +{"id":"bd-tbz3","title":"bd init UX Improvements","description":"bd init leaves users with incomplete setup, requiring manual bd doctor --fix. Issues found: (1) git hooks not installed if user declines prompt, (2) no auto-migration when CLI is upgraded, (3) stale merge driver configs from old versions. Fix by making bd init more robust with better defaults and auto-migration.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-11-21T23:16:00.333543-08:00","updated_at":"2025-12-23T04:20:51.88847-08:00","closed_at":"2025-12-23T04:20:51.88847-08:00"} +{"id":"bd-4q8","title":"bd cleanup --hard should skip tombstone creation for true permanent deletion","description":"## Problem\n\nWhen using bd cleanup --hard --older-than N --force, the command:\n1. Deletes closed issues older than N days (converting them to tombstones with NOW timestamp)\n2. 
Then tries to prune tombstones older than N days (finds none because they were just created)\n\nThis leaves the database bloated with fresh tombstones that will not be pruned.\n\n## Expected Behavior\n\nIn --hard mode, the deletion should be permanent without creating tombstones, since the user explicitly requested bypassing sync safety.\n\n## Workaround\n\nManually delete from database: sqlite3 .beads/beads.db 'DELETE FROM issues WHERE status=tombstone'\n\n## Fix Options\n\n1. In --hard mode, use a different delete path that does not create tombstones\n2. After deleting, immediately prune the just-created tombstones regardless of age\n3. Pass a skip_tombstone flag to the delete operation\n\nOption 1 is cleanest - --hard should mean permanent delete without tombstone.","status":"tombstone","priority":1,"issue_type":"bug","created_at":"2025-12-16T01:33:36.580657-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-kwro.6","title":"Mail Commands: bd mail send/inbox/read/ack","description":"Implement core mail commands in cmd/bd/mail.go\n\nCommands:\n- bd mail send \u003crecipient\u003e -s 'Subject' -m 'Body' [--urgent]\n - Creates issue with type=message, sender=identity, assignee=recipient\n - --urgent sets priority=0\n \n- bd mail inbox [--from \u003csender\u003e] [--priority \u003cn\u003e]\n - Lists open messages where assignee=my identity\n - Sorted by priority, then date\n \n- bd mail read \u003cid\u003e\n - Shows full message content (subject, body, sender, timestamp)\n - Does NOT close (separate from ack)\n \n- bd mail ack \u003cid\u003e\n - Marks message as read by closing it\n - Can ack multiple: bd mail ack \u003cid1\u003e \u003cid2\u003e ...\n\nRequires: Identity configuration 
(bd-kwro.7)","status":"tombstone","priority":0,"issue_type":"task","created_at":"2025-12-16T03:02:12.103755-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-m0tl","title":"bd create -f crashes with nil pointer dereference","description":"GitHub issue #674. The markdown import feature crashes at markdown.go:338 because global variables (store, ctx, actor) aren't initialized when createIssuesFromMarkdown is called. The function uses globals set by cobra command framework but is being called before they're ready. Need to either initialize globals at start of function or pass them as parameters.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T14:35:14.813012-08:00","updated_at":"2025-12-21T15:41:14.600953-08:00","closed_at":"2025-12-21T15:41:14.600953-08:00"} +{"id":"bd-g9eu","title":"Investigate TestRoutingIntegration failure","description":"TestRoutingIntegration/maintainer_with_SSH_remote failed during pre-commit check with \"expected role maintainer, got contributor\".\nThis occurred while running `go test -short ./...` on darwin/arm64.\nThe failure appears unrelated to storage/sqlite changes.\nNeed to investigate if this is a flaky test or environmental issue.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-20T15:55:19.337094-08:00","updated_at":"2025-11-20T15:55:19.337094-08:00"} +{"id":"bd-lsv4","title":"GH#444: Fix inconsistent status naming in_progress vs in-progress","description":"Documentation uses in-progress (hyphen) but code expects in_progress (underscore). Update all docs to use canonical in_progress. 
See GitHub issue #444.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:14.349425-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-o55a","title":"GH#509: bd doesn't find .beads when running from nested worktrees","description":"When worktrees are nested under main repo (.worktrees/feature/), bd stops at worktree git root instead of continuing to find .beads in parent. See GitHub issue #509 for detailed fix suggestion.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:20.281591-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-iic1","title":"Phase 2.2: Switch bdt storage to TOON format","description":"Currently bdt stores issues in JSONL format in issues.toon file. Phase 2.2 must implement actual TOON format storage - this is the fundamental goal of the bdtoon project.\n\n## Current State (Phase 2.1)\n- issues.toon stores JSONL (intermediate format)\n- --toon flag allows output in TOON format for LLM consumption\n- Problem: We're not actually using TOON as the fundamental storage format\n\n## Required Work (Phase 2.2)\n1. Switch issue file I/O to write TOON format instead of JSONL\n - Update cmd/bdt/storage.go to use EncodeTOON for writing\n - Update cmd/bdt/storage.go to decode TOON (currently decodes JSON)\n - Ensure round-trip: write TOON β†’ read TOON β†’ write TOON is byte-identical\n\n2. Update command implementations\n - cmd/bdt/create.go: Write newly created issues to TOON format\n - cmd/bdt/list.go: Read issues from TOON format\n - cmd/bdt/show.go: Read from TOON format\n - cmd/bdt/import.go: Convert imported JSONL to TOON\n - cmd/bdt/export.go: Export TOON to JSONL (for bd compatibility)\n\n3. 
Implement TOON parser that handles gotoon's encoder-only limitation\n - Since gotoon doesn't decode TOON, need custom TOONβ†’JSON decoder\n - OR continue storing TOON but decoding via intermediate JSON conversion\n\n4. Git merge driver optimization\n - TOON is line-oriented, better for 3-way merges than binary formats\n - Configure git merge driver for .toon files\n\n5. Comprehensive testing\n - Round-trip tests: Issue β†’ TOON β†’ storage β†’ read β†’ Issue\n - Merge conflict resolution tests with TOON format\n - Large issue set performance tests\n\n## Success Criteria\n- issues.toon stores actual TOON format (not JSONL)\n- bdt list reads from TOON file\n- bdt create writes to TOON file\n- Round-trip: create issue β†’ list β†’ show returns identical data\n- All 65+ tests still passing\n- Performance comparable to JSONL storage","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:05:41.394964404-07:00","updated_at":"2025-12-19T14:37:17.879612634-07:00","closed_at":"2025-12-19T14:37:17.879612634-07:00"} +{"id":"bd-si4g","title":"Verify release artifacts","description":"Check GitHub releases page - binaries for darwin/linux/windows should be available","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:04.183029-08:00","updated_at":"2025-12-20T00:49:51.92894-08:00","closed_at":"2025-12-20T00:25:52.720816-08:00","dependencies":[{"issue_id":"bd-si4g","depends_on_id":"bd-6s61","type":"parent-child","created_at":"2025-12-19T22:56:15.173619-08:00","created_by":"daemon"},{"issue_id":"bd-si4g","depends_on_id":"bd-otli","type":"blocks","created_at":"2025-12-19T22:56:23.428507-08:00","created_by":"daemon"}]} +{"id":"bd-0vg","title":"Pinned issues: persistent context markers","description":"Add ability to pin issues so they remain visible and are excluded from work-finding commands. 
Pinned issues serve as persistent context markers (handoffs, architectural notes, recovery instructions) that should not be claimed as work items.\n\nUse Cases:\n1. Handoff messages - Pin session handoffs so new agents always see them\n2. Architecture decisions - Pin ADRs or design notes for reference \n3. Recovery context - Pin amnesia-cure notes that help agents orient\n\nCore commands: bd pin, bd unpin, bd list --pinned/--no-pinned","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-18T23:33:10.911092-08:00","updated_at":"2025-12-21T11:30:28.989696-08:00","closed_at":"2025-12-21T11:30:28.989696-08:00"} +{"id":"bd-4or","title":"Add tests for daemon functionality","description":"Critical daemon functions have 0% test coverage including daemon lifecycle, health checks, and RPC server functionality. These are essential for system reliability and need comprehensive test coverage.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:26.916050465-07:00","updated_at":"2025-12-19T09:54:57.017114822-07:00","closed_at":"2025-12-18T12:29:06.134014366-07:00","dependencies":[{"issue_id":"bd-4or","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:26.919347253-07:00","created_by":"matt"}]} +{"id":"bd-47tn","title":"Add bd daemon --stop-all command to kill all daemon processes","description":"Currently there's no easy way to stop all running bd daemon processes. Users must resort to pkill -f 'bd daemon' or similar shell commands.\n\nAdd a --stop-all flag to bd daemon that:\n1. Finds all running bd daemon processes (not just the current repo's daemon)\n2. Gracefully stops them all\n3. 
Reports how many were stopped\n\nThis is useful when:\n- Multiple daemons are running and causing race conditions\n- User wants a clean slate before running bd sync\n- Debugging daemon-related issues","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-13T06:34:45.080633-08:00","updated_at":"2025-12-16T01:14:49.501989-08:00","closed_at":"2025-12-14T17:33:03.057089-08:00"} +{"id":"bd-bijf","title":"Merge: bd-l13p","description":"branch: polecat/nux\ntarget: main\nsource_issue: bd-l13p\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T16:41:32.467246-08:00","updated_at":"2025-12-23T19:12:08.348252-08:00","closed_at":"2025-12-23T19:12:08.348252-08:00"} +{"id":"bd-o5xe","title":"Molecule bonding: composable workflow templates","description":"Vision: Molecules should be composable like LEGO bricks or Mad Max war rig sections. Bonding lets you attach molecules together to create compound workflows.\n\nTHREE BONDING CONTEXTS:\n1. Template-time: bd mol bond A B β†’ Create reusable compound proto\n2. Spawn-time: bd mol spawn A --attach B β†’ Attach modules when instantiating \n3. 
Runtime: bd mol attach epic B β†’ Add to running workflow\n\nBOND TYPES:\n- Sequential: B after A completes (feature β†’ deploy)\n- Parallel: B runs alongside A (feature + docs)\n- Conditional: B only if A fails (feature β†’ hotfix)\n\nBOND POINTS (Attachment Sites):\n- Default: B depends on A root epic completion\n- Explicit: --after issue-id for specific attachment\n- Future: Named bond points in proto definitions\n\nVARIABLE FLOW:\n- Shared namespace between bonded molecules\n- Warn on variable name conflicts\n- Future: explicit mapping with --map\n\nDATA MODEL: Issues track bonded_from to preserve compound lineage.\n\nSUCCESS CRITERIA:\n- Can bond two protos into a compound proto\n- Can spawn with --attach for on-the-fly composition\n- Can attach molecules to running workflows\n- Compound structure visible in bd mol show\n- Variables flow correctly between bonded molecules","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-21T00:58:35.479009-08:00","updated_at":"2025-12-21T17:19:45.871164-08:00","closed_at":"2025-12-21T17:19:45.871164-08:00"} +{"id":"bd-lxzx","title":"Add close_reason to JSONL export format documentation","description":"PR #551 now persists close_reason to the database, but there's a question about whether this field should be exported to JSONL format.\n\n## Current State\n- close_reason is stored in issues.close_reason column\n- close_reason is also stored in events table (audit trail)\n- The JSONL export format may or may not include close_reason\n\n## Questions\n1. Should close_reason be exported to JSONL format?\n2. If yes, where should it go (root level or nested in events)?\n3. Should there be any special handling to avoid duplication?\n4. 
How should close_reason be handled during JSONL import?\n\n## Why This Matters\n- JSONL is the git-friendly sync format\n- Other beads instances import from JSONL\n- close_reason is meaningful data that should be preserved across clones\n\n## Suggested Action\n- Check if close_reason is currently exported in JSONL\n- If not, add it to the export schema\n- Document the field in JSONL format spec\n- Add tests for round-trip (export -\u003e import -\u003e verify close_reason)","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:25:17.414916-08:00","updated_at":"2025-12-14T14:25:17.414916-08:00","dependencies":[{"issue_id":"bd-lxzx","depends_on_id":"bd-z86n","type":"discovered-from","created_at":"2025-12-14T14:25:17.416131-08:00","created_by":"stevey"}]} +{"id":"bd-a3sj","title":"RemoveDependency fails on external deps - FK violation in dirty_issues","description":"In dependencies.go:225, RemoveDependency marks BOTH issueID and dependsOnID as dirty. For external refs (e.g., external:project:capability), dependsOnID doesn't exist in the issues table. 
This causes FK violation since dirty_issues.issue_id has FK constraint to issues.id.\n\nFix: Check if dependsOnID starts with 'external:' and only mark source issue as dirty, matching the logic in AddDependency (lines 162-170).\n\nRepro: bd dep rm \u003cissue\u003e external:project:capability","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T23:44:51.981138-08:00","updated_at":"2025-12-22T17:48:29.062424-08:00","closed_at":"2025-12-22T17:48:29.062424-08:00","dependencies":[{"issue_id":"bd-a3sj","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:44:51.982343-08:00","created_by":"daemon"}]} +{"id":"bd-awmf","title":"Merge: bd-dtl8","description":"branch: polecat/dag\ntarget: main\nsource_issue: bd-dtl8\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:47:15.147476-08:00","updated_at":"2025-12-23T21:21:57.690692-08:00","closed_at":"2025-12-23T21:21:57.690692-08:00"} +{"id":"bd-66l4","title":"Runtime bonding: bd mol attach","description":"Attach a molecule to an already-running workflow.\n\nCOMMAND: bd mol attach \u003cepic-id\u003e \u003cproto\u003e [--after \u003cissue-id\u003e]\n\nBEHAVIOR:\n- Resolve running epic and proto\n- Spawn proto as new subtree\n- Wire to specified attachment point (or epic root)\n- Handle in-progress issues: new work doesn't block completed work\n\nUSE CASES:\n- Discovered need for docs while implementing feature\n- Hotfix needs attaching to release workflow\n- Additional testing scope identified mid-flight\n\nFLAGS:\n- --after ISSUE: Specific attachment point within epic\n- --type: sequential (default) or parallel\n- --var: Variables for the attached proto\n\nCONSIDERATIONS:\n- What if epic is already closed? Error or reopen?\n- What if attachment point issue is closed? 
Attach as ready-to-work?","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T00:59:16.920483-08:00","updated_at":"2025-12-21T01:08:43.530597-08:00","closed_at":"2025-12-21T01:08:43.530597-08:00","dependencies":[{"issue_id":"bd-66l4","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.435542-08:00","created_by":"daemon"},{"issue_id":"bd-66l4","depends_on_id":"bd-o91r","type":"blocks","created_at":"2025-12-21T00:59:51.813782-08:00","created_by":"daemon"}]} +{"id":"bd-4lm3","title":"Correction: Pinned field already in v0.31.0","description":"Quick correction - the Pinned field is already in the current bd v0.31.0:\n\n```go\n// In beads internal/types/types.go\nPinned bool `json:\"pinned,omitempty\"`\n```\n\nSo you just need to:\n1. Add `Pinned bool `json:\"pinned,omitempty\"`` to BeadsMessage in types.go\n2. Sort pinned messages first in listBeads() after fetching\n\nNo migration needed - the field is already there.\n\n-- Mayor","status":"closed","priority":2,"issue_type":"message","created_at":"2025-12-20T17:52:27.321458-08:00","updated_at":"2025-12-21T17:52:18.617995-08:00","closed_at":"2025-12-21T17:52:18.617995-08:00"} +{"id":"bd-cb64c226.13","title":"Audit Current Cache Usage","description":"**Summary:** Comprehensive audit of storage cache usage revealed minimal dependency across server components, with most calls following a consistent pattern. 
Investigation confirmed cache was largely unnecessary in single-repository daemon architecture.\n\n**Key Decisions:** \n- Remove all cache-related environment variables\n- Delete server struct cache management fields\n- Eliminate cache-specific test files\n- Deprecate req.Cwd routing logic\n\n**Resolution:** Cache system will be completely removed, simplifying server storage access and reducing unnecessary complexity with negligible performance impact.","notes":"AUDIT COMPLETE\n\ngetStorageForRequest() callers: 17 production + 11 test\n- server_issues_epics.go: 8 calls\n- server_labels_deps_comments.go: 4 calls \n- server_export_import_auto.go: 2 calls\n- server_compact.go: 2 calls\n- server_routing_validation_diagnostics.go: 1 call\n- server_eviction_test.go: 11 calls (DELETE entire file)\n\nPattern everywhere: store, err := s.getStorageForRequest(req) β†’ store := s.storage\n\nreq.Cwd usage: Only for multi-repo routing. Local daemon always serves 1 repo, so routing is unused.\n\nMCP server: Uses separate daemons per repo (no req.Cwd usage found). NOT affected by cache removal.\n\nCache env vars to deprecate:\n- BEADS_DAEMON_MAX_CACHE_SIZE (used in server_core.go:63)\n- BEADS_DAEMON_CACHE_TTL (used in server_core.go:72)\n- BEADS_DAEMON_MEMORY_THRESHOLD_MB (used in server_cache_storage.go:47)\n\nServer struct fields to remove:\n- storageCache, cacheMu, maxCacheSize, cacheTTL, cleanupTicker, cacheHits, cacheMisses\n\nTests to delete:\n- server_eviction_test.go (entire file - 9 tests)\n- limits_test.go cache assertions\n\nSpecial consideration: ValidateDatabase endpoint uses findDatabaseForCwd() outside cache. 
Verify if used, then remove or inline.\n\nSafe to proceed with removal - cache always had 1 entry in local daemon model.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:19.3723-07:00","updated_at":"2025-12-17T23:18:29.111369-08:00","deleted_at":"2025-12-17T23:18:29.111369-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-fmdy","title":"Merge: bd-kzda","description":"branch: polecat/toast\ntarget: main\nsource_issue: bd-kzda\nrig: beads","status":"closed","priority":3,"issue_type":"merge-request","created_at":"2025-12-23T00:27:28.952413-08:00","updated_at":"2025-12-23T01:33:25.731326-08:00","closed_at":"2025-12-23T01:33:25.731326-08:00"} +{"id":"bd-au0","title":"Command Set Standardization \u0026 Flag Consistency","description":"Comprehensive improvements to bd command set based on 2025 audit findings.\n\n## Background\nSee docs/command-audit-2025.md for detailed analysis.\n\n## Goals\n1. Standardize flag naming and behavior across all commands\n2. Add missing flags for feature parity\n3. Fix naming confusion\n4. 
Improve consistency in JSON output\n\n## Success Criteria\n- All mutating commands support --dry-run (no --preview variants)\n- bd update supports label operations\n- bd search has filter parity with bd list\n- Priority flags accept both int and P0-P4 format everywhere\n- JSON output is consistent across all commands","status":"open","priority":2,"issue_type":"epic","created_at":"2025-11-21T21:05:55.672749-05:00","updated_at":"2025-11-21T21:05:55.672749-05:00"} +{"id":"bd-xctp","title":"GH#519: bd sync fails when sync.branch is currently checked-out branch","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:06:05.319281-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} +{"id":"bd-etyv","title":"Smart --var detection for mol distill","description":"Implemented bidirectional syntax support for mol distill --var flag.\n\n**Problem:**\n- spawn uses: --var variable=value (assignment style)\n- distill used: --var value=variable (substitution style)\n- Agents would naturally guess spawn-style for both\n\n**Solution:**\nSmart detection that accepts BOTH syntaxes by checking which side appears in the epic text:\n- --var branch=feature-auth β†’ finds 'feature-auth' in text β†’ works\n- --var feature-auth=branch β†’ finds 'feature-auth' in text β†’ also works\n\n**Changes:**\n- Added parseDistillVar() with smart detection\n- Added collectSubgraphText() helper\n- Restructured runMolDistill to load subgraph before parsing vars\n- Updated help text to document both syntaxes\n- Added comprehensive tests in mol_test.go\n\n**Edge cases handled:**\n- Both sides found: prefers spawn-style (more common guess)\n- Neither found: helpful error message\n- Empty sides: validation error\n- Values containing '=' (e.g., KEY=VALUE): works via SplitN\n\nEmbodies the Beads philosophy: watch what agents do, make their guess 
correct.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T11:08:50.83923-08:00","updated_at":"2025-12-21T11:08:56.432536-08:00","closed_at":"2025-12-21T11:08:56.432536-08:00"} +{"id":"bd-6rl","title":"Merge3Way public API does not expose TTL parameter","description":"The public Merge3Way() function in merge.go does not allow callers to configure the tombstone TTL. It hard-codes the default via merge3WayWithTTL(). While merge3WayWithTTL() exists, it is unexported (lowercase). This means the CLI and tests cannot configure TTL at merge time. Use cases: testing with different TTL values, per-repository TTL configuration, debugging with short TTL, supporting --ttl flag in bd merge command (mentioned in design doc bd-zvg). Recommendation: Export Merge3WayWithTTL (rename to uppercase). Files: internal/merge/merge.go:77, 292-298","status":"open","priority":3,"issue_type":"feature","created_at":"2025-12-05T16:36:15.756814-08:00","updated_at":"2025-12-05T16:36:15.756814-08:00"} +{"id":"bd-2vh3.5","title":"Tier 4: Auto-squash on molecule completion","description":"Automatically squash molecules when they reach terminal state.\n\n## Integration Points\n\n1. Hook into molecule completion handler\n2. Detect when all steps are done/failed\n3. 
Trigger squash automatically\n\n## Config\n\nbd config set mol.auto_squash true # Default: false\nbd config set mol.auto_squash_on_success true # Only on success\nbd config set mol.auto_squash_delay '5m' # Wait before squash\n\n## Implementation Options\n\n### Option A: Post-Completion Hook\nIn mol completion handler:\n- Check if auto_squash enabled\n- Call Squash() after terminal state\n\n### Option B: Git Hook\nIn .beads/hooks/post-commit:\n- bd mol squash --auto\n\n### Option C: Daemon Background Task\n- Daemon periodically checks for squashable molecules\n- Squashes in background\n\n## Acceptance Criteria\n\n- Completed molecules auto-squash without manual intervention\n- Configurable delay before squash\n- Option to squash only on success vs always\n- Works with both daemon and no-daemon modes","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T12:58:13.345577-08:00","updated_at":"2025-12-21T17:40:39.794527-08:00","closed_at":"2025-12-21T17:40:39.794527-08:00","dependencies":[{"issue_id":"bd-2vh3.5","depends_on_id":"bd-2vh3.4","type":"blocks","created_at":"2025-12-21T12:58:22.797141-08:00","created_by":"stevey"},{"issue_id":"bd-2vh3.5","depends_on_id":"bd-2vh3","type":"parent-child","created_at":"2025-12-21T12:58:13.346152-08:00","created_by":"stevey"}]} +{"id":"bd-2v0f","title":"Add gate issue type to beads","description":"Add 'gate' as a new issue type for async coordination.\n\n## Changes Needed\n- Add 'gate' to IssueType enum in internal/types/types.go\n- Update validation to accept gate type\n- Update CLI help text and completion\n\n## Gate Type Semantics\n- Gates are ephemeral (live in wisp storage)\n- Managed by Deacon patrol\n- Have special fields: await_type, await_id, timeout, 
waiters[]","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-23T11:44:31.331897-08:00","updated_at":"2025-12-23T11:47:06.287781-08:00","closed_at":"2025-12-23T11:47:06.287781-08:00","dependencies":[{"issue_id":"bd-2v0f","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.659005-08:00","created_by":"daemon"}]} +{"id":"bd-a9y3","title":"Add composite index (status, priority) for common list queries","description":"SearchIssues and GetReadyWork frequently filter by status and sort by priority. Currently uses two separate indexes.\n\n**Common query pattern (queries.go:1646-1647):**\n```sql\nWHERE status = ? \nORDER BY priority ASC, created_at DESC\n```\n\n**Problem:** Index merge or full scan when both columns are used.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_issues_status_priority ON issues(status, priority);\n```\n\n**Expected impact:** Faster bd list, bd ready with filters. Particularly noticeable at 10K+ issues.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T22:58:50.515275-08:00","updated_at":"2025-12-22T23:15:13.838976-08:00","closed_at":"2025-12-22T23:15:13.838976-08:00","dependencies":[{"issue_id":"bd-a9y3","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:50.516072-08:00","created_by":"daemon"}]} +{"id":"bd-6ss","title":"Improve test coverage","description":"The test suite reports less than 45% code coverage. Identify the specific uncovered areas of the codebase, including modules, functions, or features. 
Rank them by potential impact on system reliability and business value, from most to least, and provide actionable recommendations for improving coverage in each area.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-18T06:54:23.036822442-07:00","updated_at":"2025-12-18T07:17:49.245940799-07:00","closed_at":"2025-12-18T07:17:49.245940799-07:00"} +{"id":"bd-dsdh","title":"Document sync.branch 'always dirty' working tree behavior","description":"## Context\n\nWhen sync.branch is configured, the .beads/issues.jsonl file in main's working tree is ALWAYS dirty. This is by design:\n\n1. bd sync commits to beads-sync branch (via worktree)\n2. bd sync copies JSONL to main's working tree (so CLI commands work)\n3. This copy is NOT committed to main (to reduce commit noise)\n\nContributors who watch main branch history pushed for sync.branch to avoid constant beads commit noise. But users need to understand the trade-off.\n\n## Documentation Needed\n\nUpdate README.md sync.branch section with:\n\n1. **Clear explanation** of why .beads/ is always dirty on main\n2. **\"Be Zen about it\"** - this is expected, not a bug\n3. **Workflow options:**\n - Accept dirty state, use `bd sync --merge` periodically to snapshot to main\n - Or disable sync.branch if clean working tree is more important\n4. **Shell alias tip** to hide beads from git status:\n ```bash\n alias gs='git status -- \":!.beads/\"'\n ```\n5. 
**When to merge**: releases, milestones, or periodic snapshots\n\n## Related\n\n- bd-7b7h: Fix that allows bd sync --merge to work with dirty .beads/\n- bd-elqd: Investigation that identified this as expected behavior","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T23:16:12.253559-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-kwro","title":"Beads Messaging \u0026 Knowledge Graph (v0.30.2)","description":"Add messaging semantics and extended graph links to Beads, enabling it to serve as\nthe universal substrate for knowledge work - issues, messages, documents, and threads\nas nodes in a queryable graph.\n\n## Motivation\n\nGas Town (GGT) needs inter-agent communication. Rather than a separate mail system,\ncollapse messaging into Beads - one system, one sync, one query interface, all in git.\n\nThis also positions Beads as a foundation for:\n- Company-wide issue tracking (like Notion)\n- Threaded conversations (like Reddit/Slack)\n- Knowledge graphs with loose associations\n- Arbitrary workflow UIs built on top\n\n## New Issue Type\n\n**message** - ephemeral communication between workers\n- sender: who sent it\n- assignee: recipient\n- priority: P0 (urgent) to P4 (routine)\n- status: open (unread) -\u003e closed (read)\n- ephemeral: true = can be bulk-deleted after swarm\n\n## New Graph Links\n\n**replies_to** - conversation threading\n- Messages reply to messages\n- Enables Reddit-style nested threads\n- Different from parent_id (not hierarchy, its conversation flow)\n\n**relates_to** - loose see also associations\n- Bidirectional knowledge graph edges\n- Not blocking, not hierarchical, just related\n- Enables discovery and traversal\n\n**duplicates** - deduplication at scale\n- Mark issue B as duplicate of canonical issue A\n- Close B, link to A\n- Essential for large issue 
databases\n\n**supersedes** - version chains\n- Design Doc v2 supersedes Design Doc v1\n- Track evolution of artifacts\n\n## New Fields (optional, any issue type)\n\n- sender (string) - who created this (for messages)\n- ephemeral (boolean) - can be bulk-deleted when closed\n\n## New Commands\n\nMessaging:\n- bd mail send \u003crecipient\u003e -s Subject -m Body\n- bd mail inbox (list open messages for me)\n- bd mail read \u003cid\u003e (show message content)\n- bd mail ack \u003cid\u003e (mark as read/close)\n- bd mail reply \u003cid\u003e -m Response (reply to thread)\n\nGraph links:\n- bd relate \u003cid1\u003e \u003cid2\u003e (create relates_to link)\n- bd duplicate \u003cid\u003e --of \u003ccanonical\u003e (mark as duplicate)\n- bd supersede \u003cid\u003e --with \u003cnew\u003e (mark superseded)\n\nCleanup:\n- bd cleanup --ephemeral (delete closed ephemeral issues)\n\n## Identity Configuration\n\nWorkers need identity for sender field:\n- BEADS_IDENTITY env var\n- Or .beads/config.json: identity field\n\n## Hooks (for GGT integration)\n\nBeads as platform - extensible without knowing about GGT.\nHook files in .beads/hooks/:\n- on_create (runs after bd create)\n- on_update (runs after bd update)\n- on_close (runs after bd close)\n- on_message (runs after bd mail send)\n\nGGT registers hooks to notify daemons of new messages.\n\n## Schema Changes (Migration Required)\n\nAdd to issue schema:\n- type: message (new valid type)\n- sender: string (optional)\n- ephemeral: boolean (optional)\n- replies_to: string (issue ID, optional)\n- relates_to: []string (issue IDs, optional)\n- duplicates: string (canonical issue ID, optional)\n- superseded_by: string (new issue ID, optional)\n\nMigration adds fields as optional - existing beads unchanged.\n\n## Success Criteria\n\n1. bd mail send/inbox/read/ack/reply work end-to-end\n2. replies_to creates proper thread structure\n3. relates_to, duplicates, supersedes links queryable\n4. 
Hooks fire on create/update/close/message\n5. Identity configurable via env or config\n6. Migration preserves all existing data\n7. All new features have tests","status":"tombstone","priority":0,"issue_type":"epic","created_at":"2025-12-16T03:00:53.912223-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"epic"} +{"id":"bd-bhg7","title":"Merge: bd-io8c","description":"branch: polecat/Syncer\ntarget: main\nsource_issue: bd-io8c\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:46:46.954667-08:00","updated_at":"2025-12-23T19:12:08.34433-08:00","closed_at":"2025-12-23T19:12:08.34433-08:00"} +{"id":"bd-to1u","title":"Run bump-version.sh test-squash","description":"Run ./scripts/bump-version.sh test-squash to update version in all files","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.06696-08:00","updated_at":"2025-12-21T13:53:41.841677-08:00","deleted_at":"2025-12-21T13:53:41.841677-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-hlsw.3","title":"Auto-recovery mode (bd sync --auto-recover)","description":"Add bd sync --auto-recover flag that: detects problematic sync state, backs up .beads/issues.db with timestamp, rebuilds DB from JSONL atomically, verifies consistency, reports what was fixed. 
Provides safety valve when sync integrity fails.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-14T10:40:20.599836875-07:00","updated_at":"2025-12-14T10:40:20.599836875-07:00","dependencies":[{"issue_id":"bd-hlsw.3","depends_on_id":"bd-hlsw","type":"parent-child","created_at":"2025-12-14T10:40:20.600435888-07:00","created_by":"daemon"}]} +{"id":"bd-kptp","title":"Merge: bd-qioh","description":"branch: polecat/Errata\ntarget: main\nsource_issue: bd-qioh\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:46:08.832073-08:00","updated_at":"2025-12-23T19:12:08.350136-08:00","closed_at":"2025-12-23T19:12:08.350136-08:00"} +{"id":"bd-3ggb","title":"Rebuild local binary","description":"Build and verify: go build -o bd ./cmd/bd \u0026\u0026 ./bd version","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:03.101428-08:00","updated_at":"2025-12-18T22:46:40.955673-08:00","closed_at":"2025-12-18T22:46:40.955673-08:00","dependencies":[{"issue_id":"bd-3ggb","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.748289-08:00","created_by":"daemon"},{"issue_id":"bd-3ggb","depends_on_id":"bd-4y4g","type":"blocks","created_at":"2025-12-18T22:43:20.950376-08:00","created_by":"daemon"}]} +{"id":"bd-icnf","title":"Add bd mol run command (bond + assign + pin)","description":"bd mol run = bond + assign root to caller + pin to startup mail. This is the Gas Town integration point. When agent restarts, check startup mail, find pinned molecule root, query bd ready for next step. 
Makes molecules immortal.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T23:52:17.462882-08:00","updated_at":"2025-12-21T00:07:25.803058-08:00","closed_at":"2025-12-21T00:07:25.803058-08:00","dependencies":[{"issue_id":"bd-icnf","depends_on_id":"bd-ffjt","type":"blocks","created_at":"2025-12-20T23:52:25.871742-08:00","created_by":"daemon"}]} +{"id":"bd-077e","title":"Add close_reason field to CLI schema and documentation","description":"PR #551 persists close_reason, but the CLI documentation may not mention this field as part of the issue schema.\n\n## Current State\n- close_reason is now persisted in database\n- `bd show --json` will return close_reason in JSON output\n- Documentation may not reflect this new field\n\n## What's Missing\n- CLI reference documentation for close_reason field\n- Schema documentation showing close_reason is a top-level issue field\n- Example output showing close_reason in bd show --json\n- bd close command documentation should mention close_reason parameter is optional\n\n## Suggested Action\n1. Update README.md or CLI reference docs to list close_reason as an issue field\n2. Add example to bd close documentation\n3. Update any type definitions or schema specs\n4. 
Consider adding close_reason to verbose list output (bd list --verbose)","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-14T14:25:28.448654-08:00","updated_at":"2025-12-14T14:25:28.448654-08:00","dependencies":[{"issue_id":"bd-077e","depends_on_id":"bd-z86n","type":"discovered-from","created_at":"2025-12-14T14:25:28.449968-08:00","created_by":"stevey"}]} +{"id":"bd-kwjh.2","title":".beads-ephemeral/ storage backend","description":"Implement ephemeral storage layer for wisps.\n\n## Requirements\n- New storage location: .beads-ephemeral/issues.jsonl (sibling to .beads/)\n- Gitignored by default (add to .beads/.gitignore)\n- Same JSONL format as regular beads\n- Config option: ephemeral.directory (relative path)\n- ephemeral.enabled config flag\n\n## Storage Behavior\n- Ephemeral issues have ephemeral: true field\n- No sync to remote (local only)\n- No daemon tracking needed (transient)\n\n## Implementation\n- Add EphemeralStore in storage package\n- Initialize on demand when --ephemeral flag used\n- Share Issue struct, just different storage path","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T00:06:56.248345-08:00","updated_at":"2025-12-22T00:13:51.281427-08:00","closed_at":"2025-12-22T00:13:51.281427-08:00","dependencies":[{"issue_id":"bd-kwjh.2","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:06:56.248725-08:00","created_by":"daemon"}]} +{"id":"bd-x1xs","title":"Work on beads-1ra: Add molecules.jsonl as separate catalo...","description":"Work on beads-1ra: Add molecules.jsonl as separate catalog file for template molecules","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T20:17:44.840032-08:00","updated_at":"2025-12-21T15:28:17.633716-08:00","closed_at":"2025-12-21T15:28:17.633716-08:00"} +{"id":"bd-9qj5","title":"Merge: bd-c7y5","description":"branch: polecat/toast\ntarget: main\nsource_issue: bd-c7y5\nrig: 
beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:45:02.626929-08:00","updated_at":"2025-12-23T21:21:57.699742-08:00","closed_at":"2025-12-23T21:21:57.699742-08:00"} +{"id":"bd-bw6","title":"Fix G104 errors unhandled in internal/storage/sqlite/queries.go:1181","description":"Linting issue: G104: Errors unhandled (gosec) at internal/storage/sqlite/queries.go:1181:4. Error: rows.Close()","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:09.008444133-07:00","updated_at":"2025-12-17T23:13:40.536627-08:00","closed_at":"2025-12-17T16:46:11.029355-08:00"} +{"id":"bd-an4s","title":"Version Bump: 0.32.1","description":"Release checklist for version 0.32.1. Patch release with MCP output control params and pin field fix.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-20T21:53:01.315592-08:00","updated_at":"2025-12-20T21:57:13.909864-08:00","closed_at":"2025-12-20T21:57:13.909864-08:00"} +{"id":"bd-pbh.18","title":"Restart beads daemon","description":"Kill any running daemons so they pick up the new version:\n```bash\nbd daemons killall\n```\n\nStart fresh daemon:\n```bash\nbd list # triggers daemon start\n```\n\nVerify daemon version:\n```bash\nbd version --daemon\n```\n\n\n```verify\nbd version --daemon 2\u003e\u00261 | grep -q '0.30.4'\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T21:19:11.11636-08:00","updated_at":"2025-12-17T21:46:46.364842-08:00","closed_at":"2025-12-17T21:46:46.364842-08:00","dependencies":[{"issue_id":"bd-pbh.18","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.116706-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.18","depends_on_id":"bd-pbh.17","type":"blocks","created_at":"2025-12-17T21:19:11.330411-08:00","created_by":"daemon"}]} +{"id":"bd-401h","title":"Work on beads-7jl: Fix Windows installer file locking iss...","description":"Work on beads-7jl: Fix Windows installer file 
locking issue (GH#652). Close file handle before extraction in postinstall.js. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:55:57.873767-08:00","updated_at":"2025-12-19T23:20:05.747664-08:00","closed_at":"2025-12-19T23:20:05.747664-08:00"} +{"id":"bd-r6a","title":"Redesign workflow system: templates as native Beads","description":"## Problem\n\nThe current workflow system (YAML templates in cmd/bd/templates/workflows/) is architecturally flawed:\n\n1. **Out-of-band data plane** - YAML files are a parallel system outside Beads itself\n2. **Heavyweight DSL** - YAML is gross; even TOML would have been better, but neither is ideal\n3. **Not graph-native** - Beads IS already a dependency graph with priorities, so why reinvent it?\n4. **Can't use bd commands on templates** - They're opaque YAML, not viewable/editable Beads\n\n## The Right Design\n\n**Templates should be Beads themselves.**\n\nA \"workflow template\" should be:\n- An epic marked as a template (via label, type, or prefix like `tpl-`)\n- Child issues with dependencies between them (using normal bd dep)\n- Titles and descriptions containing `{{variable}}` placeholders\n- Normal priorities that control serialization order\n\n\"Instantiation\" becomes:\n1. Clone the template subgraph (epic + children + dependencies)\n2. Substitute variables in titles/descriptions\n3. Generate new IDs for all cloned issues\n4. Return the new epic ID\n\n## Benefits\n\n- **No YAML** - Templates are just Beads\n- **Use existing tools** - `bd show`, `bd edit`, `bd dep` work on templates\n- **Graph-native** - Dependencies are real Beads dependencies\n- **Simpler codebase** - Remove all the YAML parsing/workflow code\n- **Composable** - Templates can reference other templates\n\n## Tasks\n\n1. Delete the YAML workflow system code (revert recent push + remove existing workflow code)\n2. Design template marking convention (label? type? 
id prefix?)\n3. Implement `bd template create` or `bd clone --as-template`\n4. Implement `bd template instantiate \u003ctemplate-id\u003e --var key=value`\n5. Migrate version-bump workflow to native Beads template\n6. Update documentation","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-17T22:41:57.359643-08:00","updated_at":"2025-12-18T17:42:26.000769-08:00","closed_at":"2025-12-18T13:47:04.632525-08:00"} +{"id":"bd-x3j8","title":"Update info.go versionChanges","description":"Add 0.32.1 entry to versionChanges map in cmd/bd/info.go","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T21:53:17.344841-08:00","updated_at":"2025-12-20T21:54:31.906761-08:00","closed_at":"2025-12-20T21:54:31.906761-08:00","dependencies":[{"issue_id":"bd-x3j8","depends_on_id":"bd-an4s","type":"parent-child","created_at":"2025-12-20T21:53:17.346736-08:00","created_by":"daemon"},{"issue_id":"bd-x3j8","depends_on_id":"bd-rgd7","type":"blocks","created_at":"2025-12-20T21:53:29.62309-08:00","created_by":"daemon"}]} +{"id":"bd-49kw","title":"Workaround for FastMCP outputSchema bug in Claude Code","description":"The beads MCP server (v0.23.1) successfully connects to Claude Code, but all tools fail to load with a schema validation error due to a bug in FastMCP 2.13.1.\n\nError: \"Invalid literal value, expected \\\"object\\\"\" in outputSchema.\n\nRoot Cause: FastMCP generates outputSchema with $ref at root level without \"type\": \"object\" for self-referential models (Issue).\n\nWorkaround: Use slash commands (/beads:ready) or wait for FastMCP fix.\n","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-20T18:55:39.041831-05:00","updated_at":"2025-12-23T21:22:15.889295-08:00","closed_at":"2025-12-23T20:42:16.593681-08:00"} +{"id":"bd-r46","title":"Support --reason flag in daemon mode for reopen command","description":"The reopen.go command has a TODO at line 61 to add reason as a comment once RPC supports AddComment. 
Currently --reason flag is ignored in daemon mode with a warning.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-21T18:55:10.773626-05:00","updated_at":"2025-12-21T21:47:15.43375-08:00","closed_at":"2025-12-21T21:47:15.43375-08:00"} +{"id":"bd-io8c","title":"Improve test coverage for internal/syncbranch (33.0% → 70%)","description":"Improve test coverage for internal/syncbranch package from 27% to 70%.\n\n## Current State\n- Coverage: 27.0%\n- Files: syncbranch.go, worktree.go\n- Tests: syncbranch_test.go (basic tests exist)\n\n## Functions Needing Tests\n\n### syncbranch.go (config management)\n- [x] ValidateBranchName - has tests\n- [ ] Get - needs store mock tests\n- [ ] GetFromYAML - needs YAML parsing tests\n- [ ] IsConfigured - needs file system tests\n- [ ] IsConfiguredWithDB - needs DB path tests\n- [ ] Set - needs store mock tests\n- [ ] Unset - needs store mock tests\n\n### worktree.go (git operations) - PRIORITY\n- [ ] CommitToSyncBranch - needs git repo fixture tests\n- [ ] PullFromSyncBranch - needs merge scenario tests\n- [ ] CheckDivergence - needs ahead/behind tests\n- [ ] ResetToRemote - needs reset scenario tests\n- [ ] performContentMerge - needs 3-way merge tests\n- [ ] extractJSONLFromCommit - needs git show tests\n- [ ] hasChangesInWorktree - needs dirty state tests\n- [ ] commitInWorktree - needs commit scenario tests\n\n## Implementation Guide\n\n1. **Use testutil fixtures:**\n ```go\n import \"github.com/steveyegge/beads/internal/testutil/fixtures\"\n \n func TestCommitToSyncBranch(t *testing.T) {\n repo := fixtures.NewGitRepo(t)\n defer repo.Cleanup()\n // ... test scenarios\n }\n ```\n\n2. **Test scenarios for worktree.go:**\n - Clean commit (no conflicts)\n - Non-fast-forward push (diverged)\n - Merge conflict resolution\n - Empty changes (nothing to commit)\n\n3.
**Mock storage for syncbranch.go:**\n ```go\n store := memory.New()\n // Set up test config\n syncbranch.Set(ctx, store, \"beads-sync\")\n ```\n\n## Success Criteria\n- Coverage ≥ 70%\n- All public functions have at least one test\n- Edge cases covered for git operations\n- Tests pass with `go test -race ./internal/syncbranch`\n\n## Run Tests\n```bash\ngo test -v -cover ./internal/syncbranch\ngo test -race ./internal/syncbranch\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-13T20:43:02.079145-08:00","updated_at":"2025-12-23T13:46:10.191435-08:00","closed_at":"2025-12-23T13:46:10.191435-08:00","dependencies":[{"issue_id":"bd-io8c","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.213092-08:00","created_by":"daemon"}]} +{"id":"bd-411u","title":"Document BEADS_DIR pattern for multi-agent workspaces (Gas Town)","description":"Gas Town and similar multi-agent systems need to configure separate beads databases per workspace/rig, distinct from any project-level beads.\n\n## Use Case\n\nIn Gas Town:\n- Each 'rig' (managed project) has multiple agents (polecats, refinery, witness)\n- All agents in a rig should share a single beads database at the rig level\n- This should be separate from any .beads/ the project itself uses\n- The BEADS_DIR env var enables this\n\n## Documentation Needed\n\n1. Add a section to docs explaining BEADS_DIR for multi-agent setups\n2. Example: setting BEADS_DIR in agent startup scripts/hooks\n3.
Clarify interaction with project-level .beads/ (BEADS_DIR takes precedence)\n\n## Current Support\n\nAlready implemented in internal/beads/beads.go:FindDatabasePath():\n- BEADS_DIR env var is checked first (preferred)\n- BEADS_DB env var still supported (deprecated)\n- Falls back to .beads/ search in tree\n\nJust needs documentation for the multi-agent workspace pattern.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-15T22:08:22.158027-08:00","updated_at":"2025-12-15T22:08:22.158027-08:00"} +{"id":"bd-3uje","title":"Test issue for pin --for","description":"Testing the pin --for flag","status":"tombstone","priority":3,"issue_type":"task","created_at":"2025-12-22T02:53:43.075522-08:00","updated_at":"2025-12-22T02:54:07.973855-08:00","deleted_at":"2025-12-22T02:54:07.973855-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-t3cf","title":"Update CHANGELOG.md for 0.33.2","description":"In CHANGELOG.md:\n\n1. Change `## [Unreleased]` section header to `## [0.33.2] - 2025-12-21`\n2. Add new empty `## [Unreleased]` section above it\n3. Review and clean up the changes list\n\nFormat:\n```markdown\n## [Unreleased]\n\n## [0.33.2] - 2025-12-21\n\n### Added\n- ...\n\n### Changed\n- ...\n\n### Fixed\n- ...\n```","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.7614-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-o9o","title":"Exclude pinned issues from bd ready","description":"Update bd ready to exclude pinned issues. 
Pinned issues are context markers, not work items, and should never appear in the ready-to-work list.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:41.979073-08:00","updated_at":"2025-12-21T11:29:41.190567-08:00","closed_at":"2025-12-21T11:29:41.190567-08:00","dependencies":[{"issue_id":"bd-o9o","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.392931-08:00","created_by":"daemon"},{"issue_id":"bd-o9o","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.612655-08:00","created_by":"daemon"}]} +{"id":"bd-kpy","title":"Sync race: rebase-based divergence recovery resurrects tombstones","description":"## Problem\nWhen two repos sync simultaneously, tombstones can be resurrected:\n\n1. Repo A deletes issue (creates tombstone), pushes to sync branch\n2. Repo B (with 'closed' status) exports and tries to push\n3. Push fails (non-fast-forward)\n4. fetchAndRebaseInWorktree does git rebase\n5. Git rebase applies B's 'closed' patch on top of A's 'tombstone'\n6. TEXT-level rebase doesn't invoke beads merge driver\n7. 'closed' overwrites 'tombstone' = resurrection\n\n## Root Cause\nCommitToSyncBranch uses git rebase for divergence recovery, but rebase is text-level, not content-level. 
The proper content-level merge in PullFromSyncBranch handles tombstones correctly, but it runs AFTER the problematic push.\n\n## Proposed Fix\nOption 1: Don't push in CommitToSyncBranch - let PullFromSyncBranch handle merge+push\nOption 2: Replace git rebase with content-level merge in fetchAndRebaseInWorktree\nOption 3: Reorder sync steps: Export → Pull/Merge → Commit → Push\n\n## Workaround Applied\nExcluded tombstones from orphan detection warnings (commit 1e97d9cc).\n\nSee also: bd-3852 (Add orphan detection migration)","status":"open","priority":2,"issue_type":"bug","created_at":"2025-12-17T23:29:33.049272-08:00","updated_at":"2025-12-17T23:29:33.049272-08:00"} +{"id":"bd-ffjt","title":"Unify template.go and mol.go under bd mol","description":"Consolidate the two DAG-template systems into one under the mol command. mol.go (on rictus branch) has the right UX (catalog/show/bond), template.go has the mechanics. Merge them, deprecate bd template commands.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-20T23:52:13.208972-08:00","updated_at":"2025-12-21T00:01:59.283765-08:00","closed_at":"2025-12-21T00:01:59.283765-08:00"} +{"id":"bd-2vh3.1","title":"Tier 1: Ephemeral repo routing","description":"Add routing.ephemeral config option to route ephemeral=true issues to separate location.\n\n## Changes Required\n\n1. Add `routing.ephemeral` config option (default: empty = disabled)\n2. Update routing logic in `determineRepo()` to check ephemeral flag\n3. Update `bd create` to respect ephemeral routing\n4. Update import/export for multi-location support\n5.
Ephemeral repo can be:\n - Separate git repo (~/.beads-ephemeral)\n - Non-git directory (just filesystem)\n - Same repo, different branch (future)\n\n## Config\n\n```bash\nbd config set routing.ephemeral \"~/.beads-ephemeral\"\n```\n\n## Acceptance Criteria\n\n- `bd create \"test\" --ephemeral` creates in ephemeral repo when configured\n- `bd list` shows issues from both repos\n- Ephemeral repo never synced to remote","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T12:57:26.648052-08:00","updated_at":"2025-12-21T12:59:01.815357-08:00","deleted_at":"2025-12-21T12:59:01.815357-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-mrpw","title":"Run tests and verify build","description":"Run the test suite to verify nothing is broken:\n\n```bash\n./scripts/test.sh\n```\n\nOr manually:\n```bash\ngo build ./cmd/bd/...\ngo test ./...\n```\n\nFix any failures before proceeding.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761563-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-8wgo","title":"bd merge omits priority:0 due to omitempty JSON tag","description":"GitHub issue #671. The merge code in internal/merge/merge.go uses 'omitempty' on the Priority field, which causes priority:0 (P0/critical) to be dropped from JSON output since 0 is Go's zero value for int. Fix: either remove omitempty from Priority field or use a pointer (*int). 
This affects the git merge driver and causes P0 issues to lose their priority.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T14:35:15.083146-08:00","updated_at":"2025-12-21T15:41:14.522554-08:00","closed_at":"2025-12-21T15:41:14.522554-08:00"} +{"id":"bd-r6a.5","title":"Update documentation for template system","description":"Update AGENTS.md and help text to document the new template system:\n\n- How to create a template (epic + template label + child issues)\n- How to define variables (just use {{name}} placeholders)\n- How to instantiate (bd template instantiate)\n- Migration from YAML workflows (if any users had custom ones)","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-17T22:43:55.461345-08:00","updated_at":"2025-12-18T17:42:26.001474-08:00","closed_at":"2025-12-18T13:46:53.446262-08:00","dependencies":[{"issue_id":"bd-r6a.5","depends_on_id":"bd-r6a.3","type":"blocks","created_at":"2025-12-17T22:44:03.632404-08:00","created_by":"daemon"},{"issue_id":"bd-r6a.5","depends_on_id":"bd-r6a.4","type":"blocks","created_at":"2025-12-17T22:44:03.788517-08:00","created_by":"daemon"},{"issue_id":"bd-r6a.5","depends_on_id":"bd-r6a","type":"parent-child","created_at":"2025-12-17T22:43:55.461763-08:00","created_by":"daemon"}]} +{"id":"bd-nl2","title":"No logging/debugging for tombstone resurrection events","description":"Per the design document bd-zvg Open Question 1: Should resurrection log a warning? Recommendation was Yes. Currently, when an expired tombstone loses to a live issue (resurrection), there is no logging or debugging output. This makes it hard to understand why an issue reappeared. Recommendation: Add optional debug logging when resurrection occurs, e.g., Issue bd-abc resurrected (tombstone expired). 
Files: internal/merge/merge.go:359-366, 371-378, 400-405, 410-415","status":"open","priority":4,"issue_type":"feature","created_at":"2025-12-05T16:36:52.27525-08:00","updated_at":"2025-12-05T16:36:52.27525-08:00"} +{"id":"bd-rnnr","title":"BondRef data model for compound lineage","description":"Add data model support for tracking compound molecule lineage.\n\nNEW FIELDS on Issue:\n bonded_from: []BondRef // For compounds: constituent protos\n\nNEW TYPE:\n type BondRef struct {\n ProtoID string // Source proto ID\n BondType string // sequential, parallel, conditional\n BondPoint string // Attachment site (issue ID or empty for root)\n }\n\nJSONL SERIALIZATION:\n {\n \"id\": \"proto-feature-tested\",\n \"title\": \"Feature with tests\",\n \"bonded_from\": [\n {\"proto_id\": \"proto-feature\", \"bond_type\": \"root\"},\n {\"proto_id\": \"proto-testing\", \"bond_type\": \"sequential\"}\n ],\n ...\n }\n\nQUERIES:\n- GetCompoundConstituents(id) → []BondRef\n- IsCompound(id) → bool\n- GetCompoundsUsing(protoID) → []Issue // Reverse lookup","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T00:59:38.582509-08:00","updated_at":"2025-12-21T01:19:43.922416-08:00","closed_at":"2025-12-21T01:19:43.922416-08:00","dependencies":[{"issue_id":"bd-rnnr","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.234246-08:00","created_by":"daemon"}]} +{"id":"bd-r4sn","title":"Phase 2.5: TOON-based daemon sync","description":"Implement TOON-native daemon sync (replaces JSONL sync machinery).\n\n## Overview\nDaemon sync is the final integration point.
Replace export/import/merge machinery with TOON-native sync, building on deletion tracking (2.3) and merge optimization (2.4).\n\n## Required Work\n\n### 2.5.1 TOON-based Daemon Sync\n- [ ] Understand current JSONL sync machinery (export.go, import.go, merge.go)\n- [ ] Replace export step with TOON encoding (EncodeTOON)\n- [ ] Replace import step with TOON decoding (DecodeTOON)\n- [ ] Replace merge step with TOON-aware 3-way merge\n- [ ] Update daemon auto-sync to read/write TOON\n- [ ] Verify 5-second debounce still works\n\n### 2.5.2 Deletion Sync Integration\n- [ ] Load deletions.toon during import phase\n- [ ] Apply deletions after merging issues\n- [ ] Ensure deletion TTL respects daemon schedule\n\n### 2.5.3 Testing\n- [ ] Unit tests for daemon sync with TOON\n- [ ] Integration tests with actual daemon operations\n- [ ] Multi-clone sync scenarios with concurrent edits\n- [ ] Performance comparison with JSONL sync\n- [ ] Long-running daemon stability tests\n\n## Success Criteria\n- Daemon reads/writes TOON format (not JSONL)\n- Sync latency comparable to JSONL (\u003c100ms)\n- All 70+ tests passing\n- bdt commands work seamlessly with daemon\n- Multi-clone sync scenarios work correctly","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:43:20.33132177-07:00","updated_at":"2025-12-21T14:42:25.274362-08:00","closed_at":"2025-12-21T14:42:25.274362-08:00","dependencies":[{"issue_id":"bd-r4sn","depends_on_id":"bd-uz8r","type":"blocks","created_at":"2025-12-19T14:43:20.347724699-07:00","created_by":"daemon"},{"issue_id":"bd-r4sn","depends_on_id":"bd-uwkp","type":"blocks","created_at":"2025-12-19T14:43:20.355379309-07:00","created_by":"daemon"}]} +{"id":"bd-vpan","title":"Re: Thread Test 2","description":"Got your message. 
Testing reply feature.","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:21:29.144352-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-vpan","depends_on_id":"bd-x36g","type":"replies-to","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} +{"id":"bd-w8g0","title":"test pin issue","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-20T22:44:27.963361-08:00","updated_at":"2025-12-20T22:44:57.977229-08:00","deleted_at":"2025-12-20T22:44:57.977229-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-qqc.1","title":"Update version to {{version}} in version.go","description":"Edit cmd/bd/version.go line 17:\n\n```go\nVersion = \"{{version}}\"\n```\n\nVerify with: `grep 'Version =' cmd/bd/version.go`","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:59:13.887087-08:00","updated_at":"2025-12-18T23:34:18.630067-08:00","closed_at":"2025-12-18T22:41:41.82664-08:00","dependencies":[{"issue_id":"bd-qqc.1","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T12:59:13.887655-08:00","created_by":"stevey"}]} +{"id":"bd-1slh","title":"Investigate charmbracelet-based TUI for beads","description":"Now that we've merged the create-form command (PR #603) which uses charmbracelet/huh, investigate whether beads should have a more comprehensive TUI.\n\nConsiderations:\n- Should this be in core or a separate binary (bd-tui)?\n- What functionality would benefit from a TUI? 
(list view, issue details, search, bulk operations)\n- Plugin/extension architecture vs build tags vs separate binary\n- Dependency cost vs user experience tradeoff\n- Target audience: humans who want interactive workflows vs CLI/scripting users\n\nRelated: PR #603 added charmbracelet/huh dependency for create-form command.","notes":"Foundation is in place (lipgloss, huh), but not a priority right now","status":"deferred","priority":3,"issue_type":"feature","created_at":"2025-12-17T14:20:51.503563-08:00","updated_at":"2025-12-20T23:31:34.354023-08:00"} +{"id":"bd-kwro.5","title":"Graph Link: supersedes for version chains","description":"Implement supersedes link type for version tracking.\n\nNew command:\n- bd supersede \u003cid\u003e --with \u003cnew\u003e - marks id as superseded by new\n- Auto-closes the superseded issue\n\nQuery support:\n- bd show \u003cid\u003e shows 'Superseded by: \u003cnew\u003e'\n- bd show \u003cnew\u003e shows 'Supersedes: \u003cid\u003e'\n- bd list --superseded shows version chains\n\nStorage:\n- superseded_by column pointing to replacement issue\n\nUseful for design docs, specs, and evolving artifacts.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:01:41.749294-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-bdc9","title":"Update Homebrew formula","description":"Update the Homebrew tap with new version:\n\n```bash\n./scripts/update-homebrew.sh 0.33.2\n```\n\nThis script waits for GitHub Actions to complete (~5 min), then updates the formula with new SHA256 hashes.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.762399-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} 
+{"id":"bd-lo4","title":"Test pinned issue","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-18T21:44:49.031385-08:00","updated_at":"2025-12-18T21:47:25.055109-08:00","deleted_at":"2025-12-18T21:47:25.055109-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-aydr.5","title":"Enhance bd doctor to suggest reset for broken states","description":"Update bd doctor to detect severely broken states and suggest reset.\n\n## Detection Criteria\nSuggest reset when:\n- Multiple unfixable errors detected\n- Corrupted JSONL that can't be repaired\n- Schema version mismatch that can't be migrated\n- Daemon state inconsistent and unkillable\n\n## Implementation\nAdd to doctor's check/fix flow:\n```go\nif unfixableErrors \u003e threshold {\n suggest('State may be too broken to fix. Consider: bd reset')\n}\n```\n\n## Output Example\n```\n✗ Found 5 unfixable errors\n \n Your beads state may be too corrupted to repair.\n Consider running 'bd reset' to start fresh.\n (Use 'bd reset --backup' to save current state first)\n```\n\n## Notes\n- Don't auto-run reset, just suggest\n- This is lower priority, can be done in parallel with main work","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-13T08:44:55.591986+11:00","updated_at":"2025-12-13T06:24:29.561624-08:00","closed_at":"2025-12-13T10:17:23.4522+11:00","dependencies":[{"issue_id":"bd-aydr.5","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:44:55.59239+11:00","created_by":"daemon"}]} +{"id":"bd-o7ik","title":"Priority: refactor mol.go then bd squash","description":"Two tasks:\n\n1. bd-cnwx - Refactor mol.go (1200+ lines, split by subcommand)\n2.
bd-2vh3 - Ephemeral cleanup (bd cleanup --ephemeral)\n\nRefactor first - smaller, unblocks easier review of future mol work.\n\n- Mayor","status":"closed","priority":2,"issue_type":"message","created_at":"2025-12-21T11:31:38.287244-08:00","updated_at":"2025-12-21T12:59:32.937472-08:00","closed_at":"2025-12-21T12:59:32.937472-08:00"} +{"id":"bd-zf5w","title":"bd mail uses git user.name for sender instead of BEADS_AGENT_NAME","description":"When sending mail via `bd mail send`, the sender field in the stored issue uses git config user.name instead of the BEADS_AGENT_NAME environment variable.\n\nReproduction:\n1. Set BEADS_AGENT_NAME=gastown-alpha\n2. Run: bd mail send mayor/ -s 'Test' -m 'Body'\n3. Check the issue.jsonl: sender is 'Steve Yegge' (git user.name) not 'gastown-alpha'\n\nExpected: The sender field should use BEADS_AGENT_NAME when set.\n\nThis breaks the mail system for multi-agent workflows where agents need to identify themselves by their role (polecat, refinery, etc.) rather than the human user's git identity.\n\nRelated: gt mail routing integration with Gas Town","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-20T21:46:33.646746-08:00","updated_at":"2025-12-20T21:59:25.771325-08:00","closed_at":"2025-12-20T21:59:25.771325-08:00"} +{"id":"bd-n777","title":"Timer beads for scheduled agent callbacks","description":"## Problem\n\nAgents frequently need to wait for external events (CI completion, PR reviews, artifact builds) but have no good mechanism:\n- `sleep N` blocks and is unreliable (often times out at 8+ minutes)\n- Polling wastes context and is easy to forget\n- No way to survive session restarts\n\n## Proposal: Timer Beads\n\nA new bead type or field that represents a scheduled callback:\n\n### Creating timers\n```bash\nbd timer create --in 30s --callback \"Check CI run 12345\" --issue bd-xyz\nbd timer create --at \"2025-12-20T08:00:00\" --callback \"Morning standup\"\nbd timer create --in 5m --on-expire \"tmux send-keys 
-t dave 'bd show bd-xyz'\"\n```\n\n### Timer storage\n- Store in beads (survives restarts)\n- Fields: `expires_at`, `callback_description`, `on_expire_command`, `linked_issue`\n- Status: pending, fired, cancelled\n\n### Deacon integration\nThe Deacon daemon monitors timer beads:\n1. Wakes on next timer expiry\n2. Executes `on_expire` command (e.g., tmux send-keys to interrupt agent)\n3. Marks timer as fired\n4. Optionally updates linked issue\n\n### Use cases\n- CI monitoring: \"ping me when build completes\"\n- PR reviews: \"check back in 1 hour\"\n- Scheduled tasks: \"remind me at EOD to sync\"\n- Blocking waits: agent registers callback instead of sleeping\n\n## Acceptance criteria\n- [ ] Timer bead type or field design\n- [ ] `bd timer create/list/cancel` commands\n- [ ] Deacon timer monitoring loop\n- [ ] tmux integration for agent interrupts\n- [ ] Survives daemon restarts","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-19T23:05:33.051861-08:00","updated_at":"2025-12-21T17:19:48.087482-08:00","closed_at":"2025-12-21T17:19:48.087482-08:00"} +{"id":"bd-kwro.10","title":"Tests for messaging and graph links","description":"Comprehensive test coverage for all new features.\n\nTest files:\n- cmd/bd/mail_test.go - mail command tests\n- internal/storage/sqlite/graph_links_test.go - graph link tests\n- internal/hooks/hooks_test.go - hook execution tests\n\nTest cases:\n- Mail send/inbox/read/ack lifecycle\n- Thread creation and traversal (replies_to)\n- Bidirectional relates_to\n- Duplicate marking and queries\n- Supersedes chains\n- Ephemeral cleanup\n- Identity resolution priority\n- Hook execution (mock hooks)\n- Schema migration preserves data\n\nTarget: \u003e80% coverage on new code","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T03:02:34.050136-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch 
delete","original_type":"task"} +{"id":"bd-rl5t","title":"Integration test: agent waits for CI via gate","description":"End-to-end test of the gate workflow.\n\n## Test Scenario\n1. Agent creates gate: bd gate create --await gh:run:123 --timeout 5m --notify beads/dave\n2. Agent writes handoff and exits\n3. Deacon patrol checks gate condition\n4. (Mock) GitHub run completes\n5. Deacon notifies waiter and closes gate\n6. New agent session reads mail and resumes\n\n## Test Requirements\n- Mock GitHub API responses\n- Test timeout path\n- Test multiple waiters\n- Verify mail notifications sent","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:41.725752-08:00","updated_at":"2025-12-23T12:24:08.346347-08:00","closed_at":"2025-12-23T12:24:08.346347-08:00","dependencies":[{"issue_id":"bd-rl5t","depends_on_id":"bd-ykqu","type":"blocks","created_at":"2025-12-23T11:44:56.753264-08:00","created_by":"daemon"},{"issue_id":"bd-rl5t","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:53.157037-08:00","created_by":"daemon"},{"issue_id":"bd-rl5t","depends_on_id":"bd-2l03","type":"blocks","created_at":"2025-12-23T11:44:56.674866-08:00","created_by":"daemon"}]} +{"id":"bd-4bsb","title":"Code review findings: mol squash deletion bypasses tombstones","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T13:57:14.154316-08:00","updated_at":"2025-12-21T18:01:06.811216-08:00","closed_at":"2025-12-21T18:01:06.811216-08:00","dependencies":[{"issue_id":"bd-4bsb","depends_on_id":"bd-2vh3.3","type":"discovered-from","created_at":"2025-12-21T13:57:14.155488-08:00","created_by":"daemon"}]} +{"id":"bd-e7ou","title":"Fix --as flag: uses title instead of ID in mol bond","description":"In bondProtoProto, the --as flag is documented as 'Custom ID for compound proto' but the implementation uses it as the title, not the issue ID.\n\n**Current behavior (mol.go:637-638):**\n```go\nif customID != '' {\n compoundTitle = 
customID // Used as title, not ID\n}\n```\n\n**Options:**\n1. Change flag description to say 'Custom title' (documentation fix)\n2. Actually use it as a custom ID prefix or full ID (feature change)\n3. Add separate --title flag and make --as actually set ID\n\nRecommend option 1 for simplest fix - change 'Custom ID' to 'Custom title' in the flag description.","status":"closed","priority":3,"issue_type":"bug","created_at":"2025-12-21T10:22:59.069368-08:00","updated_at":"2025-12-21T21:18:48.514513-08:00","closed_at":"2025-12-21T21:18:48.514513-08:00"} +{"id":"bd-rdzk","title":"Merge: bd-rgyd","description":"branch: polecat/Splitter\ntarget: main\nsource_issue: bd-rgyd\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:41:15.051877-08:00","updated_at":"2025-12-23T19:12:08.351145-08:00","closed_at":"2025-12-23T19:12:08.351145-08:00"} +{"id":"bd-u2sc","title":"GH#692: Code quality and refactoring improvements","description":"Epic for implementing refactoring suggestions from GitHub issue #692 (rsnodgrass). These are code quality improvements that don't change functionality but improve maintainability, type safety, and performance.\n\nOriginal issue: https://github.com/steveyegge/beads/issues/692\n\nHigh priority items:\n1. Replace map[string]interface{} with typed structs for JSON output\n2. Adopt slices.SortFunc instead of sort.Slice (Go 1.21+)\n3. Split large files (sync.go, init.go, show.go)\n4. Introduce slog for structured logging in daemon\n\nLower priority:\n5. Further CLI helper extraction\n6. Preallocate slices in hot paths\n7. Polish items (error wrapping, table-driven parsing)","status":"closed","priority":3,"issue_type":"epic","created_at":"2025-12-22T14:26:31.630004-08:00","updated_at":"2025-12-23T22:07:32.477628-08:00","closed_at":"2025-12-23T22:07:32.477628-08:00"} +{"id":"bd-muw","title":"Add empty tasks validation in workflow create","description":"workflow.go:321 will panic if wf.Tasks is empty. 
Add validation that len(wf.Tasks) \u003e 0 before accessing wf.Tasks[0].","status":"closed","priority":3,"issue_type":"bug","created_at":"2025-12-17T22:23:00.75707-08:00","updated_at":"2025-12-17T22:34:07.281133-08:00","closed_at":"2025-12-17T22:34:07.281133-08:00"} +{"id":"bd-kwjh.7","title":"bd mol burn deletes ephemeral without digest","description":"Update bd mol burn to handle ephemeral molecules.\n\n## Behavior for Ephemeral Molecules\n- Delete wisp from .beads-ephemeral/\n- NO digest created (unlike squash)\n- Used for abandoned/crashed cycles\n\n## Difference from Squash\n| Command | Ephemeral Behavior |\n|---------|-------------------|\n| squash | Delete wisp, create digest |\n| burn | Delete wisp, no trace |\n\n## Implementation\n- Detect if molecule is ephemeral\n- Delete from ephemeral store\n- Skip digest creation","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T00:07:32.020144-08:00","updated_at":"2025-12-22T01:11:05.487605-08:00","closed_at":"2025-12-22T01:11:05.487605-08:00","dependencies":[{"issue_id":"bd-kwjh.7","depends_on_id":"bd-kwjh.2","type":"blocks","created_at":"2025-12-22T00:07:32.023217-08:00","created_by":"daemon"},{"issue_id":"bd-kwjh.7","depends_on_id":"bd-kwjh","type":"parent-child","created_at":"2025-12-22T00:07:32.022117-08:00","created_by":"daemon"}]} +{"id":"bd-fi05","title":"bd sync fails with orphaned issues and duplicate ID conflict","description":"After fixing the deleted_at TEXT column scanning bug (commit 18b1eb2), bd sync still fails with two issues:\n\n1. Orphan Detection Warning: 12 orphaned child issues whose parents no longer exist (bd-cb64c226.* and bd-cbed9619.*)\n\n2. 
Import Failure: UNIQUE constraint failed for bd-360 - this tombstone exists in both DB and JSONL\n\nError: \"Import failed: error creating depth-0 issues: bulk insert issues: failed to insert issue bd-360: sqlite3: constraint failed: UNIQUE constraint failed: issues.id\"\n\nFix options:\n- Delete orphaned child issues with bd delete\n- Resolve bd-360 duplicate (in deletions.jsonl vs tombstone in DB)\n- Reset sync branch: git branch -f beads-sync main \u0026\u0026 git push --force-with-lease origin beads-sync","notes":"Fixed tombstone constraint violation bug. When deleting closed issues, the CHECK constraint (status = 'closed') = (closed_at IS NOT NULL) was violated because CreateTombstone didn't clear closed_at. Fix: set closed_at = NULL in tombstone creation SQL.\n\nThe sync data corruption (orphaned issues in beads-sync branch) requires manual cleanup: reset sync branch with 'git branch -f beads-sync main \u0026\u0026 git push --force-with-lease origin beads-sync'","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-13T07:14:33.831346-08:00","updated_at":"2025-12-13T10:50:48.545465-08:00","closed_at":"2025-12-13T07:30:33.843986-08:00"} +{"id":"bd-zc3","title":"Add --pinned and --no-pinned flags to bd list","description":"Add filtering flags to bd list: --pinned shows only pinned issues, --no-pinned excludes pinned issues. 
Default behavior shows all issues with a pin indicator.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T23:33:29.518028-08:00","updated_at":"2025-12-21T11:30:01.484978-08:00","closed_at":"2025-12-21T11:30:01.484978-08:00","dependencies":[{"issue_id":"bd-zc3","depends_on_id":"bd-0vg","type":"blocks","created_at":"2025-12-18T23:33:56.256764-08:00","created_by":"daemon"},{"issue_id":"bd-zc3","depends_on_id":"bd-7h5","type":"blocks","created_at":"2025-12-18T23:34:07.486361-08:00","created_by":"daemon"}]} +{"id":"bd-4opy","title":"Refactor long SQLite test files","description":"The SQLite test files have grown unwieldy. Review and refactor.\n\n## Goals\n- Break up large test files into focused modules\n- Improve test organization by feature area\n- Reduce test duplication\n- Make tests easier to maintain and extend\n\n## Areas to Review\n- main_test.go (likely the largest)\n- Any test files over 500 lines\n- Shared test fixtures and helpers\n- Test coverage gaps\n\n## Approach\n- Group tests by feature (CRUD, sync, queries, transactions)\n- Extract common fixtures to test helpers\n- Consider table-driven tests where appropriate\n- Ensure each test file has clear focus\n\n## Reference\nSee docs/dev-notes/ for any existing test audit notes","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T23:41:47.025285-08:00","updated_at":"2025-12-23T01:33:25.733299-08:00","closed_at":"2025-12-23T01:33:25.733299-08:00"} +{"id":"bd-cb64c226.8","title":"Update Metrics and Health Endpoints","description":"Remove cache-related metrics from health/metrics endpoints","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:55:49.212047-07:00","updated_at":"2025-12-17T23:18:29.110022-08:00","deleted_at":"2025-12-17T23:18:29.110022-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-qqc","title":"Release v{{version}}","description":"Version bump workflow for beads release 
{{version}}.\n\n## Variables\n- `{{version}}` - The new version number (e.g., 0.31.0)\n- `{{date}}` - Release date (YYYY-MM-DD format)\n\n## Workflow Steps\n1. Kill running daemons\n2. Run tests and linting\n3. Bump version in all files (10 files total)\n4. Update cmd/bd/info.go with release notes\n5. Commit and push version bump\n6. Create and push git tag\n7. Update Homebrew formula\n8. Upgrade local Homebrew installation\n9. Verify installation\n\n## Files Updated by bump-version.sh\n- cmd/bd/version.go\n- .claude-plugin/plugin.json\n- .claude-plugin/marketplace.json\n- integrations/beads-mcp/pyproject.toml\n- integrations/beads-mcp/src/beads_mcp/__init__.py\n- README.md\n- npm-package/package.json\n- cmd/bd/templates/hooks/* (4 files)\n- CHANGELOG.md\n\n## Manual Step Required\n- cmd/bd/info.go - Add versionChanges entry with release notes","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-18T12:59:00.610371-08:00","updated_at":"2025-12-20T17:59:26.263219-08:00","closed_at":"2025-12-20T01:18:46.71424-08:00"} +{"id":"bd-0zp7","title":"Add missing hook calls in mail reply and ack","description":"The mail commands are missing hook calls:\n\n1. runMailReply (mail.go:525-672) creates a message but doesn't call hookRunner.Run(hooks.EventMessage, ...) after creating the reply in direct mode (around line 640)\n\n2. runMailAck (mail.go:432-523) closes messages but doesn't call hookRunner.Run(hooks.EventClose, ...) 
after closing each message (around line 487 for daemon mode, 493 for direct mode)\n\nThis means GGT hooks won't fire for replies or message acknowledgments.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-16T20:52:53.069412-08:00","updated_at":"2025-12-17T23:13:40.532054-08:00","closed_at":"2025-12-17T17:22:59.368024-08:00"} +{"id":"bd-hvng","title":"Merge: bd-w193","description":"branch: polecat/nux\ntarget: main\nsource_issue: bd-w193\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:23:47.496139-08:00","updated_at":"2025-12-20T23:17:26.996479-08:00","closed_at":"2025-12-20T23:17:26.996479-08:00"} +{"id":"bd-n6fm","title":"witness Handoff","description":"attached_molecule: bd-ndye\nattached_at: 2025-12-23T12:35:02Z","status":"pinned","priority":2,"issue_type":"task","created_at":"2025-12-23T04:35:02.675024-08:00","updated_at":"2025-12-23T04:35:02.99197-08:00"} +{"id":"bd-gfo3","title":"Merge: bd-ykd9","description":"branch: polecat/Doctor\ntarget: main\nsource_issue: bd-ykd9\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:34:43.778808-08:00","updated_at":"2025-12-23T19:12:08.353427-08:00","closed_at":"2025-12-23T19:12:08.353427-08:00"} +{"id":"bd-y2v","title":"Refactor duplicate JSONL-from-git parsing code","description":"Both readFirstIssueFromGit() in init.go and importFromGit() in autoimport.go have similar code patterns for:\n1. Running git show \u003cref\u003e:\u003cpath\u003e\n2. Scanning the output with bufio.Scanner\n3. 
Parsing JSON lines\n\nCould be refactored to share a helper like:\n- readJSONLFromGit(gitRef, path string) ([]byte, error)\n- Or a streaming version: streamJSONLFromGit(gitRef, path string) (io.Reader, error)\n\nFiles:\n- cmd/bd/autoimport.go:225-256 (importFromGit)\n- cmd/bd/init.go:1212-1243 (readFirstIssueFromGit)\n\nPriority is low since code duplication is minimal and both functions work correctly.","status":"in_progress","priority":2,"issue_type":"task","created_at":"2025-12-05T14:51:18.41124-08:00","updated_at":"2025-12-23T22:29:35.786445-08:00"} +{"id":"bd-8e0q","title":"Merge: beads-ocs","description":"branch: polecat/valkyrie\ntarget: main\nsource_issue: beads-ocs\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-19T23:24:45.281478-08:00","updated_at":"2025-12-20T23:17:26.995706-08:00","closed_at":"2025-12-20T23:17:26.995706-08:00"} +{"id":"bd-aec5439f","title":"Update LINTING.md with current baseline","description":"After cleanup, document the remaining acceptable baseline in LINTING.md so we can track regression.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-27T18:53:10.38679-07:00","updated_at":"2025-12-17T22:58:34.564854-08:00","closed_at":"2025-12-17T22:58:34.564854-08:00"} +{"id":"bd-du9h","title":"Add Validation type and validations field to Issue","description":"Add Validation struct (Validator *EntityRef, Outcome string, Timestamp time.Time, Score *float32) and Validations []Validation field to Issue. Tracks who validated/approved work completion. 
Core to HOP proof-of-stake concept - validators stake reputation on approvals.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-22T17:53:37.725701-08:00","updated_at":"2025-12-22T20:08:59.925028-08:00","closed_at":"2025-12-22T20:08:59.925028-08:00","dependencies":[{"issue_id":"bd-du9h","depends_on_id":"bd-7pwh","type":"parent-child","created_at":"2025-12-22T17:53:43.470984-08:00","created_by":"daemon"},{"issue_id":"bd-du9h","depends_on_id":"bd-nmch","type":"blocks","created_at":"2025-12-22T17:53:47.896552-08:00","created_by":"daemon"}]} +{"id":"bd-phtv","title":"bd pin: pinned field overwritten by subsequent bd commands","description":"## Summary\n\nThe `bd pin` command correctly sets `pinned=1` in SQLite, but any subsequent `bd` command (including read-only commands like `bd show`) resets `pinned` to 0.\n\n## Reproduction Steps\n\n```bash\nbd --no-daemon pin \u003cissue-id\u003e --for=max\nsqlite3 .beads/beads.db \"SELECT id, pinned FROM issues WHERE id=\\\"\u003cissue-id\u003e\\\"\"\n# Shows pinned=1 βœ“\n\nbd --no-daemon show \u003cissue-id\u003e --json\nsqlite3 .beads/beads.db \"SELECT id, pinned FROM issues WHERE id=\\\"\u003cissue-id\u003e\\\"\"\n# Shows pinned=0 βœ— WRONG\n```\n\n## Root Cause Investigation\n\n### Prime Suspects\n\n1. **JSONL import overwrites DB** - The `pinned` field has `omitempty` so false values arent in JSONL. When JSONL is imported, it overwrites the DB pinned=1 with default pinned=0.\n\n2. 
**Files to check:**\n - `internal/importer/importer.go` - ImportIssue() may unconditionally set all fields\n - `internal/storage/sqlite/issues.go` - UpsertIssue() may not preserve pinned\n - `cmd/bd/main.go` - ensureStoreActive() may trigger import\n\n### Debug Steps\n\n```bash\n# Add debug logging to track what is writing pinned=0\ngrep -rn \"pinned\" internal/storage/sqlite/*.go\ngrep -rn \"Pinned\" internal/importer/*.go\n```\n\n## Likely Fix\n\nIn `internal/importer/importer.go` or `internal/storage/sqlite/issues.go`:\n\n```go\n// When upserting from JSONL, preserve pinned field if already set\nfunc (s *SQLiteStorage) UpsertIssue(ctx context.Context, issue *types.Issue) error {\n // Check if issue exists and is pinned\n existing, _ := s.GetIssue(ctx, issue.ID)\n if existing != nil \u0026\u0026 existing.Pinned \u0026\u0026 !issue.Pinned {\n // Preserve existing pinned status\n issue.Pinned = existing.Pinned\n }\n // ... rest of upsert\n}\n```\n\nOR the import should skip fields that are omitempty and not present in JSONL:\n\n```go\n// In importer, only update fields that are explicitly set in JSONL\n// Pinned with omitempty means absent = dont change, not absent = false\n```\n\n## Testing\n\n```bash\n# After fix:\nbd --no-daemon pin \u003cissue-id\u003e --for=max\nbd --no-daemon show \u003cissue-id\u003e --json # Should not reset pinned\nbd list --pinned # Should show the pinned issue\nbd hook --agent max # Should show pinned work\n```\n\n## Files to Modify\n\n1. **internal/importer/importer.go** - Preserve pinned on import\n2. **internal/storage/sqlite/issues.go** - UpsertIssue preserve pinned\n3. 
**Add test** in internal/importer/importer_test.go\n\n## Success Criteria\n- `bd pin` survives subsequent bd commands\n- `bd list --pinned` shows pinned issues\n- `bd hook --agent X` shows pinned work\n- Existing tests still pass","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-23T12:32:20.046988-08:00","updated_at":"2025-12-23T13:47:49.936021-08:00","closed_at":"2025-12-23T13:47:49.936021-08:00","dependencies":[{"issue_id":"bd-phtv","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.140151-08:00","created_by":"daemon"}]} +{"id":"bd-t4u1","title":"False positive detection by Kaspersky Antivirus (Trojan)","description":"Kaspersky Antivirus falsely detects beads (bd.exe v0.23.1) as a Trojan (PDM:Trojan.Win32.Generic) and removes it.\nEvent: Malicious object detected\nComponent: System Watcher\nObject name: bd.exe\n","status":"open","priority":1,"issue_type":"task","created_at":"2025-11-20T18:56:12.498187-05:00","updated_at":"2025-11-20T18:56:12.498187-05:00"} +{"id":"bd-aydr.8","title":"Respond to GitHub issue #479 with solution","description":"Once bd reset is implemented and released, respond to GitHub issue #479.\n\n## Response should include\n- Announce the new bd reset command\n- Show basic usage examples\n- Link to any documentation\n- Thank the user for the feedback\n\n## Example response\n```\nThanks for raising this! 
We've added a `bd reset` command to handle this case.\n\nUsage:\n- `bd reset` - Reset to clean state (prompts for confirmation)\n- `bd reset --backup` - Create backup first\n- `bd reset --hard` - Also clean up git history\n\nThis is available in version X.Y.Z.\n```\n\n## Notes\n- Wait until feature is merged and released\n- Consider if issue should be closed or left for user confirmation","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-13T08:45:00.112351+11:00","updated_at":"2025-12-13T06:24:29.562177-08:00","closed_at":"2025-12-13T10:18:06.646796+11:00","dependencies":[{"issue_id":"bd-aydr.8","depends_on_id":"bd-aydr","type":"parent-child","created_at":"2025-12-13T08:45:00.112732+11:00","created_by":"daemon"},{"issue_id":"bd-aydr.8","depends_on_id":"bd-aydr.7","type":"blocks","created_at":"2025-12-13T08:45:12.640243+11:00","created_by":"daemon"}]} +{"id":"bd-co29","title":"Merge: bd-n386","description":"branch: polecat/immortan\ntarget: main\nsource_issue: bd-n386\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:41:45.644113-08:00","updated_at":"2025-12-23T21:21:57.70152-08:00","closed_at":"2025-12-23T21:21:57.70152-08:00"} +{"id":"bd-b3og","title":"Fix TestImportBugIntegration deadlock in importer_test.go","description":"Code health review found internal/importer/importer_test.go has TestImportBugIntegration skipped with:\n\nTODO: Test hangs due to database deadlock - needs investigation\n\nThis indicates a potential unresolved concurrency issue in the importer. 
The test has been skipped for an unknown duration.\n\nFix: Investigate the deadlock, fix the underlying issue, and re-enable the test.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-16T18:17:22.103838-08:00","updated_at":"2025-12-17T23:13:40.529671-08:00","closed_at":"2025-12-17T17:25:26.645901-08:00","dependencies":[{"issue_id":"bd-b3og","depends_on_id":"bd-tggf","type":"blocks","created_at":"2025-12-16T18:19:05.740642-08:00","created_by":"daemon"}]} +{"id":"bd-4p3k","title":"Release v0.34.0","description":"Minor version release for beads v0.34.0. This bead serves as my persistent work assignment; the actual release steps are tracked in an attached wisp.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-22T03:03:20.73092-08:00","updated_at":"2025-12-22T03:05:03.168622-08:00","deleted_at":"2025-12-22T03:05:03.168622-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} +{"id":"bd-eyto","title":"Time-dependent tests may be flaky near TTL boundary","description":"Several tombstone merge tests use time.Now() to create test data: time.Now().Add(-24 * time.Hour), time.Now().Add(-60 * 24 * time.Hour), etc. While these work reliably in practice (24h vs 30d TTL has large margin), they could theoretically be flaky if: 1) Tests run slowly, 2) System clock changes during test, 3) TTL constants change. Recommendation: Consider using a fixed reference time or time injection for deterministic tests. Lower priority since current margin is large. 
Files: internal/merge/merge_test.go:1337-1338, 1352-1353, 1548-1549, 1590-1591","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-05T16:37:02.348143-08:00","updated_at":"2025-12-05T16:37:02.348143-08:00"} +{"id":"bd-z86n","title":"Code Review: PR #551 - Persist close_reason to issues table","description":"Code review of PR #551 which fixes close_reason persistence bug.\n\n## Summary\nThe PR correctly fixes a bug where close_reason was only stored in the events table, not in the issues.close_reason column. This caused `bd show --json` to return empty close_reason.\n\n## What Was Fixed\n- βœ… CloseIssue now updates both close_reason and closed_at\n- βœ… ReOpenIssue clears both close_reason and closed_at\n- βœ… Comprehensive tests added for both storage and CLI layers\n- βœ… Clear documentation in queries.go about dual storage strategy\n\n## Quality Assessment\nβœ… Tests cover both storage layer and CLI JSON output\nβœ… Handles reopen case (clearing close_reason)\nβœ… Good comments explaining dual-storage design\nβœ… No known issues\n\n## Potential Followups\nSee linked issues for suggestions.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:25:06.887069-08:00","updated_at":"2025-12-14T14:25:06.887069-08:00"} +{"id":"bd-hnkg","title":"GH#540: Add silent quick-capture mode (bd q)","description":"Add bd q alias for quick capture that outputs only issue ID. Useful for piping/scripting. 
See GitHub issue #540.","status":"tombstone","priority":2,"issue_type":"feature","created_at":"2025-12-16T01:03:38.260135-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"feature"} +{"id":"bd-kwro.4","title":"Graph Link: duplicates for deduplication","description":"Implement duplicates link type for marking issues as duplicates.\n\nNew command:\n- bd duplicate \u003cid\u003e --of \u003ccanonical\u003e - marks id as duplicate of canonical\n- Auto-closes the duplicate issue\n\nQuery support:\n- bd show \u003cid\u003e shows 'Duplicate of: \u003ccanonical\u003e'\n- bd list --duplicates shows all duplicate pairs\n\nStorage:\n- duplicates column pointing to canonical issue ID\n\nEssential for large issue databases with many similar reports.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:01:36.257223-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-haxi","title":"Restart running daemons","description":"Kill and restart any running bd daemons to pick up new version: pkill -f 'bd daemon' \u0026\u0026 bd daemon --start","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066262-08:00","updated_at":"2025-12-21T13:53:49.757078-08:00","deleted_at":"2025-12-21T13:53:49.757078-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-h0we","title":"Review SQLite indexes and scaling bottlenecks","description":"Audit the beads SQLite schema for:\n\n## Index Review\n- Are all frequently-queried columns indexed?\n- Are compound indexes needed for common query patterns?\n- Any missing indexes on foreign keys or filter columns?\n\n## Scaling Bottlenecks\n- How does performance degrade with 10k, 100k, 1M 
issues?\n- Full table scans in hot paths?\n- JSONL export/import performance at scale\n- Transaction contention in multi-agent scenarios\n\n## Common Query Patterns to Optimize\n- bd ready (status + blocked_by resolution)\n- bd list with filters (status, type, priority, labels)\n- bd show with dependency graph traversal\n- bd sync import/export\n\n## Deliverables\n- Document current indexes\n- Identify missing indexes\n- Benchmark key operations at scale\n- Recommend schema improvements","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T23:41:06.481881-08:00","updated_at":"2025-12-22T22:59:25.178175-08:00","closed_at":"2025-12-22T22:59:25.178175-08:00"} +{"id":"bd-au0.5","title":"Add date and priority filters to bd search","description":"Add date and priority filters to bd search for parity with bd list.\n\n## Current State\nbd search supports: --status, --type, --assignee, --label, --limit\nbd list supports: all of the above PLUS date ranges and priority filters\n\n## Filters to Add\n\n### Priority Filters\n```bash\nbd search \"query\" --priority 1 # Exact priority\nbd search \"query\" --priority-min 0 # P0 and above (higher priority)\nbd search \"query\" --priority-max 2 # P2 and below (lower priority)\n```\n\n### Date Filters\n```bash\nbd search \"query\" --created-after 2025-01-01\nbd search \"query\" --created-before 2025-12-31\nbd search \"query\" --updated-after 2025-01-01\nbd search \"query\" --closed-after 2025-01-01\n```\n\n### Content Filters\n```bash\nbd search \"query\" --desc-contains \"bug\"\nbd search \"query\" --notes-contains \"todo\"\nbd search \"query\" --empty-description # Issues with no description\nbd search \"query\" --no-assignee # Unassigned issues\nbd search \"query\" --no-labels # Issues without labels\n```\n\n## Files to Modify\n\n### 1. 
cmd/bd/search.go\nAdd flag definitions in init():\n```go\nsearchCmd.Flags().IntP(\"priority\", \"p\", -1, \"Filter by exact priority (0-4)\")\nsearchCmd.Flags().Int(\"priority-min\", -1, \"Filter by minimum priority\")\nsearchCmd.Flags().Int(\"priority-max\", -1, \"Filter by maximum priority\")\nsearchCmd.Flags().String(\"created-after\", \"\", \"Filter by creation date (YYYY-MM-DD)\")\nsearchCmd.Flags().String(\"created-before\", \"\", \"Filter by creation date\")\nsearchCmd.Flags().String(\"updated-after\", \"\", \"Filter by update date\")\nsearchCmd.Flags().String(\"updated-before\", \"\", \"Filter by update date\")\nsearchCmd.Flags().String(\"closed-after\", \"\", \"Filter by close date\")\nsearchCmd.Flags().String(\"closed-before\", \"\", \"Filter by close date\")\nsearchCmd.Flags().String(\"desc-contains\", \"\", \"Filter by description content\")\nsearchCmd.Flags().String(\"notes-contains\", \"\", \"Filter by notes content\")\nsearchCmd.Flags().Bool(\"empty-description\", false, \"Filter issues with empty description\")\nsearchCmd.Flags().Bool(\"no-assignee\", false, \"Filter unassigned issues\")\nsearchCmd.Flags().Bool(\"no-labels\", false, \"Filter issues without labels\")\n```\n\n### 2. internal/rpc/protocol.go\nUpdate SearchArgs struct:\n```go\ntype SearchArgs struct {\n Query string\n Filter types.IssueFilter\n // Already has most fields via IssueFilter\n}\n```\n\nNote: types.IssueFilter already has these fields - just need to wire them up!\n\n### 3. cmd/bd/search.go Run function\nParse flags and populate filter:\n```go\nif priority, _ := cmd.Flags().GetInt(\"priority\"); priority \u003e= 0 {\n filter.Priority = \u0026priority\n}\nif createdAfter, _ := cmd.Flags().GetString(\"created-after\"); createdAfter != \"\" {\n t, err := time.Parse(\"2006-01-02\", createdAfter)\n if err != nil {\n FatalError(\"invalid date format for --created-after: %v\", err)\n }\n filter.CreatedAfter = \u0026t\n}\n// ... 
similar for other flags\n```\n\n## Implementation Steps\n\n1. **Check types.IssueFilter** - verify all needed fields exist\n2. **Add flags to search.go** init()\n3. **Parse flags** in Run function\n4. **Pass to SearchIssues** via filter\n5. **Test all combinations**\n\n## Testing\n```bash\n# Create test issues\nbd create \"Test P1\" -p 1\nbd create \"Test P2\" -p 2 --description \"Has description\"\n\n# Test filters\nbd search \"\" --priority 1\nbd search \"\" --priority-min 0 --priority-max 1\nbd search \"\" --empty-description\nbd search \"\" --desc-contains \"description\"\n```\n\n## Success Criteria\n- All filters work in both direct and daemon mode\n- Date parsing handles YYYY-MM-DD format\n- --json output includes filtered results\n- Help text documents all new flags","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-21T21:07:05.496726-05:00","updated_at":"2025-12-23T13:38:28.475606-08:00","closed_at":"2025-12-23T13:38:28.475606-08:00","dependencies":[{"issue_id":"bd-au0.5","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:05.497762-05:00","created_by":"daemon"},{"issue_id":"bd-au0.5","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.657303-08:00","created_by":"daemon"}]} +{"id":"bd-r36u","title":"gt mq list shows empty when MRs exist","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-20T01:13:07.561256-08:00","updated_at":"2025-12-21T17:51:25.891037-08:00","closed_at":"2025-12-21T17:51:25.891037-08:00"} +{"id":"bd-pzw7","title":"gt handoff deadlock at handoff.go:125","notes":"When running 'gt handoff -m \"message\"' after successful MR submit, go panics with 'fatal error: all goroutines are asleep - deadlock\\!' at handoff.go:125. The shutdown request still appears to be sent successfully but the command crashes. 
Stack trace shows issue is in runHandoff select statement.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-19T23:22:12.46315-08:00","updated_at":"2025-12-21T17:51:25.817355-08:00","closed_at":"2025-12-21T17:51:25.817355-08:00"} +{"id":"bd-wc2","title":"Test body-file","description":"This is a test description from a file.\n\nIt has multiple lines.\n","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-17T17:27:20.508724-08:00","updated_at":"2025-12-17T17:28:33.83142-08:00","closed_at":"2025-12-17T17:28:33.83142-08:00"} +{"id":"bd-zgb9","title":"gt polecat done should auto-stop running session","description":"Currently 'gt polecat done' fails if session is running, requiring a separate 'gt session stop' first. This is unnecessary friction - done should just stop the session automatically since that's always what you want.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T04:11:23.899653-08:00","updated_at":"2025-12-23T04:12:13.029479-08:00","closed_at":"2025-12-23T04:12:13.029479-08:00"} +{"id":"bd-c3u","title":"Review PR #512: clarify bd ready docs","description":"Review and merge PR #512 from aspiers. This PR clarifies what bd ready does after git pull in README.md. Simple 1-line change. URL: https://github.com/anthropics/beads/pull/512","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-13T08:15:13.405161+11:00","updated_at":"2025-12-13T07:07:29.641265-08:00","closed_at":"2025-12-13T07:07:29.641265-08:00"} +{"id":"bd-0d5p","title":"Fix TestRunSync_Timeout failing on macOS","description":"The hooks timeout test fails because exec.CommandContext doesn't properly terminate child processes of shell scripts on macOS. 
The test creates a hook that runs 'sleep 60' with a 500ms timeout, but it waits the full 60 seconds.\n\nOptions to fix:\n- Use SysProcAttr{Setpgid: true} to create process group and kill the group\n- Skip test on darwin with build tag\n- Use a different approach for timeout testing\n\nLocation: internal/hooks/hooks_test.go:220-253","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-16T20:52:51.771217-08:00","updated_at":"2025-12-17T23:13:40.532688-08:00","closed_at":"2025-12-17T17:23:55.678799-08:00"} +{"id":"bd-zt59","title":"Deferred HOP schema additions (P2/P3)","description":"Deferred from bd-7pwh after review. Add when semantics are clearer and actually needed:\n\n- assignee_ref: Structured EntityRef alongside string assignee\n- work_type: 'mutex' vs 'open_competition' (everything is mutex in v0.1)\n- crystallizes: bool for work that compounds vs evaporates (can derive from issue_type)\n- cross_refs: URIs to beads in other repos (needs federation first)\n- skill_vector: []float32 embeddings placeholder (YAGNI)\n\nThese can be added later without breaking changes (all optional fields).","status":"deferred","priority":4,"issue_type":"task","created_at":"2025-12-22T17:54:20.02496-08:00","updated_at":"2025-12-23T12:27:02.445219-08:00"} +{"id":"bd-om4a","title":"Support external: prefix in blocked_by field","description":"Allow blocked_by to include external project references:\n\n```bash\nbd update gt-xyz --blocked-by=\"external:beads:mol-run-assignee\"\n```\n\nSyntax: `external:\u003cproject\u003e:\u003ccapability\u003e`\n- project: name from external_projects config\n- capability: matches provides:\u003ccapability\u003e label in target project\n\nStorage: Store as-is in blocked_by array. 
Resolution happens at query time.\n\nPart of cross-project dependency system.\nSee: gastown/docs/cross-project-deps.md","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-12-21T22:37:29.725196-08:00","updated_at":"2025-12-21T23:07:48.127045-08:00","closed_at":"2025-12-21T23:07:48.127045-08:00"} +{"id":"bd-zwtq","title":"Run bd doctor at end of bd init to verify setup","description":"Run bd doctor diagnostics at end of bd init (after line 398 in init.go). If issues found, warn user immediately: '⚠ Setup incomplete. Run bd doctor --fix to complete setup.' Catches configuration problems before user encounters them in normal workflow.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-11-21T23:16:09.596778-08:00","updated_at":"2025-12-23T04:20:51.887338-08:00","closed_at":"2025-12-23T04:20:51.887338-08:00","dependencies":[{"issue_id":"bd-zwtq","depends_on_id":"bd-tbz3","type":"parent-child","created_at":"2025-11-21T23:16:09.597617-08:00","created_by":"daemon"}]} +{"id":"bd-z3rf","title":"dave Handoff","description":"attached_molecule: bd-ifuw\nattached_at: 2025-12-23T12:49:44Z","status":"pinned","priority":2,"issue_type":"task","created_at":"2025-12-23T04:33:42.874554-08:00","updated_at":"2025-12-23T04:49:44.1246-08:00"} +{"id":"bd-05a8","title":"Split large cmd/bd files: doctor.go (2948 lines), sync.go (2121 lines)","description":"Code health review found several oversized files:\n\n1. doctor.go - 2948 lines, 48 functions mixed together\n - Should split into doctor/checks/*.go for individual diagnostics\n - applyFixes() and previewFixes() are nearly identical\n\n2. sync.go - 2121 lines\n - ZFC (Zero Flush Check) logic embedded inline (lines 213-247)\n - Multiple mode handlers should be extracted\n\n3. init.go - 1732 lines\n4. compact.go - 1097 lines\n5. 
show.go - 1069 lines\n\nRecommendation: Extract into focused sub-packages or split into logical files.","status":"in_progress","priority":2,"issue_type":"task","created_at":"2025-12-16T18:17:18.169927-08:00","updated_at":"2025-12-23T22:31:50.769229-08:00"} +{"id":"bd-udsi","title":"Async Gates for Agent Coordination","description":"Agents need an async primitive for waiting on external events (CI completion, API responses, human approval). Gates are wisp issues that block until external conditions are met, managed by the Deacon.\n\n## Core Concepts\n\n**Gate** = wisp issue that blocks until external condition is met\n- Type: gate\n- Phase: wisp (never synced, ephemeral)\n- Assignee: deacon/ (Deacon monitors it)\n- Fields: await_type, await_id, timeout, waiters[]\n\n**Await Types:**\n- gh:run:\u003cid\u003e - GitHub Actions run completion\n- gh:pr:\u003cid\u003e - PR merged/closed\n- timer:\u003cduration\u003e - Simple delay\n- human:\u003cprompt\u003e - Human approval required\n- mail:\u003cpattern\u003e - Wait for mail matching pattern\n\n## Open Questions\n- Should gates live in wisp storage or main storage with wisp flag?\n- Do we need a gate catalog (like molecule catalog)?\n- Should waits-for dep type work with gates?","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-23T11:44:02.711062-08:00","updated_at":"2025-12-23T12:24:43.537615-08:00","closed_at":"2025-12-23T12:24:43.537615-08:00"} +{"id":"bd-pbh.14","title":"Monitor PyPI publish","description":"Watch the PyPI publish action:\nhttps://github.com/steveyegge/beads/actions/workflows/pypi-publish.yml\n\nVerify at: https://pypi.org/project/beads-mcp/0.30.4/\n\nCheck:\n```bash\npip index versions beads-mcp 2\u003e/dev/null | grep -q '0.30.4'\n```\n\n\n```verify\npip index versions beads-mcp 2\u003e/dev/null | grep -q '0.30.4' || curl -s https://pypi.org/pypi/beads-mcp/json | jq -e 
'.releases[\"0.30.4\"]'\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-17T21:19:11.083809-08:00","updated_at":"2025-12-17T21:46:46.320922-08:00","closed_at":"2025-12-17T21:46:46.320922-08:00","dependencies":[{"issue_id":"bd-pbh.14","depends_on_id":"bd-pbh","type":"parent-child","created_at":"2025-12-17T21:19:11.084126-08:00","created_by":"daemon"},{"issue_id":"bd-pbh.14","depends_on_id":"bd-pbh.12","type":"blocks","created_at":"2025-12-17T21:19:11.289698-08:00","created_by":"daemon"}]} +{"id":"bd-tm2p","title":"Polecats get stuck on interactive shell prompts (cp/mv/rm -i)","description":"During swarm operations, polecats frequently get stuck waiting for interactive prompts from shell commands like:\n- cp prompting 'overwrite file? (y/n)'\n- mv prompting 'overwrite file? (y/n)' \n- rm prompting 'remove file?'\n\nThis happens because macOS aliases or shell configs may have -i flags set by default.\n\nRoot cause: Claude Code runs commands that trigger interactive confirmation prompts, but cannot respond to them, causing the agent to hang indefinitely.\n\nObserved in: Multiple polecats during GH issues swarm (Dec 2024)\n- Derrick, Roustabout, Prospector, Warboy all got stuck on y/n prompts\n\nSuggested fixes:\n1. AGENTS.md should instruct agents to always use -f flag with cp/mv/rm\n2. Polecat startup could set shell aliases to use non-interactive versions\n3. bd prime hook could include guidance about non-interactive commands\n4. 
Consider detecting stuck prompts and auto-recovering","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-14T16:51:24.572271-08:00","updated_at":"2025-12-17T23:13:40.536312-08:00","closed_at":"2025-12-17T19:13:04.074424-08:00"} +{"id":"bd-5exm","title":"Merge: bd-49kw","description":"branch: polecat/nux\ntarget: main\nsource_issue: bd-49kw\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:43:23.156375-08:00","updated_at":"2025-12-23T21:21:57.693169-08:00","closed_at":"2025-12-23T21:21:57.693169-08:00"} +{"id":"bd-jvu","title":"Add bd update --parent flag to change issue parent","description":"Allow changing an issue's parent with bd update --parent \u003cnew-parent-id\u003e. Useful for reorganizing tasks under different epics or moving issues between hierarchies. Should update the parent-child dependency relationship.","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-12-17T22:24:07.274485-08:00","updated_at":"2025-12-17T22:34:07.318938-08:00","closed_at":"2025-12-17T22:34:07.318938-08:00"} +{"id":"bd-hkr6","title":"GH#518: Document bd setup command","description":"bd setup is undiscoverable. Add to README/docs. Currently only findable by grepping source. 
See GitHub issue #518.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T01:03:54.664668-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-ajdv","title":"Push release v0.33.2 to remote","description":"Push the commit and tag:\n\n```bash\ngit push \u0026\u0026 git push --tags\n```\n\nVerify on GitHub that the tag appears in releases.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.762058-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-29fb","title":"Implement bd close --continue flag","description":"Auto-advance to next step in molecule when closing an issue. Referenced by gt-um6q, gt-lz13. Needed for molecule navigation workflow.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-23T00:17:55.032875-08:00","updated_at":"2025-12-23T01:26:47.255313-08:00","closed_at":"2025-12-23T01:26:47.255313-08:00"} +{"id":"bd-vzds","title":"Create git tag v0.33.2","description":"Create the release tag:\n\n```bash\ngit tag v0.33.2\n```\n\nVerify: `git tag | grep 0.33.2`","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T16:10:13.761888-08:00","updated_at":"2025-12-21T17:29:31.791368-08:00","deleted_at":"2025-12-21T17:29:31.791368-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-h807","title":"Cross-project dependency support","description":"Enable tracking dependencies across project boundaries.\n\n## Mechanism\n- Producer: `bd ship \u003ccapability\u003e` adds `provides:\u003ccapability\u003e` label\n- Consumer: `blocked_by: external:\u003cproject\u003e:\u003ccapability\u003e`\n- Resolution: `bd ready` checks 
external deps via config\n\n## Design Doc\nSee: gastown/docs/cross-project-deps.md\n\n## Children\n- bd-eijl: bd ship command\n- bd-om4a: external: prefix in blocked_by\n- bd-66w1: external_projects config\n- bd-zmmy: bd ready resolution","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-21T22:38:01.116241-08:00","updated_at":"2025-12-22T00:02:09.271076-08:00","closed_at":"2025-12-22T00:02:09.271076-08:00"} +{"id":"bd-in7","title":"Test message","description":"Hello world","status":"closed","priority":2,"issue_type":"message","created_at":"2025-12-17T23:16:13.184946-08:00","updated_at":"2025-12-18T17:42:26.000073-08:00","closed_at":"2025-12-17T23:37:38.563369-08:00"} +{"id":"bd-mh4w","title":"Rename 'bond' to 'spawn' for instantiation","description":"Rename the bd mol bond command to bd mol spawn for instantiating protos.\n \n- Rename molBondCmd to molSpawnCmd\n- Update command Use/Short/Long descriptions \n- Keep 'bond' available for the new bonding feature\n- Update all documentation references\n- Add 'protomolecule' as easter egg alias for 'proto'","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T00:58:44.529026-08:00","updated_at":"2025-12-21T01:19:42.942819-08:00","closed_at":"2025-12-21T01:19:42.942819-08:00","dependencies":[{"issue_id":"bd-mh4w","depends_on_id":"bd-o5xe","type":"parent-child","created_at":"2025-12-21T00:59:51.167902-08:00","created_by":"daemon"}]} +{"id":"bd-bgr","title":"Test stdin 2","description":"Description from stdin test\n","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-17T17:28:05.41434-08:00","updated_at":"2025-12-17T17:28:33.833288-08:00","closed_at":"2025-12-17T17:28:33.833288-08:00"} +{"id":"bd-6gd","title":"Remove legacy MCP Agent Mail integration","description":"## Summary\n\nRemove the legacy MCP Agent Mail system that requires an external HTTP server. 
Keep the native `bd mail` system which stores messages as git-synced issues.\n\n## Background\n\nTwo mail systems exist in the codebase:\n1. **Legacy Agent Mail** (`bd message`) - External server dependency, complex setup\n2. **Native bd mail** (`bd mail`) - Built-in, git-synced, no dependencies\n\nThe legacy system causes confusion and is no longer needed. Gas Town's Town Mail will use the native `bd mail` system.\n\n## Files to Delete\n\n### CLI Command\n- [ ] `cmd/bd/message.go` - The `bd message` command implementation\n\n### MCP Integration\n- [ ] `integrations/beads-mcp/src/beads_mcp/mail.py` - HTTP wrapper for Agent Mail server\n- [ ] `integrations/beads-mcp/src/beads_mcp/mail_tools.py` - MCP tool definitions\n- [ ] `integrations/beads-mcp/tests/test_mail.py` - Tests for legacy mail\n\n### Documentation\n- [ ] `docs/AGENT_MAIL.md`\n- [ ] `docs/AGENT_MAIL_QUICKSTART.md`\n- [ ] `docs/AGENT_MAIL_DEPLOYMENT.md`\n- [ ] `docs/AGENT_MAIL_MULTI_WORKSPACE_SETUP.md`\n- [ ] `docs/adr/002-agent-mail-integration.md`\n\n## Code to Update\n\n- [ ] Remove `message` command registration from `cmd/bd/main.go`\n- [ ] Remove mail tool imports/registration from MCP server `__init__.py` or `server.py`\n- [ ] Check for any other references to Agent Mail in the codebase\n\n## Verification\n\n- [ ] `bd message` command no longer exists\n- [ ] `bd mail` command still works\n- [ ] MCP server starts without errors\n- [ ] Tests pass\n","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-17T23:04:04.099935-08:00","updated_at":"2025-12-17T23:13:24.128752-08:00","closed_at":"2025-12-17T23:13:24.128752-08:00"} +{"id":"bd-gjla","title":"Test Thread","description":"Initial message for threading 
test","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:19:51.704324-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-gjla","depends_on_id":"bd-f5cc","type":"duplicates","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} +{"id":"bd-u2sc.1","title":"Replace map[string]interface{} with typed JSON response structs","description":"Many CLI commands use map[string]interface{} for JSON output which loses type safety and compile-time error detection.\n\nFiles with map[string]interface{}:\n- cmd/bd/compact.go (10+ instances)\n- cmd/bd/cleanup.go\n- cmd/bd/daemons.go\n- cmd/bd/daemon_lifecycle.go\n\nExample fix:\n```go\n// Before\nresult := map[string]interface{}{\n \"status\": \"ok\",\n \"count\": 42,\n}\n\n// After\ntype CompactResponse struct {\n Status string `json:\"status\"`\n Count int `json:\"count\"`\n}\nresult := CompactResponse{Status: \"ok\", Count: 42}\n```\n\nBenefits:\n- Compile-time type checking\n- IDE autocompletion\n- Easier refactoring\n- Self-documenting API","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-22T14:26:44.088548-08:00","updated_at":"2025-12-22T15:48:22.88824-08:00","closed_at":"2025-12-22T15:48:22.88824-08:00","dependencies":[{"issue_id":"bd-u2sc.1","depends_on_id":"bd-u2sc","type":"parent-child","created_at":"2025-12-22T14:26:44.088931-08:00","created_by":"daemon"}]} +{"id":"bd-4ec8","title":"Widespread double JSON encoding bug in daemon mode RPC calls","description":"Multiple CLI commands had the same double JSON encoding bug found in bd-1048. All commands that called ResolveID via RPC used string(resp.Data) instead of properly unmarshaling the JSON response. 
This caused IDs to retain JSON quotes (\"bd-1048\" instead of bd-1048), which then got double-encoded when passed to subsequent RPC calls.\n\nAffected commands:\n- bd show (3 instances)\n- bd dep add/remove/tree (5 instances)\n- bd label add/remove/list (3 instances)\n- bd reopen (1 instance)\n\nRoot cause: resp.Data is json.RawMessage (already JSON-encoded), so string() conversion preserves quotes.\n\nFix: Replace all string(resp.Data) with json.Unmarshal(resp.Data, \u0026id) for proper deserialization.\n\nAll commands now tested and working correctly with daemon mode.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-11-02T22:33:01.632691-08:00","updated_at":"2025-12-17T23:13:40.533631-08:00","closed_at":"2025-12-17T16:26:05.851197-08:00"} +{"id":"bd-qioh","title":"Standardize error handling: replace direct fmt.Fprintf+os.Exit with FatalError","description":"Standardize error handling in cmd/bd/ using FatalError pattern.\n\n## Current State\n~200+ instances of direct `fmt.Fprintf(os.Stderr, ...) 
+ os.Exit(1)` pattern scattered across cmd/bd/*.go files.\n\n## Target Pattern\n\nUse existing FatalError helper (or create if missing):\n\n```go\n// In cmd/bd/helpers.go or similar\nfunc FatalError(format string, args ...interface{}) {\n fmt.Fprintf(os.Stderr, \"Error: \"+format+\"\\n\", args...)\n os.Exit(1)\n}\n\nfunc FatalErrorf(err error, context string) {\n fmt.Fprintf(os.Stderr, \"Error: %s: %v\\n\", context, err)\n os.Exit(1)\n}\n```\n\n## Transformation\n\n```go\n// Before\nfmt.Fprintf(os.Stderr, \"Error: %v\\n\", err)\nos.Exit(1)\n\n// After\nFatalError(\"%v\", err)\n\n// Before\nfmt.Fprintf(os.Stderr, \"Error: invalid --since duration: %v\\n\", err)\nos.Exit(1)\n\n// After\nFatalError(\"invalid --since duration: %v\", err)\n```\n\n## Files to Update (by occurrence count)\nRun: `grep -c \"os.Exit(1)\" cmd/bd/*.go | sort -t: -k2 -rn | head -20`\n\nPriority files (highest occurrence):\n- close.go\n- show.go\n- sync.go\n- init.go\n- list.go\n- create.go\n- update.go\n\n## Implementation Steps\n\n1. **Check if FatalError exists** in cmd/bd/helpers.go or create it\n2. **Create migration script** or use sed:\n ```bash\n # Find patterns to replace\n grep -rn \"fmt.Fprintf(os.Stderr.*Error.*\\n.*os.Exit(1)\" cmd/bd/\n ```\n3. **Replace systematically** file by file\n4. **Run tests** after each file to verify behavior unchanged\n5. 
**Run linter** to catch any missed patterns\n\n## Verification\n```bash\n# Count remaining direct exits (should be near zero)\ngrep -c \"os.Exit(1)\" cmd/bd/*.go | awk -F: \"{sum+=\\$2} END {print sum}\"\n\n# Run tests\ngo test -short ./cmd/bd/...\n```\n\n## Success Criteria\n- All error exits use FatalError/FatalErrorf\n- Consistent \"Error: \" prefix on all error messages\n- Tests pass\n- No behavior changes (exit codes remain 1)","notes":"## Progress (Dec 2025)\n\nStandardized error handling in 3 major files:\n- compact.go: All 48 os.Exit(1) calls converted to FatalError\n- sync.go: All error patterns converted (kept 1 valid summary exit)\n- migrate.go: 4 patterns converted\n\n## Remaining Work\n~326 fmt.Fprintf(os.Stderr, \"Error:\") patterns remain across ~30 files.\nHigh-count files remaining: show.go (16), dep.go (14), gate.go (28), init.go (11).\n\n## Verification\n- Build compiles successfully\n- Tests pass","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-16T18:17:19.309394-08:00","updated_at":"2025-12-23T14:14:37.939802-08:00","closed_at":"2025-12-23T14:14:37.939802-08:00","dependencies":[{"issue_id":"bd-qioh","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.43514-08:00","created_by":"daemon"}]} +{"id":"bd-jke6","title":"Add covering index (label, issue_id) for label queries","description":"GetIssuesByLabel joins labels table but requires table lookup after using idx_labels_label.\n\n**Query (labels.go:165):**\n```sql\nSELECT ... 
FROM issues i\nJOIN labels l ON i.id = l.issue_id\nWHERE l.label = ?\n```\n\n**Problem:** Current idx_labels_label index doesn't cover issue_id, requiring row lookup.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_labels_label_issue ON labels(label, issue_id);\n```\n\nThis is a covering index - query can be satisfied entirely from the index without touching the labels table rows.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-22T22:58:51.485354-08:00","updated_at":"2025-12-22T23:15:13.839904-08:00","closed_at":"2025-12-22T23:15:13.839904-08:00","dependencies":[{"issue_id":"bd-jke6","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:51.485984-08:00","created_by":"daemon"}]} +{"id":"bd-vgi5","title":"Push version bump to GitHub","description":"git push origin main - triggers CI but no release yet.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:05.363604-08:00","updated_at":"2025-12-18T22:46:57.50777-08:00","closed_at":"2025-12-18T22:46:57.50777-08:00","dependencies":[{"issue_id":"bd-vgi5","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.87736-08:00","created_by":"daemon"},{"issue_id":"bd-vgi5","depends_on_id":"bd-3ggb","type":"blocks","created_at":"2025-12-18T22:43:21.078208-08:00","created_by":"daemon"}]} +{"id":"bd-y7j8","title":"Update info.go versionChanges","description":"Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for test-squash","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-21T13:52:33.066625-08:00","updated_at":"2025-12-21T13:53:49.554496-08:00","deleted_at":"2025-12-21T13:53:49.554496-08:00","deleted_by":"stevey","delete_reason":"manual delete","original_type":"task"} +{"id":"bd-cb64c226.6","title":"Verify MCP Server Compatibility","description":"Ensure MCP server works with cache-free 
daemon","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-10-27T22:56:03.241615-07:00","updated_at":"2025-12-17T23:18:29.109644-08:00","deleted_at":"2025-12-17T23:18:29.109644-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} diff --git a/CLAUDE.md b/CLAUDE.md index fbe7caaa..50aa29c6 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -740,17 +740,6 @@ bd close bd-42 --reason "Completed" --json - `3` - Low (polish, optimization) - `4` - Backlog (future ideas) -### Dependencies: Avoid the Temporal Trap - -When adding dependencies, think "X **needs** Y" not "X **comes before** Y": - -```bash -# ❌ WRONG: "Phase 1 blocks Phase 2" β†’ bd dep add phase1 phase2 -# βœ… RIGHT: "Phase 2 needs Phase 1" β†’ bd dep add phase2 phase1 -``` - -Verify with `bd blocked` - tasks should be blocked by prerequisites, not dependents. - ### Workflow for AI Agents 1. **Check your inbox**: `gt mail inbox` (from your cwd, not ~/gt) diff --git a/cmd/bd/compact.go b/cmd/bd/compact.go index b95fb469..214bdcbe 100644 --- a/cmd/bd/compact.go +++ b/cmd/bd/compact.go @@ -166,7 +166,8 @@ Examples: } else { sqliteStore, ok := store.(*sqlite.SQLiteStorage) if !ok { - FatalError("compact requires SQLite storage") + fmt.Fprintf(os.Stderr, "Error: compact requires SQLite storage\n") + os.Exit(1) } runCompactStats(ctx, sqliteStore) } @@ -187,20 +188,26 @@ Examples: // Check for exactly one mode if activeModes == 0 { - FatalError("must specify one mode: --analyze, --apply, or --auto") + fmt.Fprintf(os.Stderr, "Error: must specify one mode: --analyze, --apply, or --auto\n") + os.Exit(1) } if activeModes > 1 { - FatalError("cannot use multiple modes together (--analyze, --apply, --auto are mutually exclusive)") + fmt.Fprintf(os.Stderr, "Error: cannot use multiple modes together (--analyze, --apply, --auto are mutually exclusive)\n") + os.Exit(1) } // Handle analyze mode (requires direct database access) if compactAnalyze { if err := ensureDirectMode("compact 
--analyze requires direct database access"); err != nil { - FatalErrorWithHint(fmt.Sprintf("%v", err), "Use --no-daemon flag to bypass daemon and access database directly") + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + fmt.Fprintf(os.Stderr, "Hint: Use --no-daemon flag to bypass daemon and access database directly\n") + os.Exit(1) } sqliteStore, ok := store.(*sqlite.SQLiteStorage) if !ok { - FatalErrorWithHint("failed to open database in direct mode", "Ensure .beads/beads.db exists and is readable") + fmt.Fprintf(os.Stderr, "Error: failed to open database in direct mode\n") + fmt.Fprintf(os.Stderr, "Hint: Ensure .beads/beads.db exists and is readable\n") + os.Exit(1) } runCompactAnalyze(ctx, sqliteStore) return @@ -209,17 +216,23 @@ Examples: // Handle apply mode (requires direct database access) if compactApply { if err := ensureDirectMode("compact --apply requires direct database access"); err != nil { - FatalErrorWithHint(fmt.Sprintf("%v", err), "Use --no-daemon flag to bypass daemon and access database directly") + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + fmt.Fprintf(os.Stderr, "Hint: Use --no-daemon flag to bypass daemon and access database directly\n") + os.Exit(1) } if compactID == "" { - FatalError("--apply requires --id") + fmt.Fprintf(os.Stderr, "Error: --apply requires --id\n") + os.Exit(1) } if compactSummary == "" { - FatalError("--apply requires --summary") + fmt.Fprintf(os.Stderr, "Error: --apply requires --summary\n") + os.Exit(1) } sqliteStore, ok := store.(*sqlite.SQLiteStorage) if !ok { - FatalErrorWithHint("failed to open database in direct mode", "Ensure .beads/beads.db exists and is readable") + fmt.Fprintf(os.Stderr, "Error: failed to open database in direct mode\n") + fmt.Fprintf(os.Stderr, "Hint: Ensure .beads/beads.db exists and is readable\n") + os.Exit(1) } runCompactApply(ctx, sqliteStore) return @@ -235,13 +248,16 @@ Examples: // Validation checks if compactID != "" && compactAll { - FatalError("cannot use --id and --all 
together") + fmt.Fprintf(os.Stderr, "Error: cannot use --id and --all together\n") + os.Exit(1) } if compactForce && compactID == "" { - FatalError("--force requires --id") + fmt.Fprintf(os.Stderr, "Error: --force requires --id\n") + os.Exit(1) } if compactID == "" && !compactAll && !compactDryRun { - FatalError("must specify --all, --id, or --dry-run") + fmt.Fprintf(os.Stderr, "Error: must specify --all, --id, or --dry-run\n") + os.Exit(1) } // Use RPC if daemon available, otherwise direct mode @@ -253,12 +269,14 @@ Examples: // Fallback to direct mode apiKey := os.Getenv("ANTHROPIC_API_KEY") if apiKey == "" && !compactDryRun { - FatalError("--auto mode requires ANTHROPIC_API_KEY environment variable") + fmt.Fprintf(os.Stderr, "Error: --auto mode requires ANTHROPIC_API_KEY environment variable\n") + os.Exit(1) } sqliteStore, ok := store.(*sqlite.SQLiteStorage) if !ok { - FatalError("compact requires SQLite storage") + fmt.Fprintf(os.Stderr, "Error: compact requires SQLite storage\n") + os.Exit(1) } config := &compact.Config{ @@ -271,7 +289,8 @@ Examples: compactor, err := compact.New(sqliteStore, apiKey, config) if err != nil { - FatalError("failed to create compactor: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to create compactor: %v\n", err) + os.Exit(1) } if compactID != "" { @@ -290,16 +309,19 @@ func runCompactSingle(ctx context.Context, compactor *compact.Compactor, store * if !compactForce { eligible, reason, err := store.CheckEligibility(ctx, issueID, compactTier) if err != nil { - FatalError("failed to check eligibility: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to check eligibility: %v\n", err) + os.Exit(1) } if !eligible { - FatalError("%s is not eligible for Tier %d compaction: %s", issueID, compactTier, reason) + fmt.Fprintf(os.Stderr, "Error: %s is not eligible for Tier %d compaction: %s\n", issueID, compactTier, reason) + os.Exit(1) } } issue, err := store.GetIssue(ctx, issueID) if err != nil { - FatalError("failed to get issue: %v", 
err) + fmt.Fprintf(os.Stderr, "Error: failed to get issue: %v\n", err) + os.Exit(1) } originalSize := len(issue.Description) + len(issue.Design) + len(issue.Notes) + len(issue.AcceptanceCriteria) @@ -327,16 +349,19 @@ func runCompactSingle(ctx context.Context, compactor *compact.Compactor, store * if compactTier == 1 { compactErr = compactor.CompactTier1(ctx, issueID) } else { - FatalError("Tier 2 compaction not yet implemented") + fmt.Fprintf(os.Stderr, "Error: Tier 2 compaction not yet implemented\n") + os.Exit(1) } if compactErr != nil { - FatalError("%v", compactErr) + fmt.Fprintf(os.Stderr, "Error: %v\n", compactErr) + os.Exit(1) } issue, err = store.GetIssue(ctx, issueID) if err != nil { - FatalError("failed to get updated issue: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to get updated issue: %v\n", err) + os.Exit(1) } compactedSize := len(issue.Description) @@ -382,7 +407,8 @@ func runCompactAll(ctx context.Context, compactor *compact.Compactor, store *sql if compactTier == 1 { tier1, err := store.GetTier1Candidates(ctx) if err != nil { - FatalError("failed to get candidates: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to get candidates: %v\n", err) + os.Exit(1) } for _, c := range tier1 { candidates = append(candidates, c.IssueID) @@ -390,7 +416,8 @@ func runCompactAll(ctx context.Context, compactor *compact.Compactor, store *sql } else { tier2, err := store.GetTier2Candidates(ctx) if err != nil { - FatalError("failed to get candidates: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to get candidates: %v\n", err) + os.Exit(1) } for _, c := range tier2 { candidates = append(candidates, c.IssueID) @@ -444,7 +471,8 @@ func runCompactAll(ctx context.Context, compactor *compact.Compactor, store *sql results, err := compactor.CompactTier1Batch(ctx, candidates) if err != nil { - FatalError("batch compaction failed: %v", err) + fmt.Fprintf(os.Stderr, "Error: batch compaction failed: %v\n", err) + os.Exit(1) } successCount := 0 @@ -507,12 +535,14 @@ 
func runCompactAll(ctx context.Context, compactor *compact.Compactor, store *sql func runCompactStats(ctx context.Context, store *sqlite.SQLiteStorage) { tier1, err := store.GetTier1Candidates(ctx) if err != nil { - FatalError("failed to get Tier 1 candidates: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to get Tier 1 candidates: %v\n", err) + os.Exit(1) } tier2, err := store.GetTier2Candidates(ctx) if err != nil { - FatalError("failed to get Tier 2 candidates: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to get Tier 2 candidates: %v\n", err) + os.Exit(1) } tier1Size := 0 @@ -578,20 +608,24 @@ func progressBar(current, total int) string { //nolint:unparam // ctx may be used in future for cancellation func runCompactRPC(_ context.Context) { if compactID != "" && compactAll { - FatalError("cannot use --id and --all together") + fmt.Fprintf(os.Stderr, "Error: cannot use --id and --all together\n") + os.Exit(1) } if compactForce && compactID == "" { - FatalError("--force requires --id") + fmt.Fprintf(os.Stderr, "Error: --force requires --id\n") + os.Exit(1) } if compactID == "" && !compactAll && !compactDryRun { - FatalError("must specify --all, --id, or --dry-run") + fmt.Fprintf(os.Stderr, "Error: must specify --all, --id, or --dry-run\n") + os.Exit(1) } apiKey := os.Getenv("ANTHROPIC_API_KEY") if apiKey == "" && !compactDryRun { - FatalError("ANTHROPIC_API_KEY environment variable not set") + fmt.Fprintf(os.Stderr, "Error: ANTHROPIC_API_KEY environment variable not set\n") + os.Exit(1) } args := map[string]interface{}{ @@ -609,11 +643,13 @@ func runCompactRPC(_ context.Context) { resp, err := daemonClient.Execute("compact", args) if err != nil { - FatalError("%v", err) + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) } if !resp.Success { - FatalError("%s", resp.Error) + fmt.Fprintf(os.Stderr, "Error: %s\n", resp.Error) + os.Exit(1) } if jsonOutput { @@ -640,7 +676,8 @@ func runCompactRPC(_ context.Context) { } if err := json.Unmarshal(resp.Data, 
&result); err != nil { - FatalError("parsing response: %v", err) + fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err) + os.Exit(1) } if compactID != "" { @@ -685,11 +722,13 @@ func runCompactStatsRPC() { resp, err := daemonClient.Execute("compact_stats", args) if err != nil { - FatalError("%v", err) + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) } if !resp.Success { - FatalError("%s", resp.Error) + fmt.Fprintf(os.Stderr, "Error: %s\n", resp.Error) + os.Exit(1) } if jsonOutput { @@ -710,7 +749,8 @@ func runCompactStatsRPC() { } if err := json.Unmarshal(resp.Data, &result); err != nil { - FatalError("parsing response: %v", err) + fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err) + os.Exit(1) } fmt.Printf("\nCompaction Statistics\n") @@ -744,7 +784,8 @@ func runCompactAnalyze(ctx context.Context, store *sqlite.SQLiteStorage) { if compactID != "" { issue, err := store.GetIssue(ctx, compactID) if err != nil { - FatalError("failed to get issue: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to get issue: %v\n", err) + os.Exit(1) } sizeBytes := len(issue.Description) + len(issue.Design) + len(issue.Notes) + len(issue.AcceptanceCriteria) @@ -775,7 +816,8 @@ func runCompactAnalyze(ctx context.Context, store *sqlite.SQLiteStorage) { tierCandidates, err = store.GetTier2Candidates(ctx) } if err != nil { - FatalError("failed to get candidates: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to get candidates: %v\n", err) + os.Exit(1) } // Apply limit if specified @@ -837,13 +879,15 @@ func runCompactApply(ctx context.Context, store *sqlite.SQLiteStorage) { // Read from stdin summaryBytes, err = io.ReadAll(os.Stdin) if err != nil { - FatalError("failed to read summary from stdin: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to read summary from stdin: %v\n", err) + os.Exit(1) } } else { // #nosec G304 -- summary file path provided explicitly by operator summaryBytes, err = os.ReadFile(compactSummary) if err != nil { - FatalError("failed to 
read summary file: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to read summary file: %v\n", err) + os.Exit(1) } } summary := string(summaryBytes) @@ -851,7 +895,8 @@ func runCompactApply(ctx context.Context, store *sqlite.SQLiteStorage) { // Get issue issue, err := store.GetIssue(ctx, compactID) if err != nil { - FatalError("failed to get issue: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to get issue: %v\n", err) + os.Exit(1) } // Calculate sizes @@ -862,15 +907,20 @@ func runCompactApply(ctx context.Context, store *sqlite.SQLiteStorage) { if !compactForce { eligible, reason, err := store.CheckEligibility(ctx, compactID, compactTier) if err != nil { - FatalError("failed to check eligibility: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to check eligibility: %v\n", err) + os.Exit(1) } if !eligible { - FatalErrorWithHint(fmt.Sprintf("%s is not eligible for Tier %d compaction: %s", compactID, compactTier, reason), "use --force to bypass eligibility checks") + fmt.Fprintf(os.Stderr, "Error: %s is not eligible for Tier %d compaction: %s\n", compactID, compactTier, reason) + fmt.Fprintf(os.Stderr, "Hint: use --force to bypass eligibility checks\n") + os.Exit(1) } // Enforce size reduction unless --force if compactedSize >= originalSize { - FatalErrorWithHint(fmt.Sprintf("summary (%d bytes) is not shorter than original (%d bytes)", compactedSize, originalSize), "use --force to bypass size validation") + fmt.Fprintf(os.Stderr, "Error: summary (%d bytes) is not shorter than original (%d bytes)\n", compactedSize, originalSize) + fmt.Fprintf(os.Stderr, "Hint: use --force to bypass size validation\n") + os.Exit(1) } } @@ -888,23 +938,27 @@ func runCompactApply(ctx context.Context, store *sqlite.SQLiteStorage) { } if err := store.UpdateIssue(ctx, compactID, updates, actor); err != nil { - FatalError("failed to update issue: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to update issue: %v\n", err) + os.Exit(1) } commitHash := compact.GetCurrentCommitHash() 
if err := store.ApplyCompaction(ctx, compactID, compactTier, originalSize, compactedSize, commitHash); err != nil { - FatalError("failed to apply compaction: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to apply compaction: %v\n", err) + os.Exit(1) } savingBytes := originalSize - compactedSize reductionPct := float64(savingBytes) / float64(originalSize) * 100 eventData := fmt.Sprintf("Tier %d compaction: %d β†’ %d bytes (saved %d, %.1f%%)", compactTier, originalSize, compactedSize, savingBytes, reductionPct) if err := store.AddComment(ctx, compactID, actor, eventData); err != nil { - FatalError("failed to record event: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to record event: %v\n", err) + os.Exit(1) } if err := store.MarkIssueDirty(ctx, compactID); err != nil { - FatalError("failed to mark dirty: %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to mark dirty: %v\n", err) + os.Exit(1) } elapsed := time.Since(start) diff --git a/cmd/bd/config.go b/cmd/bd/config.go index a53b5f4c..7e104b11 100644 --- a/cmd/bd/config.go +++ b/cmd/bd/config.go @@ -7,7 +7,6 @@ import ( "strings" "github.com/spf13/cobra" - "github.com/steveyegge/beads/internal/config" "github.com/steveyegge/beads/internal/syncbranch" ) @@ -50,38 +49,17 @@ var configSetCmd = &cobra.Command{ Short: "Set a configuration value", Args: cobra.ExactArgs(2), Run: func(_ *cobra.Command, args []string) { - key := args[0] - value := args[1] - - // Check if this is a yaml-only key (startup settings like no-db, no-daemon, etc.) - // These must be written to config.yaml, not SQLite, because they're read - // before the database is opened. 
(GH#536) - if config.IsYamlOnlyKey(key) { - if err := config.SetYamlConfig(key, value); err != nil { - fmt.Fprintf(os.Stderr, "Error setting config: %v\n", err) - os.Exit(1) - } - - if jsonOutput { - outputJSON(map[string]interface{}{ - "key": key, - "value": value, - "location": "config.yaml", - }) - } else { - fmt.Printf("Set %s = %s (in config.yaml)\n", key, value) - } - return - } - - // Database-stored config requires direct mode + // Config operations work in direct mode only if err := ensureDirectMode("config set requires direct database access"); err != nil { fmt.Fprintf(os.Stderr, "Error: %v\n", err) os.Exit(1) } - ctx := rootCtx + key := args[0] + value := args[1] + ctx := rootCtx + // Special handling for sync.branch to apply validation if strings.TrimSpace(key) == syncbranch.ConfigKey { if err := syncbranch.Set(ctx, store, value); err != nil { @@ -111,46 +89,25 @@ var configGetCmd = &cobra.Command{ Short: "Get a configuration value", Args: cobra.ExactArgs(1), Run: func(cmd *cobra.Command, args []string) { - key := args[0] - - // Check if this is a yaml-only key (startup settings) - // These are read from config.yaml via viper, not SQLite. 
(GH#536) - if config.IsYamlOnlyKey(key) { - value := config.GetYamlConfig(key) - - if jsonOutput { - outputJSON(map[string]interface{}{ - "key": key, - "value": value, - "location": "config.yaml", - }) - } else { - if value == "" { - fmt.Printf("%s (not set in config.yaml)\n", key) - } else { - fmt.Printf("%s\n", value) - } - } - return - } - - // Database-stored config requires direct mode + // Config operations work in direct mode only if err := ensureDirectMode("config get requires direct database access"); err != nil { fmt.Fprintf(os.Stderr, "Error: %v\n", err) os.Exit(1) } + key := args[0] + ctx := rootCtx var value string var err error - + // Special handling for sync.branch to support env var override if strings.TrimSpace(key) == syncbranch.ConfigKey { value, err = syncbranch.Get(ctx, store) } else { value, err = store.GetConfig(ctx, key) } - + if err != nil { fmt.Fprintf(os.Stderr, "Error getting config: %v\n", err) os.Exit(1) diff --git a/cmd/bd/daemon.go b/cmd/bd/daemon.go index 0cab9bdd..84eb633c 100644 --- a/cmd/bd/daemon.go +++ b/cmd/bd/daemon.go @@ -56,8 +56,6 @@ Run 'bd daemon' with no flags to see available options.`, localMode, _ := cmd.Flags().GetBool("local") logFile, _ := cmd.Flags().GetString("log") foreground, _ := cmd.Flags().GetBool("foreground") - logLevel, _ := cmd.Flags().GetString("log-level") - logJSON, _ := cmd.Flags().GetBool("log-json") // If no operation flags provided, show help if !start && !stop && !stopAll && !status && !health && !metrics { @@ -247,7 +245,7 @@ Run 'bd daemon' with no flags to see available options.`, fmt.Printf("Logging to: %s\n", logFile) } - startDaemon(interval, autoCommit, autoPush, autoPull, localMode, foreground, logFile, pidFile, logLevel, logJSON) + startDaemon(interval, autoCommit, autoPush, autoPull, localMode, foreground, logFile, pidFile) }, } @@ -265,8 +263,6 @@ func init() { daemonCmd.Flags().Bool("metrics", false, "Show detailed daemon metrics") daemonCmd.Flags().String("log", "", "Log file path 
(default: .beads/daemon.log)") daemonCmd.Flags().Bool("foreground", false, "Run in foreground (don't daemonize)") - daemonCmd.Flags().String("log-level", "info", "Log level (debug, info, warn, error)") - daemonCmd.Flags().Bool("log-json", false, "Output logs in JSON format (structured logging)") daemonCmd.Flags().BoolVar(&jsonOutput, "json", false, "Output JSON format") rootCmd.AddCommand(daemonCmd) } @@ -283,9 +279,8 @@ func computeDaemonParentPID() int { } return os.Getppid() } -func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, localMode bool, logPath, pidFile, logLevel string, logJSON bool) { - level := parseLogLevel(logLevel) - logF, log := setupDaemonLogger(logPath, logJSON, level) +func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, localMode bool, logPath, pidFile string) { + logF, log := setupDaemonLogger(logPath) defer func() { _ = logF.Close() }() // Set up signal-aware context for graceful shutdown @@ -295,13 +290,13 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local // Top-level panic recovery to ensure clean shutdown and diagnostics defer func() { if r := recover(); r != nil { - log.Error("daemon crashed", "panic", r) + log.log("PANIC: daemon crashed: %v", r) // Capture stack trace stackBuf := make([]byte, 4096) stackSize := runtime.Stack(stackBuf, false) stackTrace := string(stackBuf[:stackSize]) - log.Error("stack trace", "trace", stackTrace) + log.log("Stack trace:\n%s", stackTrace) // Write crash report to daemon-error file for user visibility var beadsDir string @@ -310,21 +305,21 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local } else if foundDB := beads.FindDatabasePath(); foundDB != "" { beadsDir = filepath.Dir(foundDB) } - + if beadsDir != "" { errFile := filepath.Join(beadsDir, "daemon-error") crashReport := fmt.Sprintf("Daemon crashed at %s\n\nPanic: %v\n\nStack trace:\n%s\n", time.Now().Format(time.RFC3339), r, stackTrace) // 
nolint:gosec // G306: Error file needs to be readable for debugging if err := os.WriteFile(errFile, []byte(crashReport), 0644); err != nil { - log.Warn("could not write crash report", "error", err) + log.log("Warning: could not write crash report: %v", err) } } - + // Clean up PID file _ = os.Remove(pidFile) - - log.Info("daemon terminated after panic") + + log.log("Daemon terminated after panic") } }() @@ -334,8 +329,8 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local if foundDB := beads.FindDatabasePath(); foundDB != "" { daemonDBPath = foundDB } else { - log.Error("no beads database found") - log.Info("hint: run 'bd init' to create a database or set BEADS_DB environment variable") + log.log("Error: no beads database found") + log.log("Hint: run 'bd init' to create a database or set BEADS_DB environment variable") return // Use return instead of os.Exit to allow defers to run } } @@ -381,7 +376,7 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local errFile := filepath.Join(beadsDir, "daemon-error") // nolint:gosec // G306: Error file needs to be readable for debugging if err := os.WriteFile(errFile, []byte(errMsg), 0644); err != nil { - log.Warn("could not write daemon-error file", "error", err) + log.log("Warning: could not write daemon-error file: %v", err) } return // Use return instead of os.Exit to allow defers to run @@ -391,22 +386,24 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local // Validate using canonical name dbBaseName := filepath.Base(daemonDBPath) if dbBaseName != beads.CanonicalDatabaseName { - log.Error("non-canonical database name", "name", dbBaseName, "expected", beads.CanonicalDatabaseName) - log.Info("run 'bd init' to migrate to canonical name") + log.log("Error: Non-canonical database name: %s", dbBaseName) + log.log("Expected: %s", beads.CanonicalDatabaseName) + log.log("") + log.log("Run 'bd init' to migrate to canonical name") return // 
Use return instead of os.Exit to allow defers to run } - log.Info("using database", "path", daemonDBPath) + log.log("Using database: %s", daemonDBPath) // Clear any previous daemon-error file on successful startup errFile := filepath.Join(beadsDir, "daemon-error") if err := os.Remove(errFile); err != nil && !os.IsNotExist(err) { - log.Warn("could not remove daemon-error file", "error", err) + log.log("Warning: could not remove daemon-error file: %v", err) } store, err := sqlite.New(ctx, daemonDBPath) if err != nil { - log.Error("cannot open database", "error", err) + log.log("Error: cannot open database: %v", err) return // Use return instead of os.Exit to allow defers to run } defer func() { _ = store.Close() }() @@ -414,71 +411,73 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local // Enable freshness checking to detect external database file modifications // (e.g., when git merge replaces the database file) store.EnableFreshnessChecking() - log.Info("database opened", "path", daemonDBPath, "freshness_checking", true) + log.log("Database opened: %s (freshness checking enabled)", daemonDBPath) // Auto-upgrade .beads/.gitignore if outdated gitignoreCheck := doctor.CheckGitignore() if gitignoreCheck.Status == "warning" || gitignoreCheck.Status == "error" { - log.Info("upgrading .beads/.gitignore") + log.log("Upgrading .beads/.gitignore...") if err := doctor.FixGitignore(); err != nil { - log.Warn("failed to upgrade .gitignore", "error", err) + log.log("Warning: failed to upgrade .gitignore: %v", err) } else { - log.Info("successfully upgraded .beads/.gitignore") + log.log("Successfully upgraded .beads/.gitignore") } } // Hydrate from multi-repo if configured if results, err := store.HydrateFromMultiRepo(ctx); err != nil { - log.Error("multi-repo hydration failed", "error", err) + log.log("Error: multi-repo hydration failed: %v", err) return // Use return instead of os.Exit to allow defers to run } else if results != nil { - 
log.Info("multi-repo hydration complete") + log.log("Multi-repo hydration complete:") for repo, count := range results { - log.Info("hydrated issues", "repo", repo, "count", count) + log.log(" %s: %d issues", repo, count) } } // Validate database fingerprint (skip in local mode - no git available) if localMode { - log.Info("skipping fingerprint validation (local mode)") + log.log("Skipping fingerprint validation (local mode)") } else if err := validateDatabaseFingerprint(ctx, store, &log); err != nil { if os.Getenv("BEADS_IGNORE_REPO_MISMATCH") != "1" { - log.Error("repository fingerprint validation failed", "error", err) + log.log("Error: %v", err) return // Use return instead of os.Exit to allow defers to run } - log.Warn("repository mismatch ignored (BEADS_IGNORE_REPO_MISMATCH=1)") + log.log("Warning: repository mismatch ignored (BEADS_IGNORE_REPO_MISMATCH=1)") } // Validate schema version matches daemon version versionCtx := context.Background() dbVersion, err := store.GetMetadata(versionCtx, "bd_version") if err != nil && err.Error() != "metadata key not found: bd_version" { - log.Error("failed to read database version", "error", err) + log.log("Error: failed to read database version: %v", err) return // Use return instead of os.Exit to allow defers to run } if dbVersion != "" && dbVersion != Version { - log.Warn("database schema version mismatch", "db_version", dbVersion, "daemon_version", Version) - log.Info("auto-upgrading database to daemon version") + log.log("Warning: Database schema version mismatch") + log.log(" Database version: %s", dbVersion) + log.log(" Daemon version: %s", Version) + log.log(" Auto-upgrading database to daemon version...") // Auto-upgrade database to daemon version // The daemon operates on its own database, so it should always use its own version if err := store.SetMetadata(versionCtx, "bd_version", Version); err != nil { - log.Error("failed to update database version", "error", err) + log.log("Error: failed to update database 
version: %v", err) // Allow override via environment variable for emergencies if os.Getenv("BEADS_IGNORE_VERSION_MISMATCH") != "1" { return // Use return instead of os.Exit to allow defers to run } - log.Warn("proceeding despite version update failure (BEADS_IGNORE_VERSION_MISMATCH=1)") + log.log("Warning: Proceeding despite version update failure (BEADS_IGNORE_VERSION_MISMATCH=1)") } else { - log.Info("database version updated", "version", Version) + log.log(" Database version updated to %s", Version) } } else if dbVersion == "" { // Old database without version metadata - set it now - log.Warn("database missing version metadata", "setting_to", Version) + log.log("Warning: Database missing version metadata, setting to %s", Version) if err := store.SetMetadata(versionCtx, "bd_version", Version); err != nil { - log.Error("failed to set database version", "error", err) + log.log("Error: failed to set database version: %v", err) return // Use return instead of os.Exit to allow defers to run } } @@ -507,7 +506,7 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local // Register daemon in global registry registry, err := daemon.NewRegistry() if err != nil { - log.Warn("failed to create registry", "error", err) + log.log("Warning: failed to create registry: %v", err) } else { entry := daemon.RegistryEntry{ WorkspacePath: workspacePath, @@ -518,14 +517,14 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local StartedAt: time.Now(), } if err := registry.Register(entry); err != nil { - log.Warn("failed to register daemon", "error", err) + log.log("Warning: failed to register daemon: %v", err) } else { - log.Info("registered in global registry") + log.log("Registered in global registry") } // Ensure we unregister on exit defer func() { if err := registry.Unregister(workspacePath, os.Getpid()); err != nil { - log.Warn("failed to unregister daemon", "error", err) + log.log("Warning: failed to unregister daemon: %v", 
err) } }() } @@ -544,16 +543,16 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local // Get parent PID for monitoring (exit if parent dies) parentPID := computeDaemonParentPID() - log.Info("monitoring parent process", "pid", parentPID) + log.log("Monitoring parent process (PID %d)", parentPID) // daemonMode already determined above for SetConfig switch daemonMode { case "events": - log.Info("using event-driven mode") + log.log("Using event-driven mode") jsonlPath := findJSONLPath() if jsonlPath == "" { - log.Error("JSONL path not found, cannot use event-driven mode") - log.Info("falling back to polling mode") + log.log("Error: JSONL path not found, cannot use event-driven mode") + log.log("Falling back to polling mode") runEventLoop(ctx, cancel, ticker, doSync, server, serverErrChan, parentPID, log) } else { // Event-driven mode uses separate export-only and import-only functions @@ -568,10 +567,10 @@ func runDaemonLoop(interval time.Duration, autoCommit, autoPush, autoPull, local runEventDrivenLoop(ctx, cancel, server, serverErrChan, store, jsonlPath, doExport, doAutoImport, autoPull, parentPID, log) } case "poll": - log.Info("using polling mode", "interval", interval) + log.log("Using polling mode (interval: %v)", interval) runEventLoop(ctx, cancel, ticker, doSync, server, serverErrChan, parentPID, log) default: - log.Warn("unknown BEADS_DAEMON_MODE, defaulting to poll", "mode", daemonMode, "valid", "poll, events") + log.log("Unknown BEADS_DAEMON_MODE: %s (valid: poll, events), defaulting to poll", daemonMode) runEventLoop(ctx, cancel, ticker, doSync, server, serverErrChan, parentPID, log) } } diff --git a/cmd/bd/daemon_integration_test.go b/cmd/bd/daemon_integration_test.go index 2e11b0f1..fcc47b11 100644 --- a/cmd/bd/daemon_integration_test.go +++ b/cmd/bd/daemon_integration_test.go @@ -457,7 +457,11 @@ func TestEventLoopSignalHandling(t *testing.T) { // createTestLogger creates a daemonLogger for testing func createTestLogger(t 
*testing.T) daemonLogger { - return newTestLogger() + return daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf("[daemon] "+format, args...) + }, + } } // TestDaemonIntegration_SocketCleanup verifies socket cleanup after daemon stops diff --git a/cmd/bd/daemon_lifecycle.go b/cmd/bd/daemon_lifecycle.go index 2ee0404a..1ca5c669 100644 --- a/cmd/bd/daemon_lifecycle.go +++ b/cmd/bd/daemon_lifecycle.go @@ -369,7 +369,7 @@ func stopAllDaemons() { } // startDaemon starts the daemon (in foreground if requested, otherwise background) -func startDaemon(interval time.Duration, autoCommit, autoPush, autoPull, localMode, foreground bool, logFile, pidFile, logLevel string, logJSON bool) { +func startDaemon(interval time.Duration, autoCommit, autoPush, autoPull, localMode, foreground bool, logFile, pidFile string) { logPath, err := getLogFilePath(logFile) if err != nil { fmt.Fprintf(os.Stderr, "Error: %v\n", err) @@ -378,7 +378,7 @@ func startDaemon(interval time.Duration, autoCommit, autoPush, autoPull, localMo // Run in foreground if --foreground flag set or if we're the forked child process if foreground || os.Getenv("BD_DAEMON_FOREGROUND") == "1" { - runDaemonLoop(interval, autoCommit, autoPush, autoPull, localMode, logPath, pidFile, logLevel, logJSON) + runDaemonLoop(interval, autoCommit, autoPush, autoPull, localMode, logPath, pidFile) return } @@ -406,12 +406,6 @@ func startDaemon(interval time.Duration, autoCommit, autoPush, autoPull, localMo if logFile != "" { args = append(args, "--log", logFile) } - if logLevel != "" && logLevel != "info" { - args = append(args, "--log-level", logLevel) - } - if logJSON { - args = append(args, "--log-json") - } cmd := exec.Command(exe, args...) 
// #nosec G204 - bd daemon command from trusted binary cmd.Env = append(os.Environ(), "BD_DAEMON_FOREGROUND=1") @@ -461,18 +455,18 @@ func setupDaemonLock(pidFile string, dbPath string, log daemonLogger) (*DaemonLo // Detect nested .beads directories (e.g., .beads/.beads/.beads/) cleanPath := filepath.Clean(beadsDir) if strings.Contains(cleanPath, string(filepath.Separator)+".beads"+string(filepath.Separator)+".beads") { - log.Error("nested .beads directory detected", "path", cleanPath) - log.Info("hint: do not run 'bd daemon' from inside .beads/ directory") - log.Info("hint: use absolute paths for BEADS_DB or run from workspace root") + log.log("Error: Nested .beads directory detected: %s", cleanPath) + log.log("Hint: Do not run 'bd daemon' from inside .beads/ directory") + log.log("Hint: Use absolute paths for BEADS_DB or run from workspace root") return nil, fmt.Errorf("nested .beads directory detected") } lock, err := acquireDaemonLock(beadsDir, dbPath) if err != nil { if err == ErrDaemonLocked { - log.Info("daemon already running (lock held), exiting") + log.log("Daemon already running (lock held), exiting") } else { - log.Error("acquiring daemon lock", "error", err) + log.log("Error acquiring daemon lock: %v", err) } return nil, err } @@ -483,11 +477,11 @@ func setupDaemonLock(pidFile string, dbPath string, log daemonLogger) (*DaemonLo if pid, err := strconv.Atoi(strings.TrimSpace(string(data))); err == nil && pid == myPID { // PID file is correct, continue } else { - log.Warn("PID file has wrong PID, overwriting", "expected", myPID, "got", pid) + log.log("PID file has wrong PID (expected %d, got %d), overwriting", myPID, pid) _ = os.WriteFile(pidFile, []byte(fmt.Sprintf("%d\n", myPID)), 0600) } } else { - log.Info("PID file missing after lock acquisition, creating") + log.log("PID file missing after lock acquisition, creating") _ = os.WriteFile(pidFile, []byte(fmt.Sprintf("%d\n", myPID)), 0600) } diff --git a/cmd/bd/daemon_local_test.go 
b/cmd/bd/daemon_local_test.go index 9ed2f18a..33d7b691 100644 --- a/cmd/bd/daemon_local_test.go +++ b/cmd/bd/daemon_local_test.go @@ -122,8 +122,12 @@ func TestCreateLocalSyncFunc(t *testing.T) { t.Fatalf("Failed to create issue: %v", err) } - // Create logger (test output via newTestLogger) - log := newTestLogger() + // Create logger + log := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } // Create and run local sync function doSync := createLocalSyncFunc(ctx, testStore, log) @@ -189,7 +193,11 @@ func TestCreateLocalExportFunc(t *testing.T) { } } - log := newTestLogger() + log := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } doExport := createLocalExportFunc(ctx, testStore, log) doExport() @@ -250,7 +258,11 @@ func TestCreateLocalAutoImportFunc(t *testing.T) { t.Fatalf("Failed to write JSONL: %v", err) } - log := newTestLogger() + log := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } doImport := createLocalAutoImportFunc(ctx, testStore, log) doImport() @@ -367,7 +379,11 @@ func TestLocalModeInNonGitDirectory(t *testing.T) { t.Fatalf("Failed to create issue: %v", err) } - log := newTestLogger() + log := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } // Run local sync (should work without git) doSync := createLocalSyncFunc(ctx, testStore, log) @@ -421,7 +437,11 @@ func TestLocalModeExportImportRoundTrip(t *testing.T) { defer func() { dbPath = oldDBPath }() dbPath = testDBPath - log := newTestLogger() + log := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) 
+ }, + } // Create issues for i := 0; i < 5; i++ { diff --git a/cmd/bd/daemon_logger.go b/cmd/bd/daemon_logger.go index bf085871..ecb29e05 100644 --- a/cmd/bd/daemon_logger.go +++ b/cmd/bd/daemon_logger.go @@ -1,97 +1,23 @@ package main import ( - "io" - "log/slog" - "os" - "strings" + "fmt" + "time" "gopkg.in/natefinch/lumberjack.v2" ) -// daemonLogger wraps slog for daemon logging. -// Provides level-specific methods and backward-compatible log() for migration. +// daemonLogger wraps a logging function for the daemon type daemonLogger struct { - logger *slog.Logger + logFunc func(string, ...interface{}) } -// log is the backward-compatible logging method (maps to Info level). -// Use Info(), Warn(), Error(), Debug() for explicit levels. func (d *daemonLogger) log(format string, args ...interface{}) { - d.logger.Info(format, toSlogArgs(args)...) + d.logFunc(format, args...) } -// Info logs at INFO level. -func (d *daemonLogger) Info(msg string, args ...interface{}) { - d.logger.Info(msg, toSlogArgs(args)...) -} - -// Warn logs at WARN level. -func (d *daemonLogger) Warn(msg string, args ...interface{}) { - d.logger.Warn(msg, toSlogArgs(args)...) -} - -// Error logs at ERROR level. -func (d *daemonLogger) Error(msg string, args ...interface{}) { - d.logger.Error(msg, toSlogArgs(args)...) -} - -// Debug logs at DEBUG level. -func (d *daemonLogger) Debug(msg string, args ...interface{}) { - d.logger.Debug(msg, toSlogArgs(args)...) -} - -// toSlogArgs converts variadic args to slog-compatible key-value pairs. -// If args are already in key-value format (string, value, string, value...), -// they're passed through. Otherwise, they're wrapped as "args" for sprintf-style logs. 
-func toSlogArgs(args []interface{}) []any { - if len(args) == 0 { - return nil - } - // Check if args look like slog key-value pairs (string key followed by value) - // If first arg is a string and we have pairs, treat as slog format - if len(args) >= 2 { - if _, ok := args[0].(string); ok { - // Likely slog-style: "key", value, "key2", value2 - result := make([]any, len(args)) - for i, a := range args { - result[i] = a - } - return result - } - } - // For sprintf-style args, wrap them (caller should use fmt.Sprintf) - result := make([]any, len(args)) - for i, a := range args { - result[i] = a - } - return result -} - -// parseLogLevel converts a log level string to slog.Level. -func parseLogLevel(level string) slog.Level { - switch strings.ToLower(level) { - case "debug": - return slog.LevelDebug - case "info": - return slog.LevelInfo - case "warn", "warning": - return slog.LevelWarn - case "error": - return slog.LevelError - default: - return slog.LevelInfo - } -} - -// setupDaemonLogger creates a structured logger for the daemon. -// Returns the lumberjack logger (for cleanup) and the daemon logger. 
-// -// Parameters: -// - logPath: path to log file (uses lumberjack for rotation) -// - jsonFormat: if true, output JSON; otherwise text format -// - level: log level (debug, info, warn, error) -func setupDaemonLogger(logPath string, jsonFormat bool, level slog.Level) (*lumberjack.Logger, daemonLogger) { +// setupDaemonLogger creates a rotating log file logger for the daemon +func setupDaemonLogger(logPath string) (*lumberjack.Logger, daemonLogger) { maxSizeMB := getEnvInt("BEADS_DAEMON_LOG_MAX_SIZE", 50) maxBackups := getEnvInt("BEADS_DAEMON_LOG_MAX_BACKUPS", 7) maxAgeDays := getEnvInt("BEADS_DAEMON_LOG_MAX_AGE", 30) @@ -105,65 +31,13 @@ func setupDaemonLogger(logPath string, jsonFormat bool, level slog.Level) (*lumb Compress: compress, } - // Create multi-writer to log to both file and stderr (for foreground mode visibility) - var w io.Writer = logF - - // Configure slog handler - opts := &slog.HandlerOptions{ - Level: level, - } - - var handler slog.Handler - if jsonFormat { - handler = slog.NewJSONHandler(w, opts) - } else { - handler = slog.NewTextHandler(w, opts) - } - logger := daemonLogger{ - logger: slog.New(handler), + logFunc: func(format string, args ...interface{}) { + msg := fmt.Sprintf(format, args...) + timestamp := time.Now().Format("2006-01-02 15:04:05") + _, _ = fmt.Fprintf(logF, "[%s] %s\n", timestamp, msg) + }, } return logF, logger } - -// setupDaemonLoggerLegacy is the old signature for backward compatibility during migration. -// TODO: Remove this once all callers are updated to use the new signature. -func setupDaemonLoggerLegacy(logPath string) (*lumberjack.Logger, daemonLogger) { - return setupDaemonLogger(logPath, false, slog.LevelInfo) -} - -// SetupStderrLogger creates a logger that writes to stderr only (no file). -// Useful for foreground mode or testing. 
-func SetupStderrLogger(jsonFormat bool, level slog.Level) daemonLogger { - opts := &slog.HandlerOptions{ - Level: level, - } - - var handler slog.Handler - if jsonFormat { - handler = slog.NewJSONHandler(os.Stderr, opts) - } else { - handler = slog.NewTextHandler(os.Stderr, opts) - } - - return daemonLogger{ - logger: slog.New(handler), - } -} - -// newTestLogger creates a no-op logger for testing. -// Logs are discarded - use this when you don't need to verify log output. -func newTestLogger() daemonLogger { - return daemonLogger{ - logger: slog.New(slog.NewTextHandler(io.Discard, nil)), - } -} - -// newTestLoggerWithWriter creates a logger that writes to the given writer. -// Use this when you need to capture and verify log output in tests. -func newTestLoggerWithWriter(w io.Writer) daemonLogger { - return daemonLogger{ - logger: slog.New(slog.NewTextHandler(w, nil)), - } -} diff --git a/cmd/bd/daemon_server.go b/cmd/bd/daemon_server.go index 81780b85..9ad4f8fc 100644 --- a/cmd/bd/daemon_server.go +++ b/cmd/bd/daemon_server.go @@ -19,21 +19,21 @@ func startRPCServer(ctx context.Context, socketPath string, store storage.Storag serverErrChan := make(chan error, 1) go func() { - log.Info("starting RPC server", "socket", socketPath) + log.log("Starting RPC server: %s", socketPath) if err := server.Start(ctx); err != nil { - log.Error("RPC server error", "error", err) + log.log("RPC server error: %v", err) serverErrChan <- err } }() select { case err := <-serverErrChan: - log.Error("RPC server failed to start", "error", err) + log.log("RPC server failed to start: %v", err) return nil, nil, err case <-server.WaitReady(): - log.Info("RPC server ready (socket listening)") + log.log("RPC server ready (socket listening)") case <-time.After(5 * time.Second): - log.Warn("server didn't signal ready after 5 seconds (may still be starting)") + log.log("WARNING: Server didn't signal ready after 5 seconds (may still be starting)") } return server, serverErrChan, nil @@ -78,35 
+78,35 @@ func runEventLoop(ctx context.Context, cancel context.CancelFunc, ticker *time.T case <-parentCheckTicker.C: // Check if parent process is still alive if !checkParentProcessAlive(parentPID) { - log.Info("parent process died, shutting down daemon", "parent_pid", parentPID) + log.log("Parent process (PID %d) died, shutting down daemon", parentPID) cancel() if err := server.Stop(); err != nil { - log.Error("stopping server", "error", err) + log.log("Error stopping server: %v", err) } return } case sig := <-sigChan: if isReloadSignal(sig) { - log.Info("received reload signal, ignoring (daemon continues running)") + log.log("Received reload signal, ignoring (daemon continues running)") continue } - log.Info("received signal, shutting down gracefully", "signal", sig) + log.log("Received signal %v, shutting down gracefully...", sig) cancel() if err := server.Stop(); err != nil { - log.Error("stopping RPC server", "error", err) + log.log("Error stopping RPC server: %v", err) } return case <-ctx.Done(): - log.Info("context canceled, shutting down") + log.log("Context canceled, shutting down") if err := server.Stop(); err != nil { - log.Error("stopping RPC server", "error", err) + log.log("Error stopping RPC server: %v", err) } return case err := <-serverErrChan: - log.Error("RPC server failed", "error", err) + log.log("RPC server failed: %v", err) cancel() if err := server.Stop(); err != nil { - log.Error("stopping RPC server", "error", err) + log.log("Error stopping RPC server: %v", err) } return } diff --git a/cmd/bd/daemon_sync_branch_test.go b/cmd/bd/daemon_sync_branch_test.go index d6731347..78b5c810 100644 --- a/cmd/bd/daemon_sync_branch_test.go +++ b/cmd/bd/daemon_sync_branch_test.go @@ -772,11 +772,13 @@ func TestSyncBranchIntegration_EndToEnd(t *testing.T) { // Helper types for testing func newTestSyncBranchLogger() (daemonLogger, *string) { - // Note: With slog, we can't easily capture formatted messages like before. 
- // For tests that need to verify log output, use strings.Builder and newTestLoggerWithWriter. - // This helper is kept for backward compatibility but messages won't be captured. messages := "" - return newTestLogger(), &messages + logger := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + messages += "\n" + format + }, + } + return logger, &messages } // TestSyncBranchConfigChange tests changing sync.branch after worktree exists diff --git a/cmd/bd/daemon_sync_test.go b/cmd/bd/daemon_sync_test.go index 42060763..a96e94e2 100644 --- a/cmd/bd/daemon_sync_test.go +++ b/cmd/bd/daemon_sync_test.go @@ -335,7 +335,11 @@ func TestExportUpdatesMetadata(t *testing.T) { // Update metadata using the actual daemon helper function (bd-ar2.3 fix) // This verifies that updateExportMetadata (used by createExportFunc and createSyncFunc) works correctly - mockLogger := newTestLogger() + mockLogger := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } updateExportMetadata(ctx, store, jsonlPath, mockLogger, "") // Verify metadata was set (renamed from last_import_hash to jsonl_content_hash - bd-39o) @@ -434,7 +438,11 @@ func TestUpdateExportMetadataMultiRepo(t *testing.T) { } // Create mock logger - mockLogger := newTestLogger() + mockLogger := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } // Update metadata for each repo with different keys (bd-ar2.2 multi-repo support) updateExportMetadata(ctx, store, jsonlPath1, mockLogger, jsonlPath1) @@ -546,7 +554,11 @@ func TestExportWithMultiRepoConfigUpdatesAllMetadata(t *testing.T) { // Simulate multi-repo export flow (as in createExportFunc) // This tests the full integration: getMultiRepoJSONLPaths -> getRepoKeyForPath -> updateExportMetadata - mockLogger := newTestLogger() + mockLogger := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) 
+ }, + } // Simulate multi-repo mode with stable keys multiRepoPaths := []string{primaryJSONL, additionalJSONL} @@ -664,7 +676,11 @@ func TestUpdateExportMetadataInvalidKeySuffix(t *testing.T) { } // Create mock logger - mockLogger := newTestLogger() + mockLogger := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } // Update metadata with keySuffix containing ':' (bd-web8: should be auto-sanitized) // This simulates Windows absolute paths like "C:\Users\..." diff --git a/cmd/bd/daemon_watcher_test.go b/cmd/bd/daemon_watcher_test.go index d7146b07..26236ec2 100644 --- a/cmd/bd/daemon_watcher_test.go +++ b/cmd/bd/daemon_watcher_test.go @@ -15,7 +15,9 @@ import ( // newMockLogger creates a daemonLogger that does nothing func newMockLogger() daemonLogger { - return newTestLogger() + return daemonLogger{ + logFunc: func(format string, args ...interface{}) {}, + } } func TestFileWatcher_JSONLChangeDetection(t *testing.T) { diff --git a/cmd/bd/delete_test.go b/cmd/bd/delete_test.go index aded261c..fc07cd6a 100644 --- a/cmd/bd/delete_test.go +++ b/cmd/bd/delete_test.go @@ -272,330 +272,3 @@ func countJSONLIssuesTest(t *testing.T, jsonlPath string) int { } return count } - -// TestCreateTombstoneWrapper tests the createTombstone wrapper function -func TestCreateTombstoneWrapper(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - tmpDir := t.TempDir() - beadsDir := filepath.Join(tmpDir, ".beads") - testDB := filepath.Join(beadsDir, "beads.db") - - s := newTestStore(t, testDB) - ctx := context.Background() - - // Save and restore global store - oldStore := store - defer func() { store = oldStore }() - store = s - - t.Run("successful tombstone creation", func(t *testing.T) { - issue := &types.Issue{ - Title: "Test Issue", - Description: "Issue to be tombstoned", - Status: types.StatusOpen, - Priority: 2, - IssueType: "task", - } - if err := s.CreateIssue(ctx, issue, "test"); err 
!= nil { - t.Fatalf("Failed to create issue: %v", err) - } - - err := createTombstone(ctx, issue.ID, "test-actor", "Test deletion reason") - if err != nil { - t.Fatalf("createTombstone failed: %v", err) - } - - // Verify tombstone status - updated, err := s.GetIssue(ctx, issue.ID) - if err != nil { - t.Fatalf("GetIssue failed: %v", err) - } - if updated == nil { - t.Fatal("Issue should still exist as tombstone") - } - if updated.Status != types.StatusTombstone { - t.Errorf("Expected status %s, got %s", types.StatusTombstone, updated.Status) - } - }) - - t.Run("tombstone with actor and reason tracking", func(t *testing.T) { - issue := &types.Issue{ - Title: "Issue with tracking", - Description: "Check actor/reason", - Status: types.StatusOpen, - Priority: 1, - IssueType: "bug", - } - if err := s.CreateIssue(ctx, issue, "test"); err != nil { - t.Fatalf("Failed to create issue: %v", err) - } - - actor := "admin-user" - reason := "Duplicate issue" - err := createTombstone(ctx, issue.ID, actor, reason) - if err != nil { - t.Fatalf("createTombstone failed: %v", err) - } - - // Verify actor and reason were recorded - updated, err := s.GetIssue(ctx, issue.ID) - if err != nil { - t.Fatalf("GetIssue failed: %v", err) - } - if updated.DeletedBy != actor { - t.Errorf("Expected DeletedBy %q, got %q", actor, updated.DeletedBy) - } - if updated.DeleteReason != reason { - t.Errorf("Expected DeleteReason %q, got %q", reason, updated.DeleteReason) - } - }) - - t.Run("error when issue does not exist", func(t *testing.T) { - err := createTombstone(ctx, "nonexistent-issue-id", "actor", "reason") - if err == nil { - t.Error("Expected error for non-existent issue") - } - }) - - t.Run("verify tombstone preserves original type", func(t *testing.T) { - issue := &types.Issue{ - Title: "Feature issue", - Description: "Should preserve type", - Status: types.StatusOpen, - Priority: 2, - IssueType: types.TypeFeature, - } - if err := s.CreateIssue(ctx, issue, "test"); err != nil { - 
t.Fatalf("Failed to create issue: %v", err) - } - - err := createTombstone(ctx, issue.ID, "actor", "reason") - if err != nil { - t.Fatalf("createTombstone failed: %v", err) - } - - updated, err := s.GetIssue(ctx, issue.ID) - if err != nil { - t.Fatalf("GetIssue failed: %v", err) - } - if updated.OriginalType != string(types.TypeFeature) { - t.Errorf("Expected OriginalType %q, got %q", types.TypeFeature, updated.OriginalType) - } - }) - - t.Run("verify audit trail recorded", func(t *testing.T) { - issue := &types.Issue{ - Title: "Issue for audit", - Description: "Check event recording", - Status: types.StatusOpen, - Priority: 2, - IssueType: "task", - } - if err := s.CreateIssue(ctx, issue, "test"); err != nil { - t.Fatalf("Failed to create issue: %v", err) - } - - err := createTombstone(ctx, issue.ID, "audit-actor", "audit-reason") - if err != nil { - t.Fatalf("createTombstone failed: %v", err) - } - - // Verify an event was recorded - events, err := s.GetEvents(ctx, issue.ID, 100) - if err != nil { - t.Fatalf("GetEvents failed: %v", err) - } - - found := false - for _, e := range events { - if e.EventType == "deleted" && e.Actor == "audit-actor" { - found = true - break - } - } - if !found { - t.Error("Expected 'deleted' event in audit trail") - } - }) -} - -// TestDeleteIssueWrapper tests the deleteIssue wrapper function -func TestDeleteIssueWrapper(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - tmpDir := t.TempDir() - beadsDir := filepath.Join(tmpDir, ".beads") - testDB := filepath.Join(beadsDir, "beads.db") - - s := newTestStore(t, testDB) - ctx := context.Background() - - // Save and restore global store - oldStore := store - defer func() { store = oldStore }() - store = s - - t.Run("successful issue deletion", func(t *testing.T) { - issue := &types.Issue{ - Title: "Issue to delete", - Description: "Will be permanently deleted", - Status: types.StatusOpen, - Priority: 2, - IssueType: "task", - } - if err := 
s.CreateIssue(ctx, issue, "test"); err != nil { - t.Fatalf("Failed to create issue: %v", err) - } - - err := deleteIssue(ctx, issue.ID) - if err != nil { - t.Fatalf("deleteIssue failed: %v", err) - } - - // Verify issue is gone - deleted, err := s.GetIssue(ctx, issue.ID) - if err != nil { - t.Fatalf("GetIssue failed: %v", err) - } - if deleted != nil { - t.Error("Issue should be completely deleted") - } - }) - - t.Run("error on non-existent issue", func(t *testing.T) { - err := deleteIssue(ctx, "nonexistent-issue-id") - if err == nil { - t.Error("Expected error for non-existent issue") - } - }) - - t.Run("verify dependencies are removed", func(t *testing.T) { - // Create two issues with a dependency - issue1 := &types.Issue{ - Title: "Blocker issue", - Status: types.StatusOpen, - Priority: 1, - IssueType: "task", - } - issue2 := &types.Issue{ - Title: "Dependent issue", - Status: types.StatusOpen, - Priority: 2, - IssueType: "task", - } - if err := s.CreateIssue(ctx, issue1, "test"); err != nil { - t.Fatalf("Failed to create issue1: %v", err) - } - if err := s.CreateIssue(ctx, issue2, "test"); err != nil { - t.Fatalf("Failed to create issue2: %v", err) - } - - // Add dependency: issue2 depends on issue1 - dep := &types.Dependency{ - IssueID: issue2.ID, - DependsOnID: issue1.ID, - Type: types.DepBlocks, - } - if err := s.AddDependency(ctx, dep, "test"); err != nil { - t.Fatalf("Failed to add dependency: %v", err) - } - - // Delete issue1 (the blocker) - err := deleteIssue(ctx, issue1.ID) - if err != nil { - t.Fatalf("deleteIssue failed: %v", err) - } - - // Verify issue2 no longer has dependencies - deps, err := s.GetDependencies(ctx, issue2.ID) - if err != nil { - t.Fatalf("GetDependencies failed: %v", err) - } - if len(deps) > 0 { - t.Errorf("Expected no dependencies after deleting blocker, got %d", len(deps)) - } - }) - - t.Run("verify issue removed from database", func(t *testing.T) { - issue := &types.Issue{ - Title: "Verify removal", - Status: 
types.StatusOpen, - Priority: 2, - IssueType: "task", - } - if err := s.CreateIssue(ctx, issue, "test"); err != nil { - t.Fatalf("Failed to create issue: %v", err) - } - - // Get statistics before delete - statsBefore, err := s.GetStatistics(ctx) - if err != nil { - t.Fatalf("GetStatistics failed: %v", err) - } - - err = deleteIssue(ctx, issue.ID) - if err != nil { - t.Fatalf("deleteIssue failed: %v", err) - } - - // Get statistics after delete - statsAfter, err := s.GetStatistics(ctx) - if err != nil { - t.Fatalf("GetStatistics failed: %v", err) - } - - if statsAfter.TotalIssues != statsBefore.TotalIssues-1 { - t.Errorf("Expected total issues to decrease by 1, was %d now %d", - statsBefore.TotalIssues, statsAfter.TotalIssues) - } - }) -} - -func TestCreateTombstoneUnsupportedStorage(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - oldStore := store - defer func() { store = oldStore }() - - // Set store to nil - the type assertion will fail - store = nil - - ctx := context.Background() - err := createTombstone(ctx, "any-id", "actor", "reason") - if err == nil { - t.Error("Expected error when storage is nil") - } - expectedMsg := "tombstone operation not supported by this storage backend" - if err.Error() != expectedMsg { - t.Errorf("Expected error %q, got %q", expectedMsg, err.Error()) - } -} - -func TestDeleteIssueUnsupportedStorage(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - oldStore := store - defer func() { store = oldStore }() - - // Set store to nil - the type assertion will fail - store = nil - - ctx := context.Background() - err := deleteIssue(ctx, "any-id") - if err == nil { - t.Error("Expected error when storage is nil") - } - expectedMsg := "delete operation not supported by this storage backend" - if err.Error() != expectedMsg { - t.Errorf("Expected error %q, got %q", expectedMsg, err.Error()) - } -} diff --git a/cmd/bd/doctor.go b/cmd/bd/doctor.go 
index 5e020566..b93f931a 100644 --- a/cmd/bd/doctor.go +++ b/cmd/bd/doctor.go @@ -7,7 +7,6 @@ import ( "fmt" "os" "path/filepath" - "slices" "strings" "time" @@ -53,6 +52,7 @@ var ( doctorInteractive bool // bd-3xl: per-fix confirmation mode doctorDryRun bool // bd-a5z: preview fixes without applying doctorOutput string // bd-9cc: export diagnostics to file + doctorVerbose bool // bd-4qfb: show all checks including passed perfMode bool checkHealthMode bool ) @@ -422,6 +422,10 @@ func applyFixList(path string, fixes []doctorCheck) { // No auto-fix: compaction requires agent review fmt.Printf(" ⚠ Run 'bd compact --analyze' to review candidates\n") continue + case "Large Database": + // No auto-fix: pruning deletes data, must be user-controlled + fmt.Printf(" ⚠ Run 'bd cleanup --older-than 90' to prune old closed issues\n") + continue default: fmt.Printf(" ⚠ No automatic fix available for %s\n", check.Name) fmt.Printf(" Manual fix: %s\n", check.Fix) @@ -817,6 +821,12 @@ func runDiagnostics(path string) doctorResult { result.Checks = append(result.Checks, compactionCheck) // Info only, not a warning - compaction requires human review + // Check 29: Database size (pruning suggestion) + // Note: This check has no auto-fix - pruning is destructive and user-controlled + sizeCheck := convertDoctorCheck(doctor.CheckDatabaseSize(path)) + result.Checks = append(result.Checks, sizeCheck) + // Don't fail overall check for size warning, just inform + return result } @@ -858,136 +868,118 @@ func exportDiagnostics(result doctorResult, outputPath string) error { } func printDiagnostics(result doctorResult) { - // Print header with version - fmt.Printf("\nbd doctor v%s\n\n", result.CLIVersion) - - // Group checks by category - checksByCategory := make(map[string][]doctorCheck) - for _, check := range result.Checks { - cat := check.Category - if cat == "" { - cat = "Other" - } - checksByCategory[cat] = append(checksByCategory[cat], check) - } - - // Track counts + // Count checks by 
status and collect into categories var passCount, warnCount, failCount int - var warnings []doctorCheck + var errors, warnings []doctorCheck + passedByCategory := make(map[string][]doctorCheck) - // Print checks by category in defined order - for _, category := range doctor.CategoryOrder { - checks, exists := checksByCategory[category] - if !exists || len(checks) == 0 { - continue + for _, check := range result.Checks { + switch check.Status { + case statusOK: + passCount++ + cat := check.Category + if cat == "" { + cat = "Other" + } + passedByCategory[cat] = append(passedByCategory[cat], check) + case statusWarning: + warnCount++ + warnings = append(warnings, check) + case statusError: + failCount++ + errors = append(errors, check) } - - // Print category header - fmt.Println(ui.RenderCategory(category)) - - // Print each check in this category - for _, check := range checks { - // Determine status icon - var statusIcon string - switch check.Status { - case statusOK: - statusIcon = ui.RenderPassIcon() - passCount++ - case statusWarning: - statusIcon = ui.RenderWarnIcon() - warnCount++ - warnings = append(warnings, check) - case statusError: - statusIcon = ui.RenderFailIcon() - failCount++ - warnings = append(warnings, check) - } - - // Print check line: icon + name + message - fmt.Printf(" %s %s", statusIcon, check.Name) - if check.Message != "" { - fmt.Printf("%s", ui.RenderMuted(" "+check.Message)) - } - fmt.Println() - - // Print detail if present (indented) - if check.Detail != "" { - fmt.Printf(" %s%s\n", ui.MutedStyle.Render(ui.TreeLast), ui.RenderMuted(check.Detail)) - } - } - fmt.Println() } - // Print any checks without a category - if otherChecks, exists := checksByCategory["Other"]; exists && len(otherChecks) > 0 { - fmt.Println(ui.RenderCategory("Other")) - for _, check := range otherChecks { - var statusIcon string - switch check.Status { - case statusOK: - statusIcon = ui.RenderPassIcon() - passCount++ - case statusWarning: - statusIcon = 
ui.RenderWarnIcon() - warnCount++ - warnings = append(warnings, check) - case statusError: - statusIcon = ui.RenderFailIcon() - failCount++ - warnings = append(warnings, check) - } - fmt.Printf(" %s %s", statusIcon, check.Name) - if check.Message != "" { - fmt.Printf("%s", ui.RenderMuted(" "+check.Message)) - } - fmt.Println() + // Print header with version and summary at TOP + fmt.Printf("\nbd doctor v%s\n\n", result.CLIVersion) + fmt.Printf("Summary: %d checks passed, %d warnings, %d errors\n", passCount, warnCount, failCount) + + // Print errors section (always shown if any) + if failCount > 0 { + fmt.Println() + fmt.Println(ui.RenderSeparator()) + fmt.Printf("%s Errors (%d)\n", ui.RenderFailIcon(), failCount) + fmt.Println(ui.RenderSeparator()) + fmt.Println() + + for _, check := range errors { + fmt.Printf("[%s] %s\n", check.Name, check.Message) if check.Detail != "" { - fmt.Printf(" %s%s\n", ui.MutedStyle.Render(ui.TreeLast), ui.RenderMuted(check.Detail)) - } - } - fmt.Println() - } - - // Print summary line - fmt.Println(ui.RenderSeparator()) - summary := fmt.Sprintf("%s %d passed %s %d warnings %s %d failed", - ui.RenderPassIcon(), passCount, - ui.RenderWarnIcon(), warnCount, - ui.RenderFailIcon(), failCount, - ) - fmt.Println(summary) - - // Print warnings/errors section with fixes - if len(warnings) > 0 { - fmt.Println() - fmt.Println(ui.RenderWarn(ui.IconWarn + " WARNINGS")) - - // Sort by severity: errors first, then warnings - slices.SortStableFunc(warnings, func(a, b doctorCheck) int { - // Errors (statusError) come before warnings (statusWarning) - if a.Status == statusError && b.Status != statusError { - return -1 - } - if a.Status != statusError && b.Status == statusError { - return 1 - } - return 0 // maintain original order within same severity - }) - - for i, check := range warnings { - // Show numbered items with icon and color based on status - // Errors get entire line in red, warnings just the number in yellow - line := fmt.Sprintf("%s: %s", 
check.Name, check.Message) - if check.Status == statusError { - fmt.Printf(" %s %s %s\n", ui.RenderFailIcon(), ui.RenderFail(fmt.Sprintf("%d.", i+1)), ui.RenderFail(line)) - } else { - fmt.Printf(" %s %s %s\n", ui.RenderWarnIcon(), ui.RenderWarn(fmt.Sprintf("%d.", i+1)), line) + fmt.Printf(" %s\n", check.Detail) } if check.Fix != "" { - fmt.Printf(" %s%s\n", ui.MutedStyle.Render(ui.TreeLast), check.Fix) + fmt.Printf(" Fix: %s\n", check.Fix) } + fmt.Println() } - } else { + } + + // Print warnings section (always shown if any) + if warnCount > 0 { + fmt.Println(ui.RenderSeparator()) + fmt.Printf("%s Warnings (%d)\n", ui.RenderWarnIcon(), warnCount) + fmt.Println(ui.RenderSeparator()) + fmt.Println() + + for _, check := range warnings { + fmt.Printf("[%s] %s\n", check.Name, check.Message) + if check.Detail != "" { + fmt.Printf(" %s\n", check.Detail) + } + if check.Fix != "" { + fmt.Printf(" Fix: %s\n", check.Fix) + } + fmt.Println() + } + } + + // Print passed section + if passCount > 0 { + fmt.Println(ui.RenderSeparator()) + if doctorVerbose { + // Verbose mode: show all passed checks grouped by category + fmt.Printf("%s Passed (%d)\n", ui.RenderPassIcon(), passCount) + fmt.Println(ui.RenderSeparator()) + fmt.Println() + + for _, category := range doctor.CategoryOrder { + checks, exists := passedByCategory[category] + if !exists || len(checks) == 0 { + continue + } + fmt.Printf(" %s\n", category) + for _, check := range checks { + fmt.Printf(" %s %s", ui.RenderPassIcon(), check.Name) + if check.Message != "" { + fmt.Printf(" %s", ui.RenderMuted(check.Message)) + } + fmt.Println() + } + fmt.Println() + } + + // Print "Other" category if exists + if otherChecks, exists := passedByCategory["Other"]; exists && len(otherChecks) > 0 { + fmt.Printf(" %s\n", "Other") + for _, check := range otherChecks { + fmt.Printf(" %s %s", ui.RenderPassIcon(), check.Name) + if check.Message != "" { + fmt.Printf(" %s", ui.RenderMuted(check.Message)) + } + fmt.Println() + } + 
fmt.Println() + } + } else { + // Default mode: collapsed summary + fmt.Printf("%s Passed (%d) %s\n", ui.RenderPassIcon(), passCount, ui.RenderMuted("[use --verbose to show details]")) + fmt.Println(ui.RenderSeparator()) + } + } + + // Final status message + if failCount == 0 && warnCount == 0 { fmt.Println() fmt.Printf("%s\n", ui.RenderPass("βœ“ All checks passed")) } @@ -998,4 +990,5 @@ func init() { doctorCmd.Flags().BoolVar(&perfMode, "perf", false, "Run performance diagnostics and generate CPU profile") doctorCmd.Flags().BoolVar(&checkHealthMode, "check-health", false, "Quick health check for git hooks (silent on success)") doctorCmd.Flags().StringVarP(&doctorOutput, "output", "o", "", "Export diagnostics to JSON file (bd-9cc)") + doctorCmd.Flags().BoolVarP(&doctorVerbose, "verbose", "v", false, "Show all checks including passed (bd-4qfb)") } diff --git a/cmd/bd/doctor/database.go b/cmd/bd/doctor/database.go index 674a6c17..56782367 100644 --- a/cmd/bd/doctor/database.go +++ b/cmd/bd/doctor/database.go @@ -620,3 +620,92 @@ func isNoDbModeConfigured(beadsDir string) bool { return cfg.NoDb } + +// CheckDatabaseSize warns when the database has accumulated many closed issues. +// This is purely informational - pruning is NEVER auto-fixed because it +// permanently deletes data. Users must explicitly run 'bd cleanup' to prune. +// +// Config: doctor.suggest_pruning_issue_count (default: 5000, 0 = disabled) +// +// DESIGN NOTE: This check intentionally has NO auto-fix. Unlike other doctor +// checks that fix configuration or sync issues, pruning is destructive and +// irreversible. The user must make an explicit decision to delete their +// closed issue history. We only provide guidance, never action. 
+func CheckDatabaseSize(path string) DoctorCheck { + beadsDir := filepath.Join(path, ".beads") + + // Get database path + var dbPath string + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil && cfg.Database != "" { + dbPath = cfg.DatabasePath(beadsDir) + } else { + dbPath = filepath.Join(beadsDir, beads.CanonicalDatabaseName) + } + + // If no database, skip this check + if _, err := os.Stat(dbPath); os.IsNotExist(err) { + return DoctorCheck{ + Name: "Large Database", + Status: StatusOK, + Message: "N/A (no database)", + } + } + + // Read threshold from config (default 5000, 0 = disabled) + threshold := 5000 + db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro&_pragma=busy_timeout(30000)") + if err != nil { + return DoctorCheck{ + Name: "Large Database", + Status: StatusOK, + Message: "N/A (unable to open database)", + } + } + defer db.Close() + + // Check for custom threshold in config table + var thresholdStr string + err = db.QueryRow("SELECT value FROM config WHERE key = ?", "doctor.suggest_pruning_issue_count").Scan(&thresholdStr) + if err == nil { + if _, err := fmt.Sscanf(thresholdStr, "%d", &threshold); err != nil { + threshold = 5000 // Reset to default on parse error + } + } + + // If disabled, return OK + if threshold == 0 { + return DoctorCheck{ + Name: "Large Database", + Status: StatusOK, + Message: "Check disabled (threshold = 0)", + } + } + + // Count closed issues + var closedCount int + err = db.QueryRow("SELECT COUNT(*) FROM issues WHERE status = 'closed'").Scan(&closedCount) + if err != nil { + return DoctorCheck{ + Name: "Large Database", + Status: StatusOK, + Message: "N/A (unable to count issues)", + } + } + + // Check against threshold + if closedCount > threshold { + return DoctorCheck{ + Name: "Large Database", + Status: StatusWarning, + Message: fmt.Sprintf("%d closed issues (threshold: %d)", closedCount, threshold), + Detail: "Large number of closed issues may impact performance", + Fix: "Consider running 'bd 
cleanup --older-than 90' to prune old closed issues", + } + } + + return DoctorCheck{ + Name: "Large Database", + Status: StatusOK, + Message: fmt.Sprintf("%d closed issues (threshold: %d)", closedCount, threshold), + } +} diff --git a/cmd/bd/doctor/git.go b/cmd/bd/doctor/git.go index 99687b7c..ab373ff7 100644 --- a/cmd/bd/doctor/git.go +++ b/cmd/bd/doctor/git.go @@ -145,6 +145,8 @@ func CheckSyncBranchHookCompatibility(path string) DoctorCheck { Status: StatusWarning, Message: "Pre-push hook is not a bd hook", Detail: "Cannot verify sync-branch compatibility with custom hooks", + Fix: "Either run 'bd hooks install --force' to use bd hooks,\n" + + " or ensure your custom hook skips validation when pushing to sync-branch", } } diff --git a/cmd/bd/doctor/legacy.go b/cmd/bd/doctor/legacy.go index 3f5112b9..27b6f985 100644 --- a/cmd/bd/doctor/legacy.go +++ b/cmd/bd/doctor/legacy.go @@ -188,7 +188,7 @@ func CheckLegacyJSONLFilename(repoPath string) DoctorCheck { Detail: "Having multiple JSONL files can cause sync and merge conflicts.\n" + " Only one JSONL file should be used per repository.", Fix: "Determine which file is current and remove the others:\n" + - " 1. Check 'bd stats' to see which file is being used\n" + + " 1. Check .beads/metadata.json for 'jsonl_export' setting\n" + " 2. Verify with 'git log .beads/*.jsonl' to see commit history\n" + " 3. Remove the unused file(s): git rm .beads/.jsonl\n" + " 4. Commit the change", diff --git a/cmd/bd/export_mtime_test.go b/cmd/bd/export_mtime_test.go index fb829e17..df769cc0 100644 --- a/cmd/bd/export_mtime_test.go +++ b/cmd/bd/export_mtime_test.go @@ -65,7 +65,11 @@ func TestExportUpdatesDatabaseMtime(t *testing.T) { } // Update metadata after export (bd-ymj fix) - mockLogger := newTestLogger() + mockLogger := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) 
+ }, + } updateExportMetadata(ctx, store, jsonlPath, mockLogger, "") // Get JSONL mtime @@ -166,7 +170,11 @@ func TestDaemonExportScenario(t *testing.T) { } // Daemon updates metadata after export (bd-ymj fix) - mockLogger := newTestLogger() + mockLogger := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } updateExportMetadata(ctx, store, jsonlPath, mockLogger, "") // THIS IS THE FIX: daemon now calls TouchDatabaseFile after export @@ -241,7 +249,11 @@ func TestMultipleExportCycles(t *testing.T) { } // Update metadata after export (bd-ymj fix) - mockLogger := newTestLogger() + mockLogger := daemonLogger{ + logFunc: func(format string, args ...interface{}) { + t.Logf(format, args...) + }, + } updateExportMetadata(ctx, store, jsonlPath, mockLogger, "") // Apply fix diff --git a/cmd/bd/gate.go b/cmd/bd/gate.go index 40f7937a..6c2667af 100644 --- a/cmd/bd/gate.go +++ b/cmd/bd/gate.go @@ -8,7 +8,6 @@ import ( "time" "github.com/spf13/cobra" - "github.com/steveyegge/beads/internal/rpc" "github.com/steveyegge/beads/internal/storage/sqlite" "github.com/steveyegge/beads/internal/types" "github.com/steveyegge/beads/internal/ui" @@ -106,65 +105,42 @@ Examples: title = fmt.Sprintf("Gate: %s:%s", awaitType, awaitID) } - var gate *types.Issue - - // Try daemon first, fall back to direct store access - if daemonClient != nil { - resp, err := daemonClient.GateCreate(&rpc.GateCreateArgs{ - Title: title, - AwaitType: awaitType, - AwaitID: awaitID, - Timeout: timeout, - Waiters: notifyAddrs, - }) - if err != nil { - FatalError("gate create: %v", err) + // Gate creation requires direct store access + if store == nil { + if daemonClient != nil { + fmt.Fprintf(os.Stderr, "Error: gate create requires direct database access\n") + fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon gate create ...\n") + } else { + fmt.Fprintf(os.Stderr, "Error: no database connection\n") } - - // Parse the gate ID from response and fetch 
full gate - var result rpc.GateCreateResult - if err := json.Unmarshal(resp.Data, &result); err != nil { - FatalError("failed to parse gate create result: %v", err) - } - - // Get the full gate for output - showResp, err := daemonClient.GateShow(&rpc.GateShowArgs{ID: result.ID}) - if err != nil { - FatalError("failed to fetch created gate: %v", err) - } - if err := json.Unmarshal(showResp.Data, &gate); err != nil { - FatalError("failed to parse gate: %v", err) - } - } else if store != nil { - now := time.Now() - gate = &types.Issue{ - // ID will be generated by CreateIssue - Title: title, - IssueType: types.TypeGate, - Status: types.StatusOpen, - Priority: 1, // Gates are typically high priority - Assignee: "deacon/", - Wisp: true, // Gates are wisps (ephemeral) - AwaitType: awaitType, - AwaitID: awaitID, - Timeout: timeout, - Waiters: notifyAddrs, - CreatedAt: now, - UpdatedAt: now, - } - gate.ContentHash = gate.ComputeContentHash() - - if err := store.CreateIssue(ctx, gate, actor); err != nil { - fmt.Fprintf(os.Stderr, "Error creating gate: %v\n", err) - os.Exit(1) - } - - markDirtyAndScheduleFlush() - } else { - fmt.Fprintf(os.Stderr, "Error: no database connection\n") os.Exit(1) } + now := time.Now() + gate := &types.Issue{ + // ID will be generated by CreateIssue + Title: title, + IssueType: types.TypeGate, + Status: types.StatusOpen, + Priority: 1, // Gates are typically high priority + Assignee: "deacon/", + Wisp: true, // Gates are wisps (ephemeral) + AwaitType: awaitType, + AwaitID: awaitID, + Timeout: timeout, + Waiters: notifyAddrs, + CreatedAt: now, + UpdatedAt: now, + } + gate.ContentHash = gate.ComputeContentHash() + + if err := store.CreateIssue(ctx, gate, actor); err != nil { + fmt.Fprintf(os.Stderr, "Error creating gate: %v\n", err) + os.Exit(1) + } + + markDirtyAndScheduleFlush() + if jsonOutput { outputJSON(gate) return @@ -221,39 +197,34 @@ var gateShowCmd = &cobra.Command{ Run: func(cmd *cobra.Command, args []string) { ctx := rootCtx - var gate 
*types.Issue + // Gate show requires direct store access + if store == nil { + if daemonClient != nil { + fmt.Fprintf(os.Stderr, "Error: gate show requires direct database access\n") + fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon gate show %s\n", args[0]) + } else { + fmt.Fprintf(os.Stderr, "Error: no database connection\n") + } + os.Exit(1) + } - // Try daemon first, fall back to direct store access - if daemonClient != nil { - resp, err := daemonClient.GateShow(&rpc.GateShowArgs{ID: args[0]}) - if err != nil { - FatalError("gate show: %v", err) - } - if err := json.Unmarshal(resp.Data, &gate); err != nil { - FatalError("failed to parse gate: %v", err) - } - } else if store != nil { - gateID, err := utils.ResolvePartialID(ctx, store, args[0]) - if err != nil { - fmt.Fprintf(os.Stderr, "Error: %v\n", err) - os.Exit(1) - } + gateID, err := utils.ResolvePartialID(ctx, store, args[0]) + if err != nil { + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) + } - gate, err = store.GetIssue(ctx, gateID) - if err != nil { - fmt.Fprintf(os.Stderr, "Error: %v\n", err) - os.Exit(1) - } - if gate == nil { - fmt.Fprintf(os.Stderr, "Error: gate %s not found\n", gateID) - os.Exit(1) - } - if gate.IssueType != types.TypeGate { - fmt.Fprintf(os.Stderr, "Error: %s is not a gate (type: %s)\n", gateID, gate.IssueType) - os.Exit(1) - } - } else { - fmt.Fprintf(os.Stderr, "Error: no database connection\n") + gate, err := store.GetIssue(ctx, gateID) + if err != nil { + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) + } + if gate == nil { + fmt.Fprintf(os.Stderr, "Error: gate %s not found\n", gateID) + os.Exit(1) + } + if gate.IssueType != types.TypeGate { + fmt.Fprintf(os.Stderr, "Error: %s is not a gate (type: %s)\n", gateID, gate.IssueType) os.Exit(1) } @@ -292,36 +263,30 @@ var gateListCmd = &cobra.Command{ ctx := rootCtx showAll, _ := cmd.Flags().GetBool("all") - var issues []*types.Issue + // Gate list requires direct store access + if store == 
nil { + if daemonClient != nil { + fmt.Fprintf(os.Stderr, "Error: gate list requires direct database access\n") + fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon gate list\n") + } else { + fmt.Fprintf(os.Stderr, "Error: no database connection\n") + } + os.Exit(1) + } - // Try daemon first, fall back to direct store access - if daemonClient != nil { - resp, err := daemonClient.GateList(&rpc.GateListArgs{All: showAll}) - if err != nil { - FatalError("gate list: %v", err) - } - if err := json.Unmarshal(resp.Data, &issues); err != nil { - FatalError("failed to parse gates: %v", err) - } - } else if store != nil { - // Build filter for gates - gateType := types.TypeGate - filter := types.IssueFilter{ - IssueType: &gateType, - } - if !showAll { - openStatus := types.StatusOpen - filter.Status = &openStatus - } + // Build filter for gates + gateType := types.TypeGate + filter := types.IssueFilter{ + IssueType: &gateType, + } + if !showAll { + openStatus := types.StatusOpen + filter.Status = &openStatus + } - var err error - issues, err = store.SearchIssues(ctx, "", filter) - if err != nil { - fmt.Fprintf(os.Stderr, "Error listing gates: %v\n", err) - os.Exit(1) - } - } else { - fmt.Fprintf(os.Stderr, "Error: no database connection\n") + issues, err := store.SearchIssues(ctx, "", filter) + if err != nil { + fmt.Fprintf(os.Stderr, "Error listing gates: %v\n", err) os.Exit(1) } @@ -373,58 +338,47 @@ var gateCloseCmd = &cobra.Command{ reason = "Gate closed" } - var closedGate *types.Issue - var gateID string - - // Try daemon first, fall back to direct store access - if daemonClient != nil { - resp, err := daemonClient.GateClose(&rpc.GateCloseArgs{ - ID: args[0], - Reason: reason, - }) - if err != nil { - FatalError("gate close: %v", err) + // Gate close requires direct store access + if store == nil { + if daemonClient != nil { + fmt.Fprintf(os.Stderr, "Error: gate close requires direct database access\n") + fmt.Fprintf(os.Stderr, "Hint: use --no-daemon 
flag: bd --no-daemon gate close %s\n", args[0]) + } else { + fmt.Fprintf(os.Stderr, "Error: no database connection\n") } - if err := json.Unmarshal(resp.Data, &closedGate); err != nil { - FatalError("failed to parse gate: %v", err) - } - gateID = closedGate.ID - } else if store != nil { - var err error - gateID, err = utils.ResolvePartialID(ctx, store, args[0]) - if err != nil { - fmt.Fprintf(os.Stderr, "Error: %v\n", err) - os.Exit(1) - } - - // Verify it's a gate - gate, err := store.GetIssue(ctx, gateID) - if err != nil { - fmt.Fprintf(os.Stderr, "Error: %v\n", err) - os.Exit(1) - } - if gate == nil { - fmt.Fprintf(os.Stderr, "Error: gate %s not found\n", gateID) - os.Exit(1) - } - if gate.IssueType != types.TypeGate { - fmt.Fprintf(os.Stderr, "Error: %s is not a gate (type: %s)\n", gateID, gate.IssueType) - os.Exit(1) - } - - if err := store.CloseIssue(ctx, gateID, reason, actor); err != nil { - fmt.Fprintf(os.Stderr, "Error closing gate: %v\n", err) - os.Exit(1) - } - - markDirtyAndScheduleFlush() - closedGate, _ = store.GetIssue(ctx, gateID) - } else { - fmt.Fprintf(os.Stderr, "Error: no database connection\n") os.Exit(1) } + gateID, err := utils.ResolvePartialID(ctx, store, args[0]) + if err != nil { + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) + } + + // Verify it's a gate + gate, err := store.GetIssue(ctx, gateID) + if err != nil { + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) + } + if gate == nil { + fmt.Fprintf(os.Stderr, "Error: gate %s not found\n", gateID) + os.Exit(1) + } + if gate.IssueType != types.TypeGate { + fmt.Fprintf(os.Stderr, "Error: %s is not a gate (type: %s)\n", gateID, gate.IssueType) + os.Exit(1) + } + + if err := store.CloseIssue(ctx, gateID, reason, actor); err != nil { + fmt.Fprintf(os.Stderr, "Error closing gate: %v\n", err) + os.Exit(1) + } + + markDirtyAndScheduleFlush() + if jsonOutput { + closedGate, _ := store.GetIssue(ctx, gateID) outputJSON(closedGate) return } @@ -448,116 +402,87 @@ var gateWaitCmd 
= &cobra.Command{
 			os.Exit(1)
 		}
-		var addedCount int
-		var gateID string
-		var newWaiters []string
-
-		// Try daemon first, fall back to direct store access
-		if daemonClient != nil {
-			resp, err := daemonClient.GateWait(&rpc.GateWaitArgs{
-				ID:      args[0],
-				Waiters: notifyAddrs,
-			})
-			if err != nil {
-				FatalError("gate wait: %v", err)
+		// Gate wait requires direct store access for now
+		if store == nil {
+			if daemonClient != nil {
+				fmt.Fprintf(os.Stderr, "Error: gate wait requires direct database access\n")
+				fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon gate wait %s --notify ...\n", args[0])
+			} else {
+				fmt.Fprintf(os.Stderr, "Error: no database connection\n")
 			}
-			var result rpc.GateWaitResult
-			if err := json.Unmarshal(resp.Data, &result); err != nil {
-				FatalError("failed to parse gate wait result: %v", err)
-			}
-			addedCount = result.AddedCount
-			gateID = args[0] // Use the input ID for display
-			// For daemon mode, we don't know exactly which waiters were added
-			// Just report the count
-			newWaiters = nil
-		} else if store != nil {
-			var err error
-			gateID, err = utils.ResolvePartialID(ctx, store, args[0])
-			if err != nil {
-				fmt.Fprintf(os.Stderr, "Error: %v\n", err)
-				os.Exit(1)
-			}
-
-			// Get existing gate
-			gate, err := store.GetIssue(ctx, gateID)
-			if err != nil {
-				fmt.Fprintf(os.Stderr, "Error: %v\n", err)
-				os.Exit(1)
-			}
-			if gate == nil {
-				fmt.Fprintf(os.Stderr, "Error: gate %s not found\n", gateID)
-				os.Exit(1)
-			}
-			if gate.IssueType != types.TypeGate {
-				fmt.Fprintf(os.Stderr, "Error: %s is not a gate (type: %s)\n", gateID, gate.IssueType)
-				os.Exit(1)
-			}
-			if gate.Status == types.StatusClosed {
-				fmt.Fprintf(os.Stderr, "Error: gate %s is already closed\n", gateID)
-				os.Exit(1)
-			}
-
-			// Add new waiters (avoiding duplicates)
-			waiterSet := make(map[string]bool)
-			for _, w := range gate.Waiters {
-				waiterSet[w] = true
-			}
-			for _, addr := range notifyAddrs {
-				if !waiterSet[addr] {
-					newWaiters = append(newWaiters, addr)
-					waiterSet[addr] = true
-				}
-			}
-
-			addedCount = len(newWaiters)
-
-			if addedCount == 0 {
-				fmt.Println("All specified waiters are already registered on this gate")
-				return
-			}
-
-			// Update waiters - need to use SQLite directly for Waiters field
-			sqliteStore, ok := store.(*sqlite.SQLiteStorage)
-			if !ok {
-				fmt.Fprintf(os.Stderr, "Error: gate wait requires SQLite storage\n")
-				os.Exit(1)
-			}
-
-			allWaiters := append(gate.Waiters, newWaiters...)
-			waitersJSON, _ := json.Marshal(allWaiters)
-
-			// Use raw SQL to update the waiters field
-			_, err = sqliteStore.UnderlyingDB().ExecContext(ctx, `UPDATE issues SET waiters = ?, updated_at = ? WHERE id = ?`,
-				string(waitersJSON), time.Now(), gateID)
-			if err != nil {
-				fmt.Fprintf(os.Stderr, "Error adding waiters: %v\n", err)
-				os.Exit(1)
-			}
-
-			markDirtyAndScheduleFlush()
-
-			if jsonOutput {
-				updatedGate, _ := store.GetIssue(ctx, gateID)
-				outputJSON(updatedGate)
-				return
-			}
-		} else {
-			fmt.Fprintf(os.Stderr, "Error: no database connection\n")
 			os.Exit(1)
 		}
 
-		if addedCount == 0 {
+		gateID, err := utils.ResolvePartialID(ctx, store, args[0])
+		if err != nil {
+			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
+			os.Exit(1)
+		}
+
+		// Get existing gate
+		gate, err := store.GetIssue(ctx, gateID)
+		if err != nil {
+			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
+			os.Exit(1)
+		}
+		if gate == nil {
+			fmt.Fprintf(os.Stderr, "Error: gate %s not found\n", gateID)
+			os.Exit(1)
+		}
+		if gate.IssueType != types.TypeGate {
+			fmt.Fprintf(os.Stderr, "Error: %s is not a gate (type: %s)\n", gateID, gate.IssueType)
+			os.Exit(1)
+		}
+		if gate.Status == types.StatusClosed {
+			fmt.Fprintf(os.Stderr, "Error: gate %s is already closed\n", gateID)
+			os.Exit(1)
+		}
+
+		// Add new waiters (avoiding duplicates)
+		waiterSet := make(map[string]bool)
+		for _, w := range gate.Waiters {
+			waiterSet[w] = true
+		}
+		newWaiters := []string{}
+		for _, addr := range notifyAddrs {
+			if !waiterSet[addr] {
+				newWaiters = append(newWaiters, addr)
+				waiterSet[addr] = true
+			}
+		}
+
+		if len(newWaiters) == 0 {
 			fmt.Println("All specified waiters are already registered on this gate")
 			return
 		}
 
+		// Update waiters - need to use SQLite directly for Waiters field
+		sqliteStore, ok := store.(*sqlite.SQLiteStorage)
+		if !ok {
+			fmt.Fprintf(os.Stderr, "Error: gate wait requires SQLite storage\n")
+			os.Exit(1)
+		}
+
+		allWaiters := append(gate.Waiters, newWaiters...)
+		waitersJSON, _ := json.Marshal(allWaiters)
+
+		// Use raw SQL to update the waiters field
+		_, err = sqliteStore.UnderlyingDB().ExecContext(ctx, `UPDATE issues SET waiters = ?, updated_at = ? WHERE id = ?`,
+			string(waitersJSON), time.Now(), gateID)
+		if err != nil {
+			fmt.Fprintf(os.Stderr, "Error adding waiters: %v\n", err)
+			os.Exit(1)
+		}
+
+		markDirtyAndScheduleFlush()
+
 		if jsonOutput {
-			// For daemon mode, output the result
-			outputJSON(map[string]interface{}{"added_count": addedCount, "gate_id": gateID})
+			updatedGate, _ := store.GetIssue(ctx, gateID)
+			outputJSON(updatedGate)
 			return
 		}
 
-		fmt.Printf("%s Added %d waiter(s) to gate %s\n", ui.RenderPass("βœ“"), addedCount, gateID)
+		fmt.Printf("%s Added waiter(s) to gate %s:\n", ui.RenderPass("βœ“"), gateID)
 		for _, addr := range newWaiters {
 			fmt.Printf("  + %s\n", addr)
 		}
diff --git a/cmd/bd/import_multipart_id_test.go b/cmd/bd/import_multipart_id_test.go
index 0a1c51a4..edf0584a 100644
--- a/cmd/bd/import_multipart_id_test.go
+++ b/cmd/bd/import_multipart_id_test.go
@@ -84,92 +84,6 @@ func TestImportMultiPartIDs(t *testing.T) {
 	}
 }
 
-// TestImportMultiHyphenPrefix tests GH#422: importing with multi-hyphen prefixes
-// like "asianops-audit-" should not cause false positive prefix mismatch errors.
-func TestImportMultiHyphenPrefix(t *testing.T) {
-	tmpDir := t.TempDir()
-	dbPath := filepath.Join(tmpDir, ".beads", "beads.db")
-
-	// Create database with multi-hyphen prefix "asianops-audit"
-	st := newTestStoreWithPrefix(t, dbPath, "asianops-audit")
-
-	ctx := context.Background()
-
-	// Create issues with hash-like suffixes that could be mistaken for words
-	// The key is that "test", "task", "demo" look like English words (4+ chars, no digits)
-	// which previously caused ExtractIssuePrefix to fall back to first hyphen
-	issues := []*types.Issue{
-		{
-			ID:          "asianops-audit-sa0",
-			Title:       "Issue with short hash suffix",
-			Description: "Short hash suffix should work",
-			Status:      "open",
-			Priority:    1,
-			IssueType:   "task",
-		},
-		{
-			ID:          "asianops-audit-test",
-			Title:       "Issue with word-like suffix",
-			Description: "Word-like suffix 'test' was causing false positive",
-			Status:      "open",
-			Priority:    1,
-			IssueType:   "task",
-		},
-		{
-			ID:          "asianops-audit-task",
-			Title:       "Another word-like suffix",
-			Description: "Word-like suffix 'task' was also problematic",
-			Status:      "open",
-			Priority:    1,
-			IssueType:   "task",
-		},
-		{
-			ID:          "asianops-audit-demo",
-			Title:       "Demo issue",
-			Description: "Word-like suffix 'demo'",
-			Status:      "open",
-			Priority:    1,
-			IssueType:   "task",
-		},
-	}
-
-	// Import should succeed without prefix mismatch errors
-	opts := ImportOptions{
-		DryRun:     false,
-		SkipUpdate: false,
-		Strict:     false,
-	}
-
-	result, err := importIssuesCore(ctx, dbPath, st, issues, opts)
-	if err != nil {
-		t.Fatalf("Import failed: %v", err)
-	}
-
-	// GH#422: Should NOT detect prefix mismatch
-	if result.PrefixMismatch {
-		t.Errorf("Import incorrectly detected prefix mismatch for multi-hyphen prefix")
-		t.Logf("Expected prefix: asianops-audit")
-		t.Logf("Mismatched prefixes detected: %v", result.MismatchPrefixes)
-	}
-
-	// All issues should be created
-	if result.Created != 4 {
-		t.Errorf("Expected 4 issues created, got %d", result.Created)
-	}
-
-	// Verify issues exist in database
-	for _, issue := range issues {
-		dbIssue, err := st.GetIssue(ctx, issue.ID)
-		if err != nil {
-			t.Errorf("Failed to get issue %s: %v", issue.ID, err)
-			continue
-		}
-		if dbIssue.Title != issue.Title {
-			t.Errorf("Issue %s title mismatch: got %q, want %q", issue.ID, dbIssue.Title, issue.Title)
-		}
-	}
-}
-
 // TestDetectPrefixFromIssues tests the detectPrefixFromIssues function
 // with multi-part IDs
 func TestDetectPrefixFromIssues(t *testing.T) {
diff --git a/cmd/bd/init.go b/cmd/bd/init.go
index c3260f59..5725f04e 100644
--- a/cmd/bd/init.go
+++ b/cmd/bd/init.go
@@ -33,8 +33,8 @@ and database file. Optionally specify a custom issue prefix.
 With --no-db: creates .beads/ directory and issues.jsonl file instead of
 SQLite database.
 
-With --stealth: configures per-repository git settings for invisible beads usage:
-  β€’ .git/info/exclude to prevent beads files from being committed
+With --stealth: configures global git settings for invisible beads usage:
+  β€’ Global gitignore to prevent beads files from being committed
   β€’ Claude Code settings with bd onboard instruction
 Perfect for personal use without affecting repo collaborators.`,
 	Run: func(cmd *cobra.Command, _ []string) {
@@ -1364,15 +1364,22 @@ func readFirstIssueFromGit(jsonlPath, gitRef string) (*types.Issue, error) {
 	return nil, nil
 }
 
-// setupStealthMode configures git settings for stealth operation
-// Uses .git/info/exclude (per-repository) instead of global gitignore because:
-// - Global gitignore doesn't support absolute paths (GitHub #704)
-// - .git/info/exclude is designed for user-specific, repo-local ignores
-// - Patterns are relative to repo root, so ".beads/" works correctly
+// setupStealthMode configures global git settings for stealth operation
 func setupStealthMode(verbose bool) error {
-	// Setup per-repository git exclude file
-	if err := setupGitExclude(verbose); err != nil {
-		return fmt.Errorf("failed to setup git exclude: %w", err)
+	homeDir, err := os.UserHomeDir()
+	if err != nil {
+		return fmt.Errorf("failed to get user home directory: %w", err)
+	}
+
+	// Get the absolute path of the current project
+	projectPath, err := os.Getwd()
+	if err != nil {
+		return fmt.Errorf("failed to get current working directory: %w", err)
+	}
+
+	// Setup global gitignore with project-specific paths
+	if err := setupGlobalGitIgnore(homeDir, projectPath, verbose); err != nil {
+		return fmt.Errorf("failed to setup global gitignore: %w", err)
 	}
 
 	// Setup claude settings
@@ -1382,7 +1389,7 @@ func setupStealthMode(verbose bool) error {
 
 	if verbose {
 		fmt.Printf("\n%s Stealth mode configured successfully!\n\n", ui.RenderPass("βœ“"))
-		fmt.Printf("  Git exclude: %s\n", ui.RenderAccent(".git/info/exclude configured"))
+		fmt.Printf("  Global gitignore: %s\n", ui.RenderAccent(projectPath+"/.beads/ ignored"))
 		fmt.Printf("  Claude settings: %s\n\n", ui.RenderAccent("configured with bd onboard instruction"))
 		fmt.Printf("Your beads setup is now %s - other repo collaborators won't see any beads-related files.\n\n", ui.RenderAccent("invisible"))
 	}
@@ -1390,80 +1397,7 @@ func setupStealthMode(verbose bool) error {
 	return nil
 }
 
-// setupGitExclude configures .git/info/exclude to ignore beads and claude files
-// This is the correct approach for per-repository user-specific ignores (GitHub #704).
-// Unlike global gitignore, patterns here are relative to the repo root.
-func setupGitExclude(verbose bool) error {
-	// Find the .git directory (handles both regular repos and worktrees)
-	gitDir, err := exec.Command("git", "rev-parse", "--git-dir").Output()
-	if err != nil {
-		return fmt.Errorf("not a git repository")
-	}
-	gitDirPath := strings.TrimSpace(string(gitDir))
-
-	// Path to the exclude file
-	excludePath := filepath.Join(gitDirPath, "info", "exclude")
-
-	// Ensure the info directory exists
-	infoDir := filepath.Join(gitDirPath, "info")
-	if err := os.MkdirAll(infoDir, 0755); err != nil {
-		return fmt.Errorf("failed to create git info directory: %w", err)
-	}
-
-	// Read existing exclude file if it exists
-	var existingContent string
-	// #nosec G304 - git config path
-	if content, err := os.ReadFile(excludePath); err == nil {
-		existingContent = string(content)
-	}
-
-	// Use relative patterns (these work correctly in .git/info/exclude)
-	beadsPattern := ".beads/"
-	claudePattern := ".claude/settings.local.json"
-
-	hasBeads := strings.Contains(existingContent, beadsPattern)
-	hasClaude := strings.Contains(existingContent, claudePattern)
-
-	if hasBeads && hasClaude {
-		if verbose {
-			fmt.Printf("Git exclude already configured for stealth mode\n")
-		}
-		return nil
-	}
-
-	// Append missing patterns
-	newContent := existingContent
-	if !strings.HasSuffix(newContent, "\n") && len(newContent) > 0 {
-		newContent += "\n"
-	}
-
-	if !hasBeads || !hasClaude {
-		newContent += "\n# Beads stealth mode (added by bd init --stealth)\n"
-	}
-
-	if !hasBeads {
-		newContent += beadsPattern + "\n"
-	}
-	if !hasClaude {
-		newContent += claudePattern + "\n"
-	}
-
-	// Write the updated exclude file
-	// #nosec G306 - config file needs 0644
-	if err := os.WriteFile(excludePath, []byte(newContent), 0644); err != nil {
-		return fmt.Errorf("failed to write git exclude file: %w", err)
-	}
-
-	if verbose {
-		fmt.Printf("Configured git exclude for stealth mode: %s\n", excludePath)
-	}
-
-	return nil
-}
-
 // setupGlobalGitIgnore configures global gitignore to ignore beads and claude files for a specific project
-// DEPRECATED: This function uses absolute paths which don't work in gitignore (GitHub #704).
-// Use setupGitExclude instead for new code.
 func setupGlobalGitIgnore(homeDir string, projectPath string, verbose bool) error {
 	// Check if user already has a global gitignore file configured
 	cmd := exec.Command("git", "config", "--global", "core.excludesfile")
diff --git a/cmd/bd/migrate.go b/cmd/bd/migrate.go
index 24d30ad0..e06470f2 100644
--- a/cmd/bd/migrate.go
+++ b/cmd/bd/migrate.go
@@ -74,10 +74,11 @@ This command:
 				"error":   "no_beads_directory",
 				"message": "No .beads directory found. Run 'bd init' first.",
 			})
-			os.Exit(1)
 		} else {
-			FatalErrorWithHint("no .beads directory found", "run 'bd init' to initialize bd")
+			fmt.Fprintf(os.Stderr, "Error: no .beads directory found\n")
+			fmt.Fprintf(os.Stderr, "Hint: run 'bd init' to initialize bd\n")
 		}
+		os.Exit(1)
 	}
 
 	// Load config to get target database name (respects user's config.json)
@@ -102,10 +103,10 @@ This command:
 				"error":   "detection_failed",
 				"message": err.Error(),
 			})
-			os.Exit(1)
 		} else {
-			FatalError("%v", err)
+			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
 		}
+		os.Exit(1)
 	}
 
 	if len(databases) == 0 {
@@ -173,15 +174,14 @@ This command:
 				"message":   "Multiple old database files found",
 				"databases": formatDBList(oldDBs),
 			})
-			os.Exit(1)
 		} else {
 			fmt.Fprintf(os.Stderr, "Error: multiple old database files found:\n")
 			for _, db := range oldDBs {
 				fmt.Fprintf(os.Stderr, "  - %s (version: %s)\n", filepath.Base(db.path), db.version)
 			}
 			fmt.Fprintf(os.Stderr, "\nPlease manually rename the correct database to %s and remove others.\n", cfg.Database)
-			os.Exit(1)
 		}
+		os.Exit(1)
 	} else if currentDB != nil && currentDB.version != Version {
 		// Update version metadata
 		needsVersionUpdate = true
diff --git a/cmd/bd/mol_bond.go b/cmd/bd/mol_bond.go
index 070e3c7a..44417ba2 100644
--- a/cmd/bd/mol_bond.go
+++ b/cmd/bd/mol_bond.go
@@ -227,9 +227,9 @@ func runMolBond(cmd *cobra.Command, args []string) {
 		// Compound protos are templates - always use permanent storage
 		result, err = bondProtoProto(ctx, store, issueA, issueB, bondType, customTitle, actor)
 	case aIsProto && !bIsProto:
-		result, err = bondProtoMol(ctx, targetStore, issueA, issueB, bondType, vars, childRef, actor, pour)
+		result, err = bondProtoMol(ctx, targetStore, issueA, issueB, bondType, vars, childRef, actor)
 	case !aIsProto && bIsProto:
-		result, err = bondMolProto(ctx, targetStore, issueA, issueB, bondType, vars, childRef, actor, pour)
+		result, err = bondMolProto(ctx, targetStore, issueA, issueB, bondType, vars, childRef, actor)
 	default:
 		result, err = bondMolMol(ctx, targetStore, issueA, issueB, bondType, actor)
 	}
@@ -366,7 +366,7 @@ func bondProtoProto(ctx context.Context, s storage.Storage, protoA, protoB *type
 
 // bondProtoMol bonds a proto to an existing molecule by spawning the proto.
 // If childRef is provided, generates custom IDs like "parent.childref" (dynamic bonding).
-func bondProtoMol(ctx context.Context, s storage.Storage, proto, mol *types.Issue, bondType string, vars map[string]string, childRef string, actorName string, pour bool) (*BondResult, error) {
+func bondProtoMol(ctx context.Context, s storage.Storage, proto, mol *types.Issue, bondType string, vars map[string]string, childRef string, actorName string) (*BondResult, error) {
 	// Load proto subgraph
 	subgraph, err := loadTemplateSubgraph(ctx, s, proto.ID)
 	if err != nil {
@@ -389,7 +389,7 @@ func bondProtoMol(ctx context.Context, s storage.Storage, proto, mol *types.Issu
 	opts := CloneOptions{
 		Vars:  vars,
 		Actor: actorName,
-		Wisp:  !pour, // wisp by default, but --pour makes persistent (bd-l7y3)
+		Wisp:  true, // wisp by default for molecule execution - bd-2vh3
 	}
 
 	// Dynamic bonding: use custom IDs if childRef is provided
@@ -444,9 +444,9 @@ func bondProtoMol(ctx context.Context, s storage.Storage, proto, mol *types.Issu
 }
 
 // bondMolProto bonds a molecule to a proto (symmetric with bondProtoMol)
-func bondMolProto(ctx context.Context, s storage.Storage, mol, proto *types.Issue, bondType string, vars map[string]string, childRef string, actorName string, pour bool) (*BondResult, error) {
+func bondMolProto(ctx context.Context, s storage.Storage, mol, proto *types.Issue, bondType string, vars map[string]string, childRef string, actorName string) (*BondResult, error) {
 	// Same as bondProtoMol but with arguments swapped
-	return bondProtoMol(ctx, s, proto, mol, bondType, vars, childRef, actorName, pour)
+	return bondProtoMol(ctx, s, proto, mol, bondType, vars, childRef, actorName)
 }
 
 // bondMolMol bonds two molecules together
diff --git a/cmd/bd/mol_run.go b/cmd/bd/mol_run.go
index ec322521..82861b89 100644
--- a/cmd/bd/mol_run.go
+++ b/cmd/bd/mol_run.go
@@ -6,8 +6,6 @@ import (
 	"strings"
 
 	"github.com/spf13/cobra"
-	"github.com/steveyegge/beads/internal/beads"
-	"github.com/steveyegge/beads/internal/storage/sqlite"
 	"github.com/steveyegge/beads/internal/types"
 	"github.com/steveyegge/beads/internal/ui"
 	"github.com/steveyegge/beads/internal/utils"
@@ -27,15 +25,9 @@ This command:
 After a crash or session reset, the pinned root issue ensures the agent can
 resume from where it left off by checking 'bd ready'.
 
-The --template-db flag enables cross-database spawning: read templates from
-one database (e.g., main) while writing spawned instances to another (e.g., wisp).
-This is essential for wisp molecule spawning where templates exist in the main
-database but instances should be ephemeral.
-
 Example: bd mol run mol-version-bump --var version=1.2.0
-         bd mol run bd-qqc --var version=0.32.0 --var date=2025-01-01
-         bd --db .beads-wisp/beads.db mol run mol-patrol --template-db .beads/beads.db`,
+         bd mol run bd-qqc --var version=0.32.0 --var date=2025-01-01`,
 	Args: cobra.ExactArgs(1),
 	Run:  runMolRun,
 }
@@ -57,7 +49,6 @@ func runMolRun(cmd *cobra.Command, args []string) {
 	}
 
 	varFlags, _ := cmd.Flags().GetStringSlice("var")
-	templateDB, _ := cmd.Flags().GetString("template-db")
 
 	// Parse variables
 	vars := make(map[string]string)
@@ -70,42 +61,15 @@ func runMolRun(cmd *cobra.Command, args []string) {
 		vars[parts[0]] = parts[1]
 	}
 
-	// Determine which store to use for reading the template
-	// If --template-db is set, open a separate connection for reading the template
-	// This enables cross-database spawning (read from main, write to wisp)
-	//
-	// Auto-discovery: if --db contains ".beads-wisp" (wisp storage) but --template-db
-	// is not set, automatically use the main database for templates. This handles the
-	// common case of spawning patrol molecules from main DB into wisp storage.
- templateStore := store - if templateDB == "" && strings.Contains(dbPath, ".beads-wisp") { - // Auto-discover main database for templates - templateDB = beads.FindDatabasePath() - if templateDB == "" { - fmt.Fprintf(os.Stderr, "Error: cannot find main database for templates\n") - fmt.Fprintf(os.Stderr, "Hint: specify --template-db explicitly\n") - os.Exit(1) - } - } - if templateDB != "" { - var err error - templateStore, err = sqlite.NewWithTimeout(ctx, templateDB, lockTimeout) - if err != nil { - fmt.Fprintf(os.Stderr, "Error opening template database %s: %v\n", templateDB, err) - os.Exit(1) - } - defer templateStore.Close() - } - - // Resolve molecule ID from template store - moleculeID, err := utils.ResolvePartialID(ctx, templateStore, args[0]) + // Resolve molecule ID + moleculeID, err := utils.ResolvePartialID(ctx, store, args[0]) if err != nil { fmt.Fprintf(os.Stderr, "Error resolving molecule ID %s: %v\n", args[0], err) os.Exit(1) } - // Load the molecule subgraph from template store - subgraph, err := loadTemplateSubgraph(ctx, templateStore, moleculeID) + // Load the molecule subgraph + subgraph, err := loadTemplateSubgraph(ctx, store, moleculeID) if err != nil { fmt.Fprintf(os.Stderr, "Error loading molecule: %v\n", err) os.Exit(1) @@ -168,7 +132,6 @@ func runMolRun(cmd *cobra.Command, args []string) { func init() { molRunCmd.Flags().StringSlice("var", []string{}, "Variable substitution (key=value)") - molRunCmd.Flags().String("template-db", "", "Database to read templates from (enables cross-database spawning)") molCmd.AddCommand(molRunCmd) } diff --git a/cmd/bd/mol_spawn.go b/cmd/bd/mol_spawn.go index ae5a53e5..ef997f86 100644 --- a/cmd/bd/mol_spawn.go +++ b/cmd/bd/mol_spawn.go @@ -219,7 +219,7 @@ func runMolSpawn(cmd *cobra.Command, args []string) { } for _, attach := range attachments { - bondResult, err := bondProtoMol(ctx, store, attach.issue, spawnedMol, attachType, vars, "", actor, pour) + bondResult, err := bondProtoMol(ctx, store, attach.issue, 
spawnedMol, attachType, vars, "", actor) if err != nil { fmt.Fprintf(os.Stderr, "Error attaching %s: %v\n", attach.id, err) os.Exit(1) diff --git a/cmd/bd/mol_test.go b/cmd/bd/mol_test.go index 8c1e021b..2c962d19 100644 --- a/cmd/bd/mol_test.go +++ b/cmd/bd/mol_test.go @@ -343,7 +343,7 @@ func TestBondProtoMol(t *testing.T) { // Bond proto to molecule vars := map[string]string{"name": "auth-feature"} - result, err := bondProtoMol(ctx, store, proto, mol, types.BondTypeSequential, vars, "", "test", false) + result, err := bondProtoMol(ctx, store, proto, mol, types.BondTypeSequential, vars, "", "test") if err != nil { t.Fatalf("bondProtoMol failed: %v", err) } @@ -840,7 +840,7 @@ func TestSpawnWithBasicAttach(t *testing.T) { } // Attach the second proto (simulating --attach flag behavior) - bondResult, err := bondProtoMol(ctx, s, attachProto, spawnedMol, types.BondTypeSequential, vars, "", "test", false) + bondResult, err := bondProtoMol(ctx, s, attachProto, spawnedMol, types.BondTypeSequential, vars, "", "test") if err != nil { t.Fatalf("Failed to bond attachment: %v", err) } @@ -945,12 +945,12 @@ func TestSpawnWithMultipleAttachments(t *testing.T) { } // Attach both protos (simulating --attach A --attach B) - bondResultA, err := bondProtoMol(ctx, s, attachA, spawnedMol, types.BondTypeSequential, nil, "", "test", false) + bondResultA, err := bondProtoMol(ctx, s, attachA, spawnedMol, types.BondTypeSequential, nil, "", "test") if err != nil { t.Fatalf("Failed to bond attachA: %v", err) } - bondResultB, err := bondProtoMol(ctx, s, attachB, spawnedMol, types.BondTypeSequential, nil, "", "test", false) + bondResultB, err := bondProtoMol(ctx, s, attachB, spawnedMol, types.BondTypeSequential, nil, "", "test") if err != nil { t.Fatalf("Failed to bond attachB: %v", err) } @@ -1063,7 +1063,7 @@ func TestSpawnAttachTypes(t *testing.T) { } // Bond with specified type - bondResult, err := bondProtoMol(ctx, s, attachProto, spawnedMol, tt.bondType, nil, "", "test", false) + 
bondResult, err := bondProtoMol(ctx, s, attachProto, spawnedMol, tt.bondType, nil, "", "test") if err != nil { t.Fatalf("Failed to bond: %v", err) } @@ -1228,7 +1228,7 @@ func TestSpawnVariableAggregation(t *testing.T) { // Bond attachment with same variables spawnedMol, _ := s.GetIssue(ctx, spawnResult.NewEpicID) - bondResult, err := bondProtoMol(ctx, s, attachProto, spawnedMol, types.BondTypeSequential, vars, "", "test", false) + bondResult, err := bondProtoMol(ctx, s, attachProto, spawnedMol, types.BondTypeSequential, vars, "", "test") if err != nil { t.Fatalf("Failed to bond: %v", err) } @@ -2238,7 +2238,7 @@ func TestBondProtoMolWithRef(t *testing.T) { // Bond proto to patrol with custom child ref vars := map[string]string{"polecat_name": "ace"} childRef := "arm-{{polecat_name}}" - result, err := bondProtoMol(ctx, s, protoRoot, patrol, types.BondTypeSequential, vars, childRef, "test", false) + result, err := bondProtoMol(ctx, s, protoRoot, patrol, types.BondTypeSequential, vars, childRef, "test") if err != nil { t.Fatalf("bondProtoMol failed: %v", err) } @@ -2309,14 +2309,14 @@ func TestBondProtoMolMultipleArms(t *testing.T) { // Bond arm-ace varsAce := map[string]string{"name": "ace"} - resultAce, err := bondProtoMol(ctx, s, proto, patrol, types.BondTypeParallel, varsAce, "arm-{{name}}", "test", false) + resultAce, err := bondProtoMol(ctx, s, proto, patrol, types.BondTypeParallel, varsAce, "arm-{{name}}", "test") if err != nil { t.Fatalf("bondProtoMol (ace) failed: %v", err) } // Bond arm-nux varsNux := map[string]string{"name": "nux"} - resultNux, err := bondProtoMol(ctx, s, proto, patrol, types.BondTypeParallel, varsNux, "arm-{{name}}", "test", false) + resultNux, err := bondProtoMol(ctx, s, proto, patrol, types.BondTypeParallel, varsNux, "arm-{{name}}", "test") if err != nil { t.Fatalf("bondProtoMol (nux) failed: %v", err) } diff --git a/cmd/bd/pour.go b/cmd/bd/pour.go index d4684eb3..2665a47a 100644 --- a/cmd/bd/pour.go +++ b/cmd/bd/pour.go @@ -200,7 
+200,7 @@ func runPour(cmd *cobra.Command, args []string) { } for _, attach := range attachments { - bondResult, err := bondProtoMol(ctx, store, attach.issue, spawnedMol, attachType, vars, "", actor, true) + bondResult, err := bondProtoMol(ctx, store, attach.issue, spawnedMol, attachType, vars, "", actor) if err != nil { fmt.Fprintf(os.Stderr, "Error attaching %s: %v\n", attach.id, err) os.Exit(1) diff --git a/cmd/bd/search.go b/cmd/bd/search.go index c078b92d..ac9c117d 100644 --- a/cmd/bd/search.go +++ b/cmd/bd/search.go @@ -26,9 +26,14 @@ Examples: bd search "database" --label backend --limit 10 bd search --query "performance" --assignee alice bd search "bd-5q" # Search by partial ID - bd search "security" --priority-min 0 --priority-max 2 + bd search "security" --priority 1 # Exact priority match + bd search "security" --priority-min 0 --priority-max 2 # Priority range bd search "bug" --created-after 2025-01-01 bd search "refactor" --updated-after 2025-01-01 --priority-min 1 + bd search "bug" --desc-contains "authentication" # Search in description + bd search "" --empty-description # Issues without description + bd search "" --no-assignee # Unassigned issues + bd search "" --no-labels # Issues without labels bd search "bug" --sort priority bd search "task" --sort created --reverse`, Run: func(cmd *cobra.Command, args []string) { @@ -41,9 +46,31 @@ Examples: query = queryFlag } - // If no query provided, show help - if query == "" { - fmt.Fprintf(os.Stderr, "Error: search query is required\n") + // Check if any filter flags are set (allows empty query with filters) + hasFilters := cmd.Flags().Changed("status") || + cmd.Flags().Changed("priority") || + cmd.Flags().Changed("assignee") || + cmd.Flags().Changed("type") || + cmd.Flags().Changed("label") || + cmd.Flags().Changed("label-any") || + cmd.Flags().Changed("created-after") || + cmd.Flags().Changed("created-before") || + cmd.Flags().Changed("updated-after") || + cmd.Flags().Changed("updated-before") || + 
cmd.Flags().Changed("closed-after") || + cmd.Flags().Changed("closed-before") || + cmd.Flags().Changed("priority-min") || + cmd.Flags().Changed("priority-max") || + cmd.Flags().Changed("title-contains") || + cmd.Flags().Changed("desc-contains") || + cmd.Flags().Changed("notes-contains") || + cmd.Flags().Changed("empty-description") || + cmd.Flags().Changed("no-assignee") || + cmd.Flags().Changed("no-labels") + + // If no query and no filters provided, show help + if query == "" && !hasFilters { + fmt.Fprintf(os.Stderr, "Error: search query or filter is required\n") if err := cmd.Help(); err != nil { fmt.Fprintf(os.Stderr, "Error displaying help: %v\n", err) } @@ -61,6 +88,11 @@ Examples: sortBy, _ := cmd.Flags().GetString("sort") reverse, _ := cmd.Flags().GetBool("reverse") + // Pattern matching flags + titleContains, _ := cmd.Flags().GetString("title-contains") + descContains, _ := cmd.Flags().GetString("desc-contains") + notesContains, _ := cmd.Flags().GetString("notes-contains") + // Date range flags createdAfter, _ := cmd.Flags().GetString("created-after") createdBefore, _ := cmd.Flags().GetString("created-before") @@ -69,6 +101,11 @@ Examples: closedAfter, _ := cmd.Flags().GetString("closed-after") closedBefore, _ := cmd.Flags().GetString("closed-before") + // Empty/null check flags + emptyDesc, _ := cmd.Flags().GetBool("empty-description") + noAssignee, _ := cmd.Flags().GetBool("no-assignee") + noLabels, _ := cmd.Flags().GetBool("no-labels") + // Priority range flags priorityMinStr, _ := cmd.Flags().GetString("priority-min") priorityMaxStr, _ := cmd.Flags().GetString("priority-max") @@ -104,6 +141,39 @@ Examples: filter.LabelsAny = labelsAny } + // Exact priority match (use Changed() to properly handle P0) + if cmd.Flags().Changed("priority") { + priorityStr, _ := cmd.Flags().GetString("priority") + priority, err := validation.ValidatePriority(priorityStr) + if err != nil { + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) + } + filter.Priority = 
&priority + } + + // Pattern matching + if titleContains != "" { + filter.TitleContains = titleContains + } + if descContains != "" { + filter.DescriptionContains = descContains + } + if notesContains != "" { + filter.NotesContains = notesContains + } + + // Empty/null checks + if emptyDesc { + filter.EmptyDescription = true + } + if noAssignee { + filter.NoAssignee = true + } + if noLabels { + filter.NoLabels = true + } + // Date ranges if createdAfter != "" { t, err := parseTimeFlag(createdAfter) @@ -200,6 +270,21 @@ Examples: listArgs.LabelsAny = labelsAny } + // Exact priority match + if filter.Priority != nil { + listArgs.Priority = filter.Priority + } + + // Pattern matching + listArgs.TitleContains = titleContains + listArgs.DescriptionContains = descContains + listArgs.NotesContains = notesContains + + // Empty/null checks + listArgs.EmptyDescription = filter.EmptyDescription + listArgs.NoAssignee = filter.NoAssignee + listArgs.NoLabels = filter.NoLabels + // Date ranges if filter.CreatedAfter != nil { listArgs.CreatedAfter = filter.CreatedAfter.Format(time.RFC3339) @@ -372,6 +457,7 @@ func outputSearchResults(issues []*types.Issue, query string, longFormat bool) { func init() { searchCmd.Flags().String("query", "", "Search query (alternative to positional argument)") searchCmd.Flags().StringP("status", "s", "", "Filter by status (open, in_progress, blocked, deferred, closed)") + registerPriorityFlag(searchCmd, "") searchCmd.Flags().StringP("assignee", "a", "", "Filter by assignee") searchCmd.Flags().StringP("type", "t", "", "Filter by type (bug, feature, task, epic, chore, merge-request, molecule, gate)") searchCmd.Flags().StringSliceP("label", "l", []string{}, "Filter by labels (AND: must have ALL)") @@ -381,6 +467,11 @@ func init() { searchCmd.Flags().String("sort", "", "Sort by field: priority, created, updated, closed, status, id, title, type, assignee") searchCmd.Flags().BoolP("reverse", "r", false, "Reverse sort order") + // Pattern matching flags + 
searchCmd.Flags().String("title-contains", "", "Filter by title substring (case-insensitive)") + searchCmd.Flags().String("desc-contains", "", "Filter by description substring (case-insensitive)") + searchCmd.Flags().String("notes-contains", "", "Filter by notes substring (case-insensitive)") + // Date range flags searchCmd.Flags().String("created-after", "", "Filter issues created after date (YYYY-MM-DD or RFC3339)") searchCmd.Flags().String("created-before", "", "Filter issues created before date (YYYY-MM-DD or RFC3339)") @@ -389,6 +480,11 @@ func init() { searchCmd.Flags().String("closed-after", "", "Filter issues closed after date (YYYY-MM-DD or RFC3339)") searchCmd.Flags().String("closed-before", "", "Filter issues closed before date (YYYY-MM-DD or RFC3339)") + // Empty/null check flags + searchCmd.Flags().Bool("empty-description", false, "Filter issues with empty or missing description") + searchCmd.Flags().Bool("no-assignee", false, "Filter issues with no assignee") + searchCmd.Flags().Bool("no-labels", false, "Filter issues with no labels") + // Priority range flags searchCmd.Flags().String("priority-min", "", "Filter by minimum priority (inclusive, 0-4 or P0-P4)") searchCmd.Flags().String("priority-max", "", "Filter by maximum priority (inclusive, 0-4 or P0-P4)") diff --git a/cmd/bd/show.go b/cmd/bd/show.go index af885828..1f457414 100644 --- a/cmd/bd/show.go +++ b/cmd/bd/show.go @@ -972,6 +972,10 @@ var closeCmd = &cobra.Command{ Run: func(cmd *cobra.Command, args []string) { CheckReadonly("close") reason, _ := cmd.Flags().GetString("reason") + // Check --resolution alias if --reason not provided + if reason == "" { + reason, _ = cmd.Flags().GetString("resolution") + } if reason == "" { reason = "Closed" } @@ -1053,6 +1057,8 @@ var closeCmd = &cobra.Command{ if hookRunner != nil { hookRunner.Run(hooks.EventClose, &issue) } + // Run config-based close hooks (bd-g4b4) + hooks.RunConfigCloseHooks(ctx, &issue) if jsonOutput { closedIssues = 
append(closedIssues, &issue) } @@ -1105,8 +1111,12 @@ var closeCmd = &cobra.Command{ // Run close hook (bd-kwro.8) closedIssue, _ := store.GetIssue(ctx, id) - if closedIssue != nil && hookRunner != nil { - hookRunner.Run(hooks.EventClose, closedIssue) + if closedIssue != nil { + if hookRunner != nil { + hookRunner.Run(hooks.EventClose, closedIssue) + } + // Run config-based close hooks (bd-g4b4) + hooks.RunConfigCloseHooks(ctx, closedIssue) } if jsonOutput { @@ -1411,6 +1421,8 @@ func init() { rootCmd.AddCommand(editCmd) closeCmd.Flags().StringP("reason", "r", "", "Reason for closing") + closeCmd.Flags().String("resolution", "", "Alias for --reason (Jira CLI convention)") + _ = closeCmd.Flags().MarkHidden("resolution") // Hidden alias for agent/CLI ergonomics closeCmd.Flags().Bool("json", false, "Output JSON format") closeCmd.Flags().BoolP("force", "f", false, "Force close pinned issues") closeCmd.Flags().Bool("continue", false, "Auto-advance to next step in molecule") diff --git a/cmd/bd/sync.go b/cmd/bd/sync.go index ab7e6701..23d9d8b1 100644 --- a/cmd/bd/sync.go +++ b/cmd/bd/sync.go @@ -2,11 +2,15 @@ package main import ( "bufio" + "bytes" + "cmp" "context" + "encoding/json" "fmt" "os" "os/exec" "path/filepath" + "slices" "strings" "time" @@ -15,7 +19,9 @@ import ( "github.com/steveyegge/beads/internal/config" "github.com/steveyegge/beads/internal/debug" "github.com/steveyegge/beads/internal/git" + "github.com/steveyegge/beads/internal/rpc" "github.com/steveyegge/beads/internal/syncbranch" + "github.com/steveyegge/beads/internal/types" ) var syncCmd = &cobra.Command{ @@ -77,13 +83,15 @@ Use --merge to merge the sync branch back to main branch.`, // Find JSONL path jsonlPath := findJSONLPath() if jsonlPath == "" { - FatalError("not in a bd workspace (no .beads directory found)") + fmt.Fprintf(os.Stderr, "Error: not in a bd workspace (no .beads directory found)\n") + os.Exit(1) } // If status mode, show diff between sync branch and main if status { if err := 
showSyncStatus(ctx); err != nil { - FatalError("%v", err) + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) } return } @@ -97,7 +105,8 @@ Use --merge to merge the sync branch back to main branch.`, // If merge mode, merge sync branch to main if merge { if err := mergeSyncBranch(ctx, dryRun); err != nil { - FatalError("%v", err) + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) } return } @@ -105,7 +114,8 @@ Use --merge to merge the sync branch back to main branch.`, // If from-main mode, one-way sync from main branch (gt-ick9: ephemeral branch support) if fromMain { if err := doSyncFromMain(ctx, jsonlPath, renameOnImport, dryRun, noGitHistory); err != nil { - FatalError("%v", err) + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) } return } @@ -117,7 +127,8 @@ Use --merge to merge the sync branch back to main branch.`, } else { fmt.Println("β†’ Importing from JSONL...") if err := importFromJSONL(ctx, jsonlPath, renameOnImport, noGitHistory); err != nil { - FatalError("importing: %v", err) + fmt.Fprintf(os.Stderr, "Error importing: %v\n", err) + os.Exit(1) } fmt.Println("βœ“ Import complete") } @@ -130,7 +141,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Println("β†’ [DRY RUN] Would export pending changes to JSONL") } else { if err := exportToJSONL(ctx, jsonlPath); err != nil { - FatalError("exporting: %v", err) + fmt.Fprintf(os.Stderr, "Error exporting: %v\n", err) + os.Exit(1) } } return @@ -144,7 +156,8 @@ Use --merge to merge the sync branch back to main branch.`, } else { fmt.Println("β†’ Exporting pending changes to JSONL (squash mode)...") if err := exportToJSONL(ctx, jsonlPath); err != nil { - FatalError("exporting: %v", err) + fmt.Fprintf(os.Stderr, "Error exporting: %v\n", err) + os.Exit(1) } fmt.Println("βœ“ Changes accumulated in JSONL") fmt.Println(" Run 'bd sync' (without --squash) to commit all accumulated changes") @@ -154,14 +167,19 @@ Use --merge to merge the sync branch back to main branch.`, // Check 
if we're in a git repository if !isGitRepo() { - FatalErrorWithHint("not in a git repository", "run 'git init' to initialize a repository") + fmt.Fprintf(os.Stderr, "Error: not in a git repository\n") + fmt.Fprintf(os.Stderr, "Hint: run 'git init' to initialize a repository\n") + os.Exit(1) } // Preflight: check for merge/rebase in progress if inMerge, err := gitHasUnmergedPaths(); err != nil { - FatalError("checking git state: %v", err) + fmt.Fprintf(os.Stderr, "Error checking git state: %v\n", err) + os.Exit(1) } else if inMerge { - FatalErrorWithHint("unmerged paths or merge in progress", "resolve conflicts, run 'bd import' if needed, then 'bd sync' again") + fmt.Fprintf(os.Stderr, "Error: unmerged paths or merge in progress\n") + fmt.Fprintf(os.Stderr, "Hint: resolve conflicts, run 'bd import' if needed, then 'bd sync' again\n") + os.Exit(1) } // GH#638: Check sync.branch BEFORE upstream check @@ -183,7 +201,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Println("→ No upstream configured, using --from-main mode") // Force noGitHistory=true for auto-detected from-main mode (fixes #417) if err := doSyncFromMain(ctx, jsonlPath, renameOnImport, dryRun, true); err != nil { - FatalError("%v", err) + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) } return } @@ -216,7 +235,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Printf("→ DB has %d issues but JSONL has %d (stale JSONL detected)\n", dbCount, jsonlCount) fmt.Println("→ Importing JSONL first (ZFC)...") if err := importFromJSONL(ctx, jsonlPath, renameOnImport, noGitHistory); err != nil { - FatalError("importing (ZFC): %v", err) + fmt.Fprintf(os.Stderr, "Error importing (ZFC): %v\n", err) + os.Exit(1) } // Skip export after ZFC import - JSONL is source of truth skipExport = true @@ -236,7 +256,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Printf("→ JSONL has %d issues but DB has only %d (stale DB detected - bd-53c)\n", jsonlCount,
dbCount) fmt.Println("→ Importing JSONL first to prevent data loss...") if err := importFromJSONL(ctx, jsonlPath, renameOnImport, noGitHistory); err != nil { - FatalError("importing (reverse ZFC): %v", err) + fmt.Fprintf(os.Stderr, "Error importing (reverse ZFC): %v\n", err) + os.Exit(1) } // Skip export after import - JSONL is source of truth skipExport = true @@ -264,7 +285,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Println("→ JSONL content differs from last sync (bd-f2f)") fmt.Println("→ Importing JSONL first to prevent stale DB from overwriting changes...") if err := importFromJSONL(ctx, jsonlPath, renameOnImport, noGitHistory); err != nil { - FatalError("importing (bd-f2f hash mismatch): %v", err) + fmt.Fprintf(os.Stderr, "Error importing (bd-f2f hash mismatch): %v\n", err) + os.Exit(1) } // Don't skip export - we still want to export any remaining local dirty issues // The import updated DB with JSONL content, and export will write merged state @@ -277,10 +299,12 @@ Use --merge to merge the sync branch back to main branch.`, // Pre-export integrity checks if err := ensureStoreActive(); err == nil && store != nil { if err := validatePreExport(ctx, store, jsonlPath); err != nil { - FatalError("pre-export validation failed: %v", err) + fmt.Fprintf(os.Stderr, "Pre-export validation failed: %v\n", err) + os.Exit(1) } if err := checkDuplicateIDs(ctx, store); err != nil { - FatalError("database corruption detected: %v", err) + fmt.Fprintf(os.Stderr, "Database corruption detected: %v\n", err) + os.Exit(1) } if orphaned, err := checkOrphanedDeps(ctx, store); err != nil { fmt.Fprintf(os.Stderr, "Warning: orphaned dependency check failed: %v\n", err) @@ -291,14 +315,16 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Println("→ Exporting pending changes to JSONL...") if err := exportToJSONL(ctx, jsonlPath); err != nil { - FatalError("exporting: %v", err) + fmt.Fprintf(os.Stderr, "Error exporting: %v\n", err) +
os.Exit(1) } } // Capture left snapshot (pre-pull state) for 3-way merge // This is mandatory for deletion tracking integrity if err := captureLeftSnapshot(jsonlPath); err != nil { - FatalError("failed to capture snapshot (required for deletion tracking): %v", err) + fmt.Fprintf(os.Stderr, "Error: failed to capture snapshot (required for deletion tracking): %v\n", err) + os.Exit(1) } } @@ -314,7 +340,8 @@ Use --merge to merge the sync branch back to main branch.`, // Check for changes in the external beads repo externalRepoRoot, err := getRepoRootFromPath(ctx, beadsDir) if err != nil { - FatalError("%v", err) + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) } // Check if there are changes to commit @@ -329,7 +356,8 @@ Use --merge to merge the sync branch back to main branch.`, } else { committed, err := commitToExternalBeadsRepo(ctx, beadsDir, message, !noPush) if err != nil { - FatalError("%v", err) + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) } if committed { if !noPush { @@ -349,14 +377,16 @@ Use --merge to merge the sync branch back to main branch.`, } else { fmt.Println("→ Pulling from external beads repo...") if err := pullFromExternalBeadsRepo(ctx, beadsDir); err != nil { - FatalError("pulling: %v", err) + fmt.Fprintf(os.Stderr, "Error pulling: %v\n", err) + os.Exit(1) } fmt.Println("✓ Pulled from external beads repo") // Re-import after pull to update local database fmt.Println("→ Importing JSONL...") if err := importFromJSONL(ctx, jsonlPath, renameOnImport, noGitHistory); err != nil { - FatalError("importing: %v", err) + fmt.Fprintf(os.Stderr, "Error importing: %v\n", err) + os.Exit(1) } } } @@ -396,7 +426,8 @@ Use --merge to merge the sync branch back to main branch.`, // Step 2: Check if there are changes to commit (check entire .beads/ directory) hasChanges, err := gitHasBeadsChanges(ctx) if err != nil { - FatalError("checking git status: %v", err) + fmt.Fprintf(os.Stderr, "Error checking git status: %v\n", err) + os.Exit(1)
} // Track if we already pushed via worktree (to skip Step 5) @@ -417,7 +448,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Printf("→ Committing changes to sync branch '%s'...\n", syncBranchName) result, err := syncbranch.CommitToSyncBranch(ctx, repoRoot, syncBranchName, jsonlPath, !noPush) if err != nil { - FatalError("committing to sync branch: %v", err) + fmt.Fprintf(os.Stderr, "Error committing to sync branch: %v\n", err) + os.Exit(1) } if result.Committed { fmt.Printf("✓ Committed to %s\n", syncBranchName) @@ -435,7 +467,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Println("→ Committing changes to git...") } if err := gitCommitBeadsDir(ctx, message); err != nil { - FatalError("committing: %v", err) + fmt.Fprintf(os.Stderr, "Error committing: %v\n", err) + os.Exit(1) } } } else { @@ -465,7 +498,8 @@ Use --merge to merge the sync branch back to main branch.`, pullResult, err := syncbranch.PullFromSyncBranch(ctx, repoRoot, syncBranchName, jsonlPath, !noPush, requireMassDeleteConfirmation) if err != nil { - FatalError("pulling from sync branch: %v", err) + fmt.Fprintf(os.Stderr, "Error pulling from sync branch: %v\n", err) + os.Exit(1) } if pullResult.Pulled { if pullResult.Merged { @@ -491,7 +525,8 @@ Use --merge to merge the sync branch back to main branch.`, if response == "y" || response == "yes" { fmt.Printf("→ Pushing to %s...\n", syncBranchName) if err := syncbranch.PushSyncBranch(ctx, repoRoot, syncBranchName); err != nil { - FatalError("pushing to sync branch: %v", err) + fmt.Fprintf(os.Stderr, "Error pushing to sync branch: %v\n", err) + os.Exit(1) } fmt.Printf("✓ Pushed merged changes to %s\n", syncBranchName) pushedViaSyncBranch = true @@ -529,23 +564,31 @@ Use --merge to merge the sync branch back to main branch.`, // Export clean JSONL from DB (database is source of truth) if exportErr := exportToJSONL(ctx, jsonlPath); exportErr != nil { - FatalErrorWithHint(fmt.Sprintf("failed to export for
conflict resolution: %v", exportErr), "resolve conflicts manually and run 'bd import' then 'bd sync' again") + fmt.Fprintf(os.Stderr, "Error: failed to export for conflict resolution: %v\n", exportErr) + fmt.Fprintf(os.Stderr, "Hint: resolve conflicts manually and run 'bd import' then 'bd sync' again\n") + os.Exit(1) } // Mark conflict as resolved addCmd := exec.CommandContext(ctx, "git", "add", jsonlPath) if addErr := addCmd.Run(); addErr != nil { - FatalErrorWithHint(fmt.Sprintf("failed to mark conflict resolved: %v", addErr), "resolve conflicts manually and run 'bd import' then 'bd sync' again") + fmt.Fprintf(os.Stderr, "Error: failed to mark conflict resolved: %v\n", addErr) + fmt.Fprintf(os.Stderr, "Hint: resolve conflicts manually and run 'bd import' then 'bd sync' again\n") + os.Exit(1) } // Continue rebase if continueErr := runGitRebaseContinue(ctx); continueErr != nil { - FatalErrorWithHint(fmt.Sprintf("failed to continue rebase: %v", continueErr), "resolve conflicts manually and run 'bd import' then 'bd sync' again") + fmt.Fprintf(os.Stderr, "Error: failed to continue rebase: %v\n", continueErr) + fmt.Fprintf(os.Stderr, "Hint: resolve conflicts manually and run 'bd import' then 'bd sync' again\n") + os.Exit(1) } fmt.Println("✓ Auto-resolved JSONL conflict") } else { // Not an auto-resolvable conflict, fail with original error + fmt.Fprintf(os.Stderr, "Error pulling: %v\n", err) + // Check if this looks like a merge driver failure errStr := err.Error() if strings.Contains(errStr, "merge driver") || @@ -555,7 +598,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Fprintf(os.Stderr, "Fix: bd doctor --fix\n\n") } - FatalErrorWithHint(fmt.Sprintf("pulling: %v", err), "resolve conflicts manually and run 'bd import' then 'bd sync' again") + fmt.Fprintf(os.Stderr, "Hint: resolve conflicts manually and run 'bd import' then 'bd sync' again\n") + os.Exit(1) } } } @@ -573,7 +617,8 @@ Use --merge to merge the sync branch back to main branch.`, //
Step 3.5: Perform 3-way merge and prune deletions if err := ensureStoreActive(); err == nil && store != nil { if err := applyDeletionsFromMerge(ctx, store, jsonlPath); err != nil { - FatalError("during 3-way merge: %v", err) + fmt.Fprintf(os.Stderr, "Error during 3-way merge: %v\n", err) + os.Exit(1) } } @@ -582,7 +627,8 @@ Use --merge to merge the sync branch back to main branch.`, // tombstoning issues that were in our local export but got lost during merge (bd-sync-deletion fix) fmt.Println("→ Importing updated JSONL...") if err := importFromJSONL(ctx, jsonlPath, renameOnImport, noGitHistory, true); err != nil { - FatalError("importing: %v", err) + fmt.Fprintf(os.Stderr, "Error importing: %v\n", err) + os.Exit(1) } // Validate import didn't cause data loss @@ -593,7 +639,8 @@ Use --merge to merge the sync branch back to main branch.`, fmt.Fprintf(os.Stderr, "Warning: failed to count issues after import: %v\n", err) } else { if err := validatePostImportWithExpectedDeletions(beforeCount, afterCount, 0, jsonlPath); err != nil { - FatalError("post-import validation failed: %v", err) + fmt.Fprintf(os.Stderr, "Post-import validation failed: %v\n", err) + os.Exit(1) } } } @@ -634,13 +681,15 @@ Use --merge to merge the sync branch back to main branch.`, if needsExport { fmt.Println("→ Re-exporting after import to sync DB changes...") if err := exportToJSONL(ctx, jsonlPath); err != nil { - FatalError("re-exporting after import: %v", err) + fmt.Fprintf(os.Stderr, "Error re-exporting after import: %v\n", err) + os.Exit(1) } // Step 4.6: Commit the re-export if it created changes hasPostImportChanges, err := gitHasBeadsChanges(ctx) if err != nil { - FatalError("checking git status after re-export: %v", err) + fmt.Fprintf(os.Stderr, "Error checking git status after re-export: %v\n", err) + os.Exit(1) } if hasPostImportChanges { fmt.Println("→ Committing DB changes from import...") @@ -648,14 +697,16 @@ Use --merge to merge the sync branch back to main branch.`, //
Commit to sync branch via worktree (bd-e3w) result, err := syncbranch.CommitToSyncBranch(ctx, repoRoot, syncBranchName, jsonlPath, !noPush) if err != nil { - FatalError("committing to sync branch: %v", err) + fmt.Fprintf(os.Stderr, "Error committing to sync branch: %v\n", err) + os.Exit(1) } if result.Pushed { pushedViaSyncBranch = true } } else { if err := gitCommitBeadsDir(ctx, "bd sync: apply DB changes after import"); err != nil { - FatalError("committing post-import changes: %v", err) + fmt.Fprintf(os.Stderr, "Error committing post-import changes: %v\n", err) + os.Exit(1) } } hasChanges = true // Mark that we have changes to push @@ -682,7 +733,9 @@ Use --merge to merge the sync branch back to main branch.`, } else { fmt.Println("→ Pushing to remote...") if err := gitPush(ctx); err != nil { - FatalErrorWithHint(fmt.Sprintf("pushing: %v", err), "pull may have brought new changes, run 'bd sync' again") + fmt.Fprintf(os.Stderr, "Error pushing: %v\n", err) + fmt.Fprintf(os.Stderr, "Hint: pull may have brought new changes, run 'bd sync' again\n") + os.Exit(1) } } } @@ -1183,9 +1236,968 @@ func getDefaultBranchForRemote(ctx context.Context, remote string) string { return "main" } -// doSyncFromMain function moved to sync_import.go -// Export function moved to sync_export.go -// Sync branch functions moved to sync_branch.go -// Import functions moved to sync_import.go -// External beads dir functions moved to sync_branch.go -// Integrity check types and functions moved to sync_check.go +// doSyncFromMain performs a one-way sync from the default branch (main/master) +// Used for ephemeral branches without upstream tracking (gt-ick9) +// This fetches beads from main and imports them, discarding local beads changes. +// If sync.remote is configured (e.g., "upstream" for fork workflows), uses that remote +// instead of "origin" (bd-bx9).
+func doSyncFromMain(ctx context.Context, jsonlPath string, renameOnImport bool, dryRun bool, noGitHistory bool) error { + // Determine which remote to use (default: origin, but can be configured via sync.remote) + remote := "origin" + if err := ensureStoreActive(); err == nil && store != nil { + if configuredRemote, err := store.GetConfig(ctx, "sync.remote"); err == nil && configuredRemote != "" { + remote = configuredRemote + } + } + + if dryRun { + fmt.Println("→ [DRY RUN] Would sync beads from main branch") + fmt.Printf(" 1. Fetch %s main\n", remote) + fmt.Printf(" 2. Checkout .beads/ from %s/main\n", remote) + fmt.Println(" 3. Import JSONL into database") + fmt.Println("\n✓ Dry run complete (no changes made)") + return nil + } + + // Check if we're in a git repository + if !isGitRepo() { + return fmt.Errorf("not in a git repository") + } + + // Check if remote exists + if !hasGitRemote(ctx) { + return fmt.Errorf("no git remote configured") + } + + // Verify the configured remote exists + checkRemoteCmd := exec.CommandContext(ctx, "git", "remote", "get-url", remote) + if err := checkRemoteCmd.Run(); err != nil { + return fmt.Errorf("configured sync.remote '%s' does not exist (run 'git remote add %s <url>')", remote, remote) + } + + defaultBranch := getDefaultBranchForRemote(ctx, remote) + + // Step 1: Fetch from main + fmt.Printf("→ Fetching from %s/%s...\n", remote, defaultBranch) + fetchCmd := exec.CommandContext(ctx, "git", "fetch", remote, defaultBranch) + if output, err := fetchCmd.CombinedOutput(); err != nil { + return fmt.Errorf("git fetch %s %s failed: %w\n%s", remote, defaultBranch, err, output) + } + + // Step 2: Checkout .beads/ directory from main + fmt.Printf("→ Checking out beads from %s/%s...\n", remote, defaultBranch) + checkoutCmd := exec.CommandContext(ctx, "git", "checkout", fmt.Sprintf("%s/%s", remote, defaultBranch), "--", ".beads/") + if output, err := checkoutCmd.CombinedOutput(); err != nil { + return fmt.Errorf("git checkout .beads/
from %s/%s failed: %w\n%s", remote, defaultBranch, err, output) + } + + // Step 3: Import JSONL + fmt.Println("→ Importing JSONL...") + if err := importFromJSONL(ctx, jsonlPath, renameOnImport, noGitHistory); err != nil { + return fmt.Errorf("import failed: %w", err) + } + + fmt.Println("\n✓ Sync from main complete") + return nil +} + +// exportToJSONL exports the database to JSONL format +func exportToJSONL(ctx context.Context, jsonlPath string) error { + // If daemon is running, use RPC + if daemonClient != nil { + exportArgs := &rpc.ExportArgs{ + JSONLPath: jsonlPath, + } + resp, err := daemonClient.Export(exportArgs) + if err != nil { + return fmt.Errorf("daemon export failed: %w", err) + } + if !resp.Success { + return fmt.Errorf("daemon export error: %s", resp.Error) + } + return nil + } + + // Direct mode: access store directly + // Ensure store is initialized + if err := ensureStoreActive(); err != nil { + return fmt.Errorf("failed to initialize store: %w", err) + } + + // Get all issues including tombstones for sync propagation (bd-rp4o fix) + // Tombstones must be exported so they propagate to other clones and prevent resurrection + issues, err := store.SearchIssues(ctx, "", types.IssueFilter{IncludeTombstones: true}) + if err != nil { + return fmt.Errorf("failed to get issues: %w", err) + } + + // Safety check: prevent exporting empty database over non-empty JSONL + // Note: The main bd-53c protection is the reverse ZFC check earlier in sync.go + // which runs BEFORE export. Here we only block the most catastrophic case (empty DB) + // to allow legitimate deletions.
+ if len(issues) == 0 { + existingCount, countErr := countIssuesInJSONL(jsonlPath) + if countErr != nil { + // If we can't read the file, it might not exist yet, which is fine + if !os.IsNotExist(countErr) { + fmt.Fprintf(os.Stderr, "Warning: failed to read existing JSONL: %v\n", countErr) + } + } else if existingCount > 0 { + return fmt.Errorf("refusing to export empty database over non-empty JSONL file (database: 0 issues, JSONL: %d issues)", existingCount) + } + } + + // Sort by ID for consistent output + slices.SortFunc(issues, func(a, b *types.Issue) int { + return cmp.Compare(a.ID, b.ID) + }) + + // Populate dependencies for all issues (avoid N+1) + allDeps, err := store.GetAllDependencyRecords(ctx) + if err != nil { + return fmt.Errorf("failed to get dependencies: %w", err) + } + for _, issue := range issues { + issue.Dependencies = allDeps[issue.ID] + } + + // Populate labels for all issues + for _, issue := range issues { + labels, err := store.GetLabels(ctx, issue.ID) + if err != nil { + return fmt.Errorf("failed to get labels for %s: %w", issue.ID, err) + } + issue.Labels = labels + } + + // Populate comments for all issues + for _, issue := range issues { + comments, err := store.GetIssueComments(ctx, issue.ID) + if err != nil { + return fmt.Errorf("failed to get comments for %s: %w", issue.ID, err) + } + issue.Comments = comments + } + + // Create temp file for atomic write + dir := filepath.Dir(jsonlPath) + base := filepath.Base(jsonlPath) + tempFile, err := os.CreateTemp(dir, base+".tmp.*") + if err != nil { + return fmt.Errorf("failed to create temp file: %w", err) + } + tempPath := tempFile.Name() + defer func() { + _ = tempFile.Close() + _ = os.Remove(tempPath) + }() + + // Write JSONL + encoder := json.NewEncoder(tempFile) + exportedIDs := make([]string, 0, len(issues)) + for _, issue := range issues { + if err := encoder.Encode(issue); err != nil { + return fmt.Errorf("failed to encode issue %s: %w", issue.ID, err) + } + exportedIDs = 
append(exportedIDs, issue.ID) + } + + // Close temp file before rename (Close error intentionally ignored; the deferred Close is a no-op after this) + _ = tempFile.Close() + + // Atomic replace + if err := os.Rename(tempPath, jsonlPath); err != nil { + return fmt.Errorf("failed to replace JSONL file: %w", err) + } + + // Set appropriate file permissions (0600: rw-------) + if err := os.Chmod(jsonlPath, 0600); err != nil { + // Non-fatal warning + fmt.Fprintf(os.Stderr, "Warning: failed to set file permissions: %v\n", err) + } + + // Clear dirty flags for exported issues + if err := store.ClearDirtyIssuesByID(ctx, exportedIDs); err != nil { + // Non-fatal warning + fmt.Fprintf(os.Stderr, "Warning: failed to clear dirty flags: %v\n", err) + } + + // Clear auto-flush state + clearAutoFlushState() + + // Update jsonl_content_hash metadata to enable content-based staleness detection (bd-khnb fix) + // After export, database and JSONL are in sync, so update hash to prevent unnecessary auto-import + // Renamed from last_import_hash (bd-39o) - more accurate since updated on both import AND export + if currentHash, err := computeJSONLHash(jsonlPath); err == nil { + if err := store.SetMetadata(ctx, "jsonl_content_hash", currentHash); err != nil { + // Non-fatal warning: Metadata update failures are intentionally non-fatal to prevent blocking + // successful exports. System degrades gracefully to mtime-based staleness detection if metadata + // is unavailable. This ensures export operations always succeed even if metadata storage fails.
+ fmt.Fprintf(os.Stderr, "Warning: failed to update jsonl_content_hash: %v\n", err) + } + // Use RFC3339Nano for nanosecond precision to avoid race with file mtime (fixes #399) + exportTime := time.Now().Format(time.RFC3339Nano) + if err := store.SetMetadata(ctx, "last_import_time", exportTime); err != nil { + // Non-fatal warning (see above comment about graceful degradation) + fmt.Fprintf(os.Stderr, "Warning: failed to update last_import_time: %v\n", err) + } + // Note: mtime tracking removed in bd-v0y fix (git doesn't preserve mtime) + } + + // Update database mtime to be >= JSONL mtime (fixes #278, #301, #321) + // This prevents validatePreExport from incorrectly blocking on next export + beadsDir := filepath.Dir(jsonlPath) + dbPath := filepath.Join(beadsDir, "beads.db") + if err := TouchDatabaseFile(dbPath, jsonlPath); err != nil { + // Non-fatal warning + fmt.Fprintf(os.Stderr, "Warning: failed to update database mtime: %v\n", err) + } + + return nil +} + +// getCurrentBranch returns the name of the current git branch +// Uses symbolic-ref instead of rev-parse to work in fresh repos without commits (bd-flil) +func getCurrentBranch(ctx context.Context) (string, error) { + cmd := exec.CommandContext(ctx, "git", "symbolic-ref", "--short", "HEAD") + output, err := cmd.Output() + if err != nil { + return "", fmt.Errorf("failed to get current branch: %w", err) + } + return strings.TrimSpace(string(output)), nil +} + +// getSyncBranch returns the configured sync branch name +func getSyncBranch(ctx context.Context) (string, error) { + // Ensure store is initialized + if err := ensureStoreActive(); err != nil { + return "", fmt.Errorf("failed to initialize store: %w", err) + } + + syncBranch, err := syncbranch.Get(ctx, store) + if err != nil { + return "", fmt.Errorf("failed to get sync branch config: %w", err) + } + + if syncBranch == "" { + return "", fmt.Errorf("sync.branch not configured (run 'bd config set sync.branch <branch>')") + } + + return syncBranch, nil +} + +//
showSyncStatus shows the diff between sync branch and main branch +func showSyncStatus(ctx context.Context) error { + if !isGitRepo() { + return fmt.Errorf("not in a git repository") + } + + currentBranch, err := getCurrentBranch(ctx) + if err != nil { + return err + } + + syncBranch, err := getSyncBranch(ctx) + if err != nil { + return err + } + + // Check if sync branch exists + checkCmd := exec.CommandContext(ctx, "git", "show-ref", "--verify", "--quiet", "refs/heads/"+syncBranch) + if err := checkCmd.Run(); err != nil { + return fmt.Errorf("sync branch '%s' does not exist", syncBranch) + } + + fmt.Printf("Current branch: %s\n", currentBranch) + fmt.Printf("Sync branch: %s\n\n", syncBranch) + + // Show commit diff + fmt.Println("Commits in sync branch not in main:") + logCmd := exec.CommandContext(ctx, "git", "log", "--oneline", currentBranch+".."+syncBranch) + logOutput, err := logCmd.CombinedOutput() + if err != nil { + return fmt.Errorf("failed to get commit log: %w\n%s", err, logOutput) + } + + if len(strings.TrimSpace(string(logOutput))) == 0 { + fmt.Println(" (none)") + } else { + fmt.Print(string(logOutput)) + } + + fmt.Println("\nCommits in main not in sync branch:") + logCmd = exec.CommandContext(ctx, "git", "log", "--oneline", syncBranch+".."+currentBranch) + logOutput, err = logCmd.CombinedOutput() + if err != nil { + return fmt.Errorf("failed to get commit log: %w\n%s", err, logOutput) + } + + if len(strings.TrimSpace(string(logOutput))) == 0 { + fmt.Println(" (none)") + } else { + fmt.Print(string(logOutput)) + } + + // Show file diff for .beads/issues.jsonl + fmt.Println("\nFile differences in .beads/issues.jsonl:") + diffCmd := exec.CommandContext(ctx, "git", "diff", currentBranch+"..."+syncBranch, "--", ".beads/issues.jsonl") + diffOutput, err := diffCmd.CombinedOutput() + if err != nil { + // diff returns non-zero when there are differences, which is fine + if len(diffOutput) == 0 { + return fmt.Errorf("failed to get diff: %w", err) + } + } + + 
if len(strings.TrimSpace(string(diffOutput))) == 0 { + fmt.Println(" (no differences)") + } else { + fmt.Print(string(diffOutput)) + } + + return nil +} + +// mergeSyncBranch merges the sync branch back to main +func mergeSyncBranch(ctx context.Context, dryRun bool) error { + if !isGitRepo() { + return fmt.Errorf("not in a git repository") + } + + currentBranch, err := getCurrentBranch(ctx) + if err != nil { + return err + } + + syncBranch, err := getSyncBranch(ctx) + if err != nil { + return err + } + + // Check if sync branch exists + checkCmd := exec.CommandContext(ctx, "git", "show-ref", "--verify", "--quiet", "refs/heads/"+syncBranch) + if err := checkCmd.Run(); err != nil { + return fmt.Errorf("sync branch '%s' does not exist", syncBranch) + } + + // Verify we're on the main branch (not the sync branch) + if currentBranch == syncBranch { + return fmt.Errorf("cannot merge while on sync branch '%s' (checkout main branch first)", syncBranch) + } + + // Check if main branch is clean (excluding .beads/ which is expected to be dirty) + // bd-7b7h fix: The sync.branch workflow copies JSONL to main working dir without committing, + // so .beads/ changes are expected and should not block merge. + statusCmd := exec.CommandContext(ctx, "git", "status", "--porcelain", "--", ":!.beads/") + statusOutput, err := statusCmd.Output() + if err != nil { + return fmt.Errorf("failed to check git status: %w", err) + } + + if len(strings.TrimSpace(string(statusOutput))) > 0 { + return fmt.Errorf("main branch has uncommitted changes outside .beads/, please commit or stash them first") + } + + // bd-7b7h fix: Restore .beads/ to HEAD state before merge + // The uncommitted .beads/ changes came from copyJSONLToMainRepo during bd sync, + // which copied them FROM the sync branch. They're redundant with what we're merging. + // Discarding them prevents "Your local changes would be overwritten by merge" errors. 
+ restoreCmd := exec.CommandContext(ctx, "git", "checkout", "HEAD", "--", ".beads/") + if output, err := restoreCmd.CombinedOutput(); err != nil { + // Not fatal - .beads/ might not exist in HEAD yet + debug.Logf("note: could not restore .beads/ to HEAD: %v (%s)", err, output) + } + + if dryRun { + fmt.Printf("[DRY RUN] Would merge branch '%s' into '%s'\n", syncBranch, currentBranch) + + // Show what would be merged + logCmd := exec.CommandContext(ctx, "git", "log", "--oneline", currentBranch+".."+syncBranch) + logOutput, err := logCmd.CombinedOutput() + if err != nil { + return fmt.Errorf("failed to preview commits: %w", err) + } + + if len(strings.TrimSpace(string(logOutput))) > 0 { + fmt.Println("\nCommits that would be merged:") + fmt.Print(string(logOutput)) + } else { + fmt.Println("\nNo commits to merge (already up to date)") + } + + return nil + } + + // Perform the merge + fmt.Printf("Merging branch '%s' into '%s'...\n", syncBranch, currentBranch) + + mergeCmd := exec.CommandContext(ctx, "git", "merge", "--no-ff", syncBranch, "-m", + fmt.Sprintf("Merge %s into %s", syncBranch, currentBranch)) + mergeOutput, err := mergeCmd.CombinedOutput() + if err != nil { + // Check if it's a merge conflict + if strings.Contains(string(mergeOutput), "CONFLICT") || strings.Contains(string(mergeOutput), "conflict") { + fmt.Fprintf(os.Stderr, "Merge conflict detected:\n%s\n", mergeOutput) + fmt.Fprintf(os.Stderr, "\nTo resolve:\n") + fmt.Fprintf(os.Stderr, "1. Resolve conflicts in the affected files\n") + fmt.Fprintf(os.Stderr, "2. Stage resolved files: git add <files>\n") + fmt.Fprintf(os.Stderr, "3. Complete merge: git commit\n") + fmt.Fprintf(os.Stderr, "4.
After merge commit, run 'bd import' to sync database\n") + return fmt.Errorf("merge conflict - see above for resolution steps") + } + return fmt.Errorf("merge failed: %w\n%s", err, mergeOutput) + } + + fmt.Print(string(mergeOutput)) + fmt.Println("\n✓ Merge complete") + + // Suggest next steps + fmt.Println("\nNext steps:") + fmt.Println("1. Review the merged changes") + fmt.Println("2. Run 'bd sync --import-only' to sync the database with merged JSONL") + fmt.Println("3. Run 'bd sync' to push changes to remote") + + return nil +} + +// importFromJSONL imports the JSONL file by running the import command +// Optional parameters: noGitHistory, protectLeftSnapshot (bd-sync-deletion fix) +func importFromJSONL(ctx context.Context, jsonlPath string, renameOnImport bool, opts ...bool) error { + // Get current executable path to avoid "./bd" path issues + exe, err := os.Executable() + if err != nil { + return fmt.Errorf("cannot resolve current executable: %w", err) + } + + // Parse optional parameters + noGitHistory := false + protectLeftSnapshot := false + if len(opts) > 0 { + noGitHistory = opts[0] + } + if len(opts) > 1 { + protectLeftSnapshot = opts[1] + } + + // Build args for import command + // Use --no-daemon to ensure subprocess uses direct mode, avoiding daemon connection issues + args := []string{"--no-daemon", "import", "-i", jsonlPath} + if renameOnImport { + args = append(args, "--rename-on-import") + } + if noGitHistory { + args = append(args, "--no-git-history") + } + // Add --protect-left-snapshot flag for post-pull imports (bd-sync-deletion fix) + if protectLeftSnapshot { + args = append(args, "--protect-left-snapshot") + } + + // Run import command + cmd := exec.CommandContext(ctx, exe, args...)
// #nosec G204 - bd import command from trusted binary + output, err := cmd.CombinedOutput() + if err != nil { + return fmt.Errorf("import failed: %w\n%s", err, output) + } + + // Show output (import command provides the summary) + if len(output) > 0 { + fmt.Print(string(output)) + } + + return nil +} + +// resolveNoGitHistoryForFromMain returns the resolved noGitHistory value for sync operations. +// When syncing from main (--from-main), noGitHistory is forced to true to prevent creating +// incorrect deletion records for locally-created beads that don't exist on main. +// See: https://github.com/steveyegge/beads/issues/417 +func resolveNoGitHistoryForFromMain(fromMain, noGitHistory bool) bool { + if fromMain { + return true + } + return noGitHistory +} + +// isExternalBeadsDir checks if the beads directory is in a different git repo than cwd. +// This is used to detect when BEADS_DIR points to a separate repository. +// Contributed by dand-oss (https://github.com/steveyegge/beads/pull/533) +func isExternalBeadsDir(ctx context.Context, beadsDir string) bool { + // Get repo root of cwd + cwdRepoRoot, err := syncbranch.GetRepoRoot(ctx) + if err != nil { + return false // Can't determine, assume local + } + + // Get repo root of beads dir + beadsRepoRoot, err := getRepoRootFromPath(ctx, beadsDir) + if err != nil { + return false // Can't determine, assume local + } + + return cwdRepoRoot != beadsRepoRoot +} + +// getRepoRootFromPath returns the git repository root for a given path. +// Unlike syncbranch.GetRepoRoot which uses cwd, this allows getting the repo root +// for any path. 
+// Contributed by dand-oss (https://github.com/steveyegge/beads/pull/533) +func getRepoRootFromPath(ctx context.Context, path string) (string, error) { + cmd := exec.CommandContext(ctx, "git", "-C", path, "rev-parse", "--show-toplevel") + output, err := cmd.Output() + if err != nil { + return "", fmt.Errorf("failed to get git root for %s: %w", path, err) + } + return strings.TrimSpace(string(output)), nil +} + +// commitToExternalBeadsRepo commits changes directly to an external beads repo. +// Used when BEADS_DIR points to a different git repository than cwd. +// This bypasses the worktree-based sync which fails when beads dir is external. +// Contributed by dand-oss (https://github.com/steveyegge/beads/pull/533) +func commitToExternalBeadsRepo(ctx context.Context, beadsDir, message string, push bool) (bool, error) { + repoRoot, err := getRepoRootFromPath(ctx, beadsDir) + if err != nil { + return false, fmt.Errorf("failed to get repo root: %w", err) + } + + // Stage beads files (use relative path from repo root) + relBeadsDir, err := filepath.Rel(repoRoot, beadsDir) + if err != nil { + relBeadsDir = beadsDir // Fallback to absolute path + } + + addCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "add", relBeadsDir) + if output, err := addCmd.CombinedOutput(); err != nil { + return false, fmt.Errorf("git add failed: %w\n%s", err, output) + } + + // Check if there are staged changes + diffCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "diff", "--cached", "--quiet") + if diffCmd.Run() == nil { + return false, nil // No changes to commit + } + + // Commit with config-based author and signing options + if message == "" { + message = fmt.Sprintf("bd sync: %s", time.Now().Format("2006-01-02 15:04:05")) + } + commitArgs := buildGitCommitArgs(repoRoot, message) + commitCmd := exec.CommandContext(ctx, "git", commitArgs...) 
+	if output, err := commitCmd.CombinedOutput(); err != nil {
+		return false, fmt.Errorf("git commit failed: %w\n%s", err, output)
+	}
+
+	// Push if requested
+	if push {
+		pushCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "push")
+		if pushOutput, err := runGitCmdWithTimeoutMsg(ctx, pushCmd, "git push", 5*time.Second); err != nil {
+			return true, fmt.Errorf("git push failed: %w\n%s", err, pushOutput)
+		}
+	}
+
+	return true, nil
+}
+
+// pullFromExternalBeadsRepo pulls changes in an external beads repo.
+// Used when BEADS_DIR points to a different git repository than cwd.
+// Contributed by dand-oss (https://github.com/steveyegge/beads/pull/533)
+func pullFromExternalBeadsRepo(ctx context.Context, beadsDir string) error {
+	repoRoot, err := getRepoRootFromPath(ctx, beadsDir)
+	if err != nil {
+		return fmt.Errorf("failed to get repo root: %w", err)
+	}
+
+	// Check if remote exists
+	remoteCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "remote")
+	remoteOutput, err := remoteCmd.Output()
+	if err != nil || len(strings.TrimSpace(string(remoteOutput))) == 0 {
+		return nil // No remote, skip pull
+	}
+
+	pullCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "pull")
+	if output, err := pullCmd.CombinedOutput(); err != nil {
+		return fmt.Errorf("git pull failed: %w\n%s", err, output)
+	}
+
+	return nil
+}
+
+// SyncIntegrityResult contains the results of a pre-sync integrity check.
+// bd-hlsw.1: Pre-sync integrity check
+type SyncIntegrityResult struct {
+	ForcedPush       *ForcedPushCheck  `json:"forced_push,omitempty"`
+	PrefixMismatch   *PrefixMismatch   `json:"prefix_mismatch,omitempty"`
+	OrphanedChildren *OrphanedChildren `json:"orphaned_children,omitempty"`
+	HasProblems      bool              `json:"has_problems"`
+}
+
+// ForcedPushCheck detects if sync branch has diverged from remote.
+type ForcedPushCheck struct {
+	Detected  bool   `json:"detected"`
+	LocalRef  string `json:"local_ref,omitempty"`
+	RemoteRef string `json:"remote_ref,omitempty"`
+	Message   string `json:"message"`
+}
+
+// PrefixMismatch detects issues with the wrong prefix in JSONL.
+type PrefixMismatch struct {
+	ConfiguredPrefix string   `json:"configured_prefix"`
+	MismatchedIDs    []string `json:"mismatched_ids,omitempty"`
+	Count            int      `json:"count"`
+}
+
+// OrphanedChildren detects issues with a parent that doesn't exist.
+type OrphanedChildren struct {
+	OrphanedIDs []string `json:"orphaned_ids,omitempty"`
+	Count       int      `json:"count"`
+}
+
+// showSyncIntegrityCheck performs pre-sync integrity checks without modifying state.
+// bd-hlsw.1: Detects forced pushes, prefix mismatches, and orphaned children.
+// Exits with code 1 if problems are detected.
+func showSyncIntegrityCheck(ctx context.Context, jsonlPath string) {
+	fmt.Println("Sync Integrity Check")
+	fmt.Println("====================")
+
+	result := &SyncIntegrityResult{}
+
+	// Check 1: Detect forced pushes on sync branch
+	forcedPush := checkForcedPush(ctx)
+	result.ForcedPush = forcedPush
+	if forcedPush.Detected {
+		result.HasProblems = true
+	}
+	printForcedPushResult(forcedPush)
+
+	// Check 2: Detect prefix mismatches in JSONL
+	prefixMismatch, err := checkPrefixMismatch(ctx, jsonlPath)
+	if err != nil {
+		fmt.Fprintf(os.Stderr, "Warning: prefix check failed: %v\n", err)
+	} else {
+		result.PrefixMismatch = prefixMismatch
+		if prefixMismatch != nil && prefixMismatch.Count > 0 {
+			result.HasProblems = true
+		}
+		printPrefixMismatchResult(prefixMismatch)
+	}
+
+	// Check 3: Detect orphaned children (parent issues that don't exist)
+	orphaned, err := checkOrphanedChildrenInJSONL(jsonlPath)
+	if err != nil {
+		fmt.Fprintf(os.Stderr, "Warning: orphaned check failed: %v\n", err)
+	} else {
+		result.OrphanedChildren = orphaned
+		if orphaned != nil && orphaned.Count > 0 {
+			result.HasProblems = true
+		}
+		printOrphanedChildrenResult(orphaned)
+	}
+
+	// Summary
+	fmt.Println("\nSummary")
+	fmt.Println("-------")
+	if result.HasProblems {
+		fmt.Println("Problems detected! Review above and consider:")
+		if result.ForcedPush != nil && result.ForcedPush.Detected {
+			fmt.Println("  - Force push: Reset local sync branch or use 'bd sync --from-main'")
+		}
+		if result.PrefixMismatch != nil && result.PrefixMismatch.Count > 0 {
+			fmt.Println("  - Prefix mismatch: Use 'bd import --rename-on-import' to fix")
+		}
+		if result.OrphanedChildren != nil && result.OrphanedChildren.Count > 0 {
+			fmt.Println("  - Orphaned children: Remove parent references or create missing parents")
+		}
+	} else {
+		fmt.Println("No problems detected. Safe to sync.")
+	}
+
+	// Emit JSON before exiting so --json output is not lost when problems are found
+	if jsonOutput {
+		data, _ := json.MarshalIndent(result, "", "  ")
+		fmt.Println(string(data))
+	}
+
+	if result.HasProblems {
+		os.Exit(1)
+	}
+}
+
+// checkForcedPush detects if the sync branch has diverged from remote.
+// This can happen when someone force-pushes to the sync branch.
+func checkForcedPush(ctx context.Context) *ForcedPushCheck {
+	result := &ForcedPushCheck{
+		Detected: false,
+		Message:  "No sync branch configured or no remote",
+	}
+
+	// Get sync branch name
+	if err := ensureStoreActive(); err != nil {
+		return result
+	}
+
+	syncBranch, _ := syncbranch.Get(ctx, store)
+	if syncBranch == "" {
+		return result
+	}
+
+	// Check if sync branch exists locally
+	checkLocalCmd := exec.CommandContext(ctx, "git", "show-ref", "--verify", "--quiet", "refs/heads/"+syncBranch)
+	if checkLocalCmd.Run() != nil {
+		result.Message = fmt.Sprintf("Sync branch '%s' does not exist locally", syncBranch)
+		return result
+	}
+
+	// Get local ref
+	localRefCmd := exec.CommandContext(ctx, "git", "rev-parse", syncBranch)
+	localRefOutput, err := localRefCmd.Output()
+	if err != nil {
+		result.Message = "Failed to get local sync branch ref"
+		return result
+	}
+	localRef := strings.TrimSpace(string(localRefOutput))
+	result.LocalRef = localRef
+
+	// Check if remote tracking branch exists
+	remote := "origin"
+	if configuredRemote, err := store.GetConfig(ctx, "sync.remote"); err == nil && configuredRemote != "" {
+		remote = configuredRemote
+	}
+
+	// Get remote ref
+	remoteRefCmd := exec.CommandContext(ctx, "git", "rev-parse", remote+"/"+syncBranch)
+	remoteRefOutput, err := remoteRefCmd.Output()
+	if err != nil {
+		result.Message = fmt.Sprintf("Remote tracking branch '%s/%s' does not exist", remote, syncBranch)
+		return result
+	}
+	remoteRef := strings.TrimSpace(string(remoteRefOutput))
+	result.RemoteRef = remoteRef
+
+	// If refs match, no divergence
+	if localRef == remoteRef {
+		result.Message = "Sync branch is in sync with remote"
+		return result
+	}
+
+	// Check if local is ahead of remote (normal case)
+	aheadCmd := exec.CommandContext(ctx, "git", "merge-base", "--is-ancestor", remoteRef, localRef)
+	if aheadCmd.Run() == nil {
+		result.Message = "Local sync branch is ahead of remote (normal)"
+		return result
+	}
+
+	// Check if remote is ahead of local (behind, needs pull)
+	behindCmd := exec.CommandContext(ctx, "git", "merge-base", "--is-ancestor", localRef, remoteRef)
+	if behindCmd.Run() == nil {
+		result.Message = "Local sync branch is behind remote (needs pull)"
+		return result
+	}
+
+	// If neither is ancestor, branches have diverged - likely a force push
+	result.Detected = true
+	result.Message = fmt.Sprintf("Sync branch has DIVERGED from remote! Local: %s, Remote: %s. This may indicate a force push on the remote.", localRef[:8], remoteRef[:8])
+
+	return result
+}
+
+func printForcedPushResult(fp *ForcedPushCheck) {
+	fmt.Println("1. Force Push Detection")
+	if fp.Detected {
+		fmt.Printf("   [PROBLEM] %s\n", fp.Message)
+	} else {
+		fmt.Printf("   [OK] %s\n", fp.Message)
+	}
+	fmt.Println()
+}
+
+// checkPrefixMismatch detects issues in JSONL that don't match the configured prefix.
+func checkPrefixMismatch(ctx context.Context, jsonlPath string) (*PrefixMismatch, error) {
+	result := &PrefixMismatch{
+		MismatchedIDs: []string{},
+	}
+
+	// Get configured prefix
+	if err := ensureStoreActive(); err != nil {
+		return nil, err
+	}
+
+	prefix, err := store.GetConfig(ctx, "issue_prefix")
+	if err != nil || prefix == "" {
+		prefix = "bd" // Default
+	}
+	result.ConfiguredPrefix = prefix
+
+	// Read JSONL and check each issue's prefix
+	f, err := os.Open(jsonlPath) // #nosec G304 - controlled path
+	if err != nil {
+		if os.IsNotExist(err) {
+			return result, nil // No JSONL, no mismatches
+		}
+		return nil, fmt.Errorf("failed to open JSONL: %w", err)
+	}
+	defer f.Close()
+
+	scanner := bufio.NewScanner(f)
+	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
+
+	for scanner.Scan() {
+		line := scanner.Bytes()
+		if len(bytes.TrimSpace(line)) == 0 {
+			continue
+		}
+
+		var issue struct {
+			ID string `json:"id"`
+		}
+		if err := json.Unmarshal(line, &issue); err != nil {
+			continue // Skip malformed lines
+		}
+
+		// Check if ID starts with configured prefix
+		if !strings.HasPrefix(issue.ID, prefix+"-") {
+			result.MismatchedIDs = append(result.MismatchedIDs, issue.ID)
+		}
+	}
+
+	if err := scanner.Err(); err != nil {
+		return nil, fmt.Errorf("failed to read JSONL: %w", err)
+	}
+
+	result.Count = len(result.MismatchedIDs)
+	return result, nil
+}
+
+func printPrefixMismatchResult(pm *PrefixMismatch) {
+	fmt.Println("2. Prefix Mismatch Check")
+	if pm == nil {
+		fmt.Println("   [SKIP] Could not check prefix")
+		fmt.Println()
+		return
+	}
+
+	fmt.Printf("   Configured prefix: %s\n", pm.ConfiguredPrefix)
+	if pm.Count > 0 {
+		fmt.Printf("   [PROBLEM] Found %d issue(s) with wrong prefix:\n", pm.Count)
+		// Show first 10
+		limit := pm.Count
+		if limit > 10 {
+			limit = 10
+		}
+		for i := 0; i < limit; i++ {
+			fmt.Printf("     - %s\n", pm.MismatchedIDs[i])
+		}
+		if pm.Count > 10 {
+			fmt.Printf("     ... and %d more\n", pm.Count-10)
+		}
+	} else {
+		fmt.Println("   [OK] All issues have correct prefix")
+	}
+	fmt.Println()
+}
+
+// checkOrphanedChildrenInJSONL detects issues with parent references to non-existent issues.
+func checkOrphanedChildrenInJSONL(jsonlPath string) (*OrphanedChildren, error) {
+	result := &OrphanedChildren{
+		OrphanedIDs: []string{},
+	}
+
+	// Read JSONL and build maps of IDs and parent references
+	f, err := os.Open(jsonlPath) // #nosec G304 - controlled path
+	if err != nil {
+		if os.IsNotExist(err) {
+			return result, nil
+		}
+		return nil, fmt.Errorf("failed to open JSONL: %w", err)
+	}
+	defer f.Close()
+
+	existingIDs := make(map[string]bool)
+	parentRefs := make(map[string]string) // child ID -> parent ID
+
+	scanner := bufio.NewScanner(f)
+	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
+
+	for scanner.Scan() {
+		line := scanner.Bytes()
+		if len(bytes.TrimSpace(line)) == 0 {
+			continue
+		}
+
+		var issue struct {
+			ID     string `json:"id"`
+			Parent string `json:"parent,omitempty"`
+			Status string `json:"status"`
+		}
+		if err := json.Unmarshal(line, &issue); err != nil {
+			continue
+		}
+
+		// Skip tombstones
+		if issue.Status == string(types.StatusTombstone) {
+			continue
+		}
+
+		existingIDs[issue.ID] = true
+		if issue.Parent != "" {
+			parentRefs[issue.ID] = issue.Parent
+		}
+	}
+
+	if err := scanner.Err(); err != nil {
+		return nil, fmt.Errorf("failed to read JSONL: %w", err)
+	}
+
+	// Find orphaned children (parent doesn't exist)
+	for childID, parentID := range parentRefs {
+		if !existingIDs[parentID] {
+			result.OrphanedIDs = append(result.OrphanedIDs, fmt.Sprintf("%s (parent: %s)", childID, parentID))
+		}
+	}
+
+	result.Count = len(result.OrphanedIDs)
+	return result, nil
+}
+
+// runGitCmdWithTimeoutMsg runs a git command and prints a helpful message if it takes too long.
+// This helps when git operations hang waiting for credential/browser auth.
+func runGitCmdWithTimeoutMsg(ctx context.Context, cmd *exec.Cmd, cmdName string, timeoutDelay time.Duration) ([]byte, error) {
+	// Use done channel to cleanly exit goroutine when command completes
+	done := make(chan struct{})
+	go func() {
+		select {
+		case <-time.After(timeoutDelay):
+			fmt.Fprintf(os.Stderr, "⏳ %s is taking longer than expected (possibly waiting for authentication). If this hangs, check for a browser auth prompt or run 'git status' in another terminal.\n", cmdName)
+		case <-done:
+			// Command completed, exit cleanly
+		case <-ctx.Done():
+			// Context canceled, don't print message
+		}
+	}()
+
+	output, err := cmd.CombinedOutput()
+	close(done)
+	return output, err
+}
+
+func printOrphanedChildrenResult(oc *OrphanedChildren) {
+	fmt.Println("3. Orphaned Children Check")
+	if oc == nil {
+		fmt.Println("   [SKIP] Could not check orphaned children")
+		fmt.Println()
+		return
+	}
+
+	if oc.Count > 0 {
+		fmt.Printf("   [PROBLEM] Found %d issue(s) with missing parent:\n", oc.Count)
+		limit := oc.Count
+		if limit > 10 {
+			limit = 10
+		}
+		for i := 0; i < limit; i++ {
+			fmt.Printf("     - %s\n", oc.OrphanedIDs[i])
+		}
+		if oc.Count > 10 {
+			fmt.Printf("     ... and %d more\n", oc.Count-10)
+		}
+	} else {
+		fmt.Println("   [OK] No orphaned children found")
+	}
+	fmt.Println()
+}
diff --git a/cmd/bd/sync_branch.go b/cmd/bd/sync_branch.go
deleted file mode 100644
index db4afb19..00000000
--- a/cmd/bd/sync_branch.go
+++ /dev/null
@@ -1,285 +0,0 @@
-package main
-
-import (
-	"context"
-	"fmt"
-	"os/exec"
-	"path/filepath"
-	"strings"
-	"time"
-
-	"github.com/steveyegge/beads/internal/syncbranch"
-)
-
-// getCurrentBranch returns the name of the current git branch
-// Uses symbolic-ref instead of rev-parse to work in fresh repos without commits (bd-flil)
-func getCurrentBranch(ctx context.Context) (string, error) {
-	cmd := exec.CommandContext(ctx, "git", "symbolic-ref", "--short", "HEAD")
-	output, err := cmd.Output()
-	if err != nil {
-		return "", fmt.Errorf("failed to get current branch: %w", err)
-	}
-	return strings.TrimSpace(string(output)), nil
-}
-
-// getSyncBranch returns the configured sync branch name
-func getSyncBranch(ctx context.Context) (string, error) {
-	// Ensure store is initialized
-	if err := ensureStoreActive(); err != nil {
-		return "", fmt.Errorf("failed to initialize store: %w", err)
-	}
-
-	syncBranch, err := syncbranch.Get(ctx, store)
-	if err != nil {
-		return "", fmt.Errorf("failed to get sync branch config: %w", err)
-	}
-
-	if syncBranch == "" {
-		return "", fmt.Errorf("sync.branch not configured (run 'bd config set sync.branch ')")
-	}
-
-	return syncBranch, nil
-}
-
-// showSyncStatus shows the diff between sync branch and main branch
-func showSyncStatus(ctx context.Context) error {
-	if !isGitRepo() {
-		return fmt.Errorf("not in a git repository")
-	}
-
-	currentBranch, err := getCurrentBranch(ctx)
-	if err != nil {
-		return err
-	}
-
-	syncBranch, err := getSyncBranch(ctx)
-	if err != nil {
-		return err
-	}
-
-	// Check if sync branch exists
-	checkCmd := exec.CommandContext(ctx, "git", "show-ref", "--verify", "--quiet", "refs/heads/"+syncBranch)
-	if err := checkCmd.Run(); err != nil {
return fmt.Errorf("sync branch '%s' does not exist", syncBranch) - } - - fmt.Printf("Current branch: %s\n", currentBranch) - fmt.Printf("Sync branch: %s\n\n", syncBranch) - - // Show commit diff - fmt.Println("Commits in sync branch not in main:") - logCmd := exec.CommandContext(ctx, "git", "log", "--oneline", currentBranch+".."+syncBranch) - logOutput, err := logCmd.CombinedOutput() - if err != nil { - return fmt.Errorf("failed to get commit log: %w\n%s", err, logOutput) - } - - if len(strings.TrimSpace(string(logOutput))) == 0 { - fmt.Println(" (none)") - } else { - fmt.Print(string(logOutput)) - } - - fmt.Println("\nCommits in main not in sync branch:") - logCmd = exec.CommandContext(ctx, "git", "log", "--oneline", syncBranch+".."+currentBranch) - logOutput, err = logCmd.CombinedOutput() - if err != nil { - return fmt.Errorf("failed to get commit log: %w\n%s", err, logOutput) - } - - if len(strings.TrimSpace(string(logOutput))) == 0 { - fmt.Println(" (none)") - } else { - fmt.Print(string(logOutput)) - } - - // Show file diff for .beads/issues.jsonl - fmt.Println("\nFile differences in .beads/issues.jsonl:") - diffCmd := exec.CommandContext(ctx, "git", "diff", currentBranch+"..."+syncBranch, "--", ".beads/issues.jsonl") - diffOutput, err := diffCmd.CombinedOutput() - if err != nil { - // diff returns non-zero when there are differences, which is fine - if len(diffOutput) == 0 { - return fmt.Errorf("failed to get diff: %w", err) - } - } - - if len(strings.TrimSpace(string(diffOutput))) == 0 { - fmt.Println(" (no differences)") - } else { - fmt.Print(string(diffOutput)) - } - - return nil -} - -// mergeSyncBranch merges the sync branch back to the main branch -func mergeSyncBranch(ctx context.Context, dryRun bool) error { - if !isGitRepo() { - return fmt.Errorf("not in a git repository") - } - - currentBranch, err := getCurrentBranch(ctx) - if err != nil { - return err - } - - syncBranch, err := getSyncBranch(ctx) - if err != nil { - return err - } - - // Check if 
sync branch exists - checkCmd := exec.CommandContext(ctx, "git", "show-ref", "--verify", "--quiet", "refs/heads/"+syncBranch) - if err := checkCmd.Run(); err != nil { - return fmt.Errorf("sync branch '%s' does not exist", syncBranch) - } - - // Check if there are uncommitted changes - statusCmd := exec.CommandContext(ctx, "git", "status", "--porcelain") - statusOutput, err := statusCmd.Output() - if err != nil { - return fmt.Errorf("failed to check git status: %w", err) - } - if len(strings.TrimSpace(string(statusOutput))) > 0 { - return fmt.Errorf("uncommitted changes detected - commit or stash them first") - } - - fmt.Printf("Merging sync branch '%s' into '%s'...\n", syncBranch, currentBranch) - - if dryRun { - fmt.Println("β†’ [DRY RUN] Would merge sync branch") - // Show what would be merged - logCmd := exec.CommandContext(ctx, "git", "log", "--oneline", currentBranch+".."+syncBranch) - logOutput, _ := logCmd.CombinedOutput() - if len(strings.TrimSpace(string(logOutput))) > 0 { - fmt.Println("\nCommits that would be merged:") - fmt.Print(string(logOutput)) - } else { - fmt.Println("No commits to merge") - } - return nil - } - - // Perform the merge - mergeCmd := exec.CommandContext(ctx, "git", "merge", syncBranch, "-m", fmt.Sprintf("Merge sync branch '%s'", syncBranch)) - mergeOutput, err := mergeCmd.CombinedOutput() - if err != nil { - return fmt.Errorf("merge failed: %w\n%s", err, mergeOutput) - } - - fmt.Print(string(mergeOutput)) - fmt.Println("\nβœ“ Merge complete") - - // Suggest next steps - fmt.Println("\nNext steps:") - fmt.Println("1. Review the merged changes") - fmt.Println("2. Run 'bd sync --import-only' to sync the database with merged JSONL") - fmt.Println("3. Run 'bd sync' to push changes to remote") - - return nil -} - -// isExternalBeadsDir checks if the beads directory is in a different git repo than cwd. -// This is used to detect when BEADS_DIR points to a separate repository. 
-// Contributed by dand-oss (https://github.com/steveyegge/beads/pull/533) -func isExternalBeadsDir(ctx context.Context, beadsDir string) bool { - // Get repo root of cwd - cwdRepoRoot, err := syncbranch.GetRepoRoot(ctx) - if err != nil { - return false // Can't determine, assume local - } - - // Get repo root of beads dir - beadsRepoRoot, err := getRepoRootFromPath(ctx, beadsDir) - if err != nil { - return false // Can't determine, assume local - } - - return cwdRepoRoot != beadsRepoRoot -} - -// getRepoRootFromPath returns the git repository root for a given path. -// Unlike syncbranch.GetRepoRoot which uses cwd, this allows getting the repo root -// for any path. -// Contributed by dand-oss (https://github.com/steveyegge/beads/pull/533) -func getRepoRootFromPath(ctx context.Context, path string) (string, error) { - cmd := exec.CommandContext(ctx, "git", "-C", path, "rev-parse", "--show-toplevel") - output, err := cmd.Output() - if err != nil { - return "", fmt.Errorf("failed to get git root for %s: %w", path, err) - } - return strings.TrimSpace(string(output)), nil -} - -// commitToExternalBeadsRepo commits changes directly to an external beads repo. -// Used when BEADS_DIR points to a different git repository than cwd. -// This bypasses the worktree-based sync which fails when beads dir is external. 
-// Contributed by dand-oss (https://github.com/steveyegge/beads/pull/533) -func commitToExternalBeadsRepo(ctx context.Context, beadsDir, message string, push bool) (bool, error) { - repoRoot, err := getRepoRootFromPath(ctx, beadsDir) - if err != nil { - return false, fmt.Errorf("failed to get repo root: %w", err) - } - - // Stage beads files (use relative path from repo root) - relBeadsDir, err := filepath.Rel(repoRoot, beadsDir) - if err != nil { - relBeadsDir = beadsDir // Fallback to absolute path - } - - addCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "add", relBeadsDir) - if output, err := addCmd.CombinedOutput(); err != nil { - return false, fmt.Errorf("git add failed: %w\n%s", err, output) - } - - // Check if there are staged changes - diffCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "diff", "--cached", "--quiet") - if diffCmd.Run() == nil { - return false, nil // No changes to commit - } - - // Commit with config-based author and signing options - if message == "" { - message = fmt.Sprintf("bd sync: %s", time.Now().Format("2006-01-02 15:04:05")) - } - commitArgs := buildGitCommitArgs(repoRoot, message) - commitCmd := exec.CommandContext(ctx, "git", commitArgs...) - if output, err := commitCmd.CombinedOutput(); err != nil { - return false, fmt.Errorf("git commit failed: %w\n%s", err, output) - } - - // Push if requested - if push { - pushCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "push") - if pushOutput, err := runGitCmdWithTimeoutMsg(ctx, pushCmd, "git push", 5*time.Second); err != nil { - return true, fmt.Errorf("git push failed: %w\n%s", err, pushOutput) - } - } - - return true, nil -} - -// pullFromExternalBeadsRepo pulls changes in an external beads repo. -// Used when BEADS_DIR points to a different git repository than cwd. 
-// Contributed by dand-oss (https://github.com/steveyegge/beads/pull/533) -func pullFromExternalBeadsRepo(ctx context.Context, beadsDir string) error { - repoRoot, err := getRepoRootFromPath(ctx, beadsDir) - if err != nil { - return fmt.Errorf("failed to get repo root: %w", err) - } - - // Check if remote exists - remoteCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "remote") - remoteOutput, err := remoteCmd.Output() - if err != nil || len(strings.TrimSpace(string(remoteOutput))) == 0 { - return nil // No remote, skip pull - } - - pullCmd := exec.CommandContext(ctx, "git", "-C", repoRoot, "pull") - if output, err := pullCmd.CombinedOutput(); err != nil { - return fmt.Errorf("git pull failed: %w\n%s", err, output) - } - - return nil -} diff --git a/cmd/bd/sync_check.go b/cmd/bd/sync_check.go deleted file mode 100644 index 75b36fd0..00000000 --- a/cmd/bd/sync_check.go +++ /dev/null @@ -1,395 +0,0 @@ -package main - -import ( - "bufio" - "bytes" - "context" - "encoding/json" - "fmt" - "os" - "os/exec" - "strings" - "time" - - "github.com/steveyegge/beads/internal/syncbranch" - "github.com/steveyegge/beads/internal/types" -) - -// SyncIntegrityResult contains the results of a pre-sync integrity check. -// bd-hlsw.1: Pre-sync integrity check -type SyncIntegrityResult struct { - ForcedPush *ForcedPushCheck `json:"forced_push,omitempty"` - PrefixMismatch *PrefixMismatch `json:"prefix_mismatch,omitempty"` - OrphanedChildren *OrphanedChildren `json:"orphaned_children,omitempty"` - HasProblems bool `json:"has_problems"` -} - -// ForcedPushCheck detects if sync branch has diverged from remote. -type ForcedPushCheck struct { - Detected bool `json:"detected"` - LocalRef string `json:"local_ref,omitempty"` - RemoteRef string `json:"remote_ref,omitempty"` - Message string `json:"message"` -} - -// PrefixMismatch detects issues with wrong prefix in JSONL. 
-type PrefixMismatch struct { - ConfiguredPrefix string `json:"configured_prefix"` - MismatchedIDs []string `json:"mismatched_ids,omitempty"` - Count int `json:"count"` -} - -// OrphanedChildren detects issues with parent that doesn't exist. -type OrphanedChildren struct { - OrphanedIDs []string `json:"orphaned_ids,omitempty"` - Count int `json:"count"` -} - -// showSyncIntegrityCheck performs pre-sync integrity checks without modifying state. -// bd-hlsw.1: Detects forced pushes, prefix mismatches, and orphaned children. -// Exits with code 1 if problems are detected. -func showSyncIntegrityCheck(ctx context.Context, jsonlPath string) { - fmt.Println("Sync Integrity Check") - fmt.Println("====================") - - result := &SyncIntegrityResult{} - - // Check 1: Detect forced pushes on sync branch - forcedPush := checkForcedPush(ctx) - result.ForcedPush = forcedPush - if forcedPush.Detected { - result.HasProblems = true - } - printForcedPushResult(forcedPush) - - // Check 2: Detect prefix mismatches in JSONL - prefixMismatch, err := checkPrefixMismatch(ctx, jsonlPath) - if err != nil { - fmt.Fprintf(os.Stderr, "Warning: prefix check failed: %v\n", err) - } else { - result.PrefixMismatch = prefixMismatch - if prefixMismatch != nil && prefixMismatch.Count > 0 { - result.HasProblems = true - } - printPrefixMismatchResult(prefixMismatch) - } - - // Check 3: Detect orphaned children (parent issues that don't exist) - orphaned, err := checkOrphanedChildrenInJSONL(jsonlPath) - if err != nil { - fmt.Fprintf(os.Stderr, "Warning: orphaned check failed: %v\n", err) - } else { - result.OrphanedChildren = orphaned - if orphaned != nil && orphaned.Count > 0 { - result.HasProblems = true - } - printOrphanedChildrenResult(orphaned) - } - - // Summary - fmt.Println("\nSummary") - fmt.Println("-------") - if result.HasProblems { - fmt.Println("Problems detected! 
Review above and consider:") - if result.ForcedPush != nil && result.ForcedPush.Detected { - fmt.Println(" - Force push: Reset local sync branch or use 'bd sync --from-main'") - } - if result.PrefixMismatch != nil && result.PrefixMismatch.Count > 0 { - fmt.Println(" - Prefix mismatch: Use 'bd import --rename-on-import' to fix") - } - if result.OrphanedChildren != nil && result.OrphanedChildren.Count > 0 { - fmt.Println(" - Orphaned children: Remove parent references or create missing parents") - } - os.Exit(1) - } else { - fmt.Println("No problems detected. Safe to sync.") - } - - if jsonOutput { - data, _ := json.MarshalIndent(result, "", " ") - fmt.Println(string(data)) - } -} - -// checkForcedPush detects if the sync branch has diverged from remote. -// This can happen when someone force-pushes to the sync branch. -func checkForcedPush(ctx context.Context) *ForcedPushCheck { - result := &ForcedPushCheck{ - Detected: false, - Message: "No sync branch configured or no remote", - } - - // Get sync branch name - if err := ensureStoreActive(); err != nil { - return result - } - - syncBranch, _ := syncbranch.Get(ctx, store) - if syncBranch == "" { - return result - } - - // Check if sync branch exists locally - checkLocalCmd := exec.CommandContext(ctx, "git", "show-ref", "--verify", "--quiet", "refs/heads/"+syncBranch) - if checkLocalCmd.Run() != nil { - result.Message = fmt.Sprintf("Sync branch '%s' does not exist locally", syncBranch) - return result - } - - // Get local ref - localRefCmd := exec.CommandContext(ctx, "git", "rev-parse", syncBranch) - localRefOutput, err := localRefCmd.Output() - if err != nil { - result.Message = "Failed to get local sync branch ref" - return result - } - localRef := strings.TrimSpace(string(localRefOutput)) - result.LocalRef = localRef - - // Check if remote tracking branch exists - remote := "origin" - if configuredRemote, err := store.GetConfig(ctx, "sync.remote"); err == nil && configuredRemote != "" { - remote = configuredRemote 
- } - - // Get remote ref - remoteRefCmd := exec.CommandContext(ctx, "git", "rev-parse", remote+"/"+syncBranch) - remoteRefOutput, err := remoteRefCmd.Output() - if err != nil { - result.Message = fmt.Sprintf("Remote tracking branch '%s/%s' does not exist", remote, syncBranch) - return result - } - remoteRef := strings.TrimSpace(string(remoteRefOutput)) - result.RemoteRef = remoteRef - - // If refs match, no divergence - if localRef == remoteRef { - result.Message = "Sync branch is in sync with remote" - return result - } - - // Check if local is ahead of remote (normal case) - aheadCmd := exec.CommandContext(ctx, "git", "merge-base", "--is-ancestor", remoteRef, localRef) - if aheadCmd.Run() == nil { - result.Message = "Local sync branch is ahead of remote (normal)" - return result - } - - // Check if remote is ahead of local (behind, needs pull) - behindCmd := exec.CommandContext(ctx, "git", "merge-base", "--is-ancestor", localRef, remoteRef) - if behindCmd.Run() == nil { - result.Message = "Local sync branch is behind remote (needs pull)" - return result - } - - // If neither is ancestor, branches have diverged - likely a force push - result.Detected = true - result.Message = fmt.Sprintf("Sync branch has DIVERGED from remote! Local: %s, Remote: %s. This may indicate a force push on the remote.", localRef[:8], remoteRef[:8]) - - return result -} - -func printForcedPushResult(fp *ForcedPushCheck) { - fmt.Println("1. Force Push Detection") - if fp.Detected { - fmt.Printf(" [PROBLEM] %s\n", fp.Message) - } else { - fmt.Printf(" [OK] %s\n", fp.Message) - } - fmt.Println() -} - -// checkPrefixMismatch detects issues in JSONL that don't match the configured prefix. 
-func checkPrefixMismatch(ctx context.Context, jsonlPath string) (*PrefixMismatch, error) { - result := &PrefixMismatch{ - MismatchedIDs: []string{}, - } - - // Get configured prefix - if err := ensureStoreActive(); err != nil { - return nil, err - } - - prefix, err := store.GetConfig(ctx, "issue_prefix") - if err != nil || prefix == "" { - prefix = "bd" // Default - } - result.ConfiguredPrefix = prefix - - // Read JSONL and check each issue's prefix - f, err := os.Open(jsonlPath) // #nosec G304 - controlled path - if err != nil { - if os.IsNotExist(err) { - return result, nil // No JSONL, no mismatches - } - return nil, fmt.Errorf("failed to open JSONL: %w", err) - } - defer f.Close() - - scanner := bufio.NewScanner(f) - scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) - - for scanner.Scan() { - line := scanner.Bytes() - if len(bytes.TrimSpace(line)) == 0 { - continue - } - - var issue struct { - ID string `json:"id"` - } - if err := json.Unmarshal(line, &issue); err != nil { - continue // Skip malformed lines - } - - // Check if ID starts with configured prefix - if !strings.HasPrefix(issue.ID, prefix+"-") { - result.MismatchedIDs = append(result.MismatchedIDs, issue.ID) - } - } - - if err := scanner.Err(); err != nil { - return nil, fmt.Errorf("failed to read JSONL: %w", err) - } - - result.Count = len(result.MismatchedIDs) - return result, nil -} - -func printPrefixMismatchResult(pm *PrefixMismatch) { - fmt.Println("2. Prefix Mismatch Check") - if pm == nil { - fmt.Println(" [SKIP] Could not check prefix") - fmt.Println() - return - } - - fmt.Printf(" Configured prefix: %s\n", pm.ConfiguredPrefix) - if pm.Count > 0 { - fmt.Printf(" [PROBLEM] Found %d issue(s) with wrong prefix:\n", pm.Count) - // Show first 10 - limit := pm.Count - if limit > 10 { - limit = 10 - } - for i := 0; i < limit; i++ { - fmt.Printf(" - %s\n", pm.MismatchedIDs[i]) - } - if pm.Count > 10 { - fmt.Printf(" ... 
and %d more\n", pm.Count-10) - } - } else { - fmt.Println(" [OK] All issues have correct prefix") - } - fmt.Println() -} - -// checkOrphanedChildrenInJSONL detects issues with parent references to non-existent issues. -func checkOrphanedChildrenInJSONL(jsonlPath string) (*OrphanedChildren, error) { - result := &OrphanedChildren{ - OrphanedIDs: []string{}, - } - - // Read JSONL and build maps of IDs and parent references - f, err := os.Open(jsonlPath) // #nosec G304 - controlled path - if err != nil { - if os.IsNotExist(err) { - return result, nil - } - return nil, fmt.Errorf("failed to open JSONL: %w", err) - } - defer f.Close() - - existingIDs := make(map[string]bool) - parentRefs := make(map[string]string) // child ID -> parent ID - - scanner := bufio.NewScanner(f) - scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) - - for scanner.Scan() { - line := scanner.Bytes() - if len(bytes.TrimSpace(line)) == 0 { - continue - } - - var issue struct { - ID string `json:"id"` - Parent string `json:"parent,omitempty"` - Status string `json:"status"` - } - if err := json.Unmarshal(line, &issue); err != nil { - continue - } - - // Skip tombstones - if issue.Status == string(types.StatusTombstone) { - continue - } - - existingIDs[issue.ID] = true - if issue.Parent != "" { - parentRefs[issue.ID] = issue.Parent - } - } - - if err := scanner.Err(); err != nil { - return nil, fmt.Errorf("failed to read JSONL: %w", err) - } - - // Find orphaned children (parent doesn't exist) - for childID, parentID := range parentRefs { - if !existingIDs[parentID] { - result.OrphanedIDs = append(result.OrphanedIDs, fmt.Sprintf("%s (parent: %s)", childID, parentID)) - } - } - - result.Count = len(result.OrphanedIDs) - return result, nil -} - -// runGitCmdWithTimeoutMsg runs a git command and prints a helpful message if it takes too long. -// This helps when git operations hang waiting for credential/browser auth. 
-func runGitCmdWithTimeoutMsg(ctx context.Context, cmd *exec.Cmd, cmdName string, timeoutDelay time.Duration) ([]byte, error) { - // Use done channel to cleanly exit goroutine when command completes - done := make(chan struct{}) - go func() { - select { - case <-time.After(timeoutDelay): - fmt.Fprintf(os.Stderr, "⏳ %s is taking longer than expected (possibly waiting for authentication). If this hangs, check for a browser auth prompt or run 'git status' in another terminal.\n", cmdName) - case <-done: - // Command completed, exit cleanly - case <-ctx.Done(): - // Context canceled, don't print message - } - }() - - output, err := cmd.CombinedOutput() - close(done) - return output, err -} - -func printOrphanedChildrenResult(oc *OrphanedChildren) { - fmt.Println("3. Orphaned Children Check") - if oc == nil { - fmt.Println(" [SKIP] Could not check orphaned children") - fmt.Println() - return - } - - if oc.Count > 0 { - fmt.Printf(" [PROBLEM] Found %d issue(s) with missing parent:\n", oc.Count) - limit := oc.Count - if limit > 10 { - limit = 10 - } - for i := 0; i < limit; i++ { - fmt.Printf(" - %s\n", oc.OrphanedIDs[i]) - } - if oc.Count > 10 { - fmt.Printf(" ... 
and %d more\n", oc.Count-10) - } - } else { - fmt.Println(" [OK] No orphaned children found") - } - fmt.Println() -} diff --git a/cmd/bd/sync_export.go b/cmd/bd/sync_export.go deleted file mode 100644 index 26a6ebb7..00000000 --- a/cmd/bd/sync_export.go +++ /dev/null @@ -1,170 +0,0 @@ -package main - -import ( - "cmp" - "context" - "encoding/json" - "fmt" - "os" - "path/filepath" - "slices" - "time" - - "github.com/steveyegge/beads/internal/rpc" - "github.com/steveyegge/beads/internal/types" -) - -// exportToJSONL exports the database to JSONL format -func exportToJSONL(ctx context.Context, jsonlPath string) error { - // If daemon is running, use RPC - if daemonClient != nil { - exportArgs := &rpc.ExportArgs{ - JSONLPath: jsonlPath, - } - resp, err := daemonClient.Export(exportArgs) - if err != nil { - return fmt.Errorf("daemon export failed: %w", err) - } - if !resp.Success { - return fmt.Errorf("daemon export error: %s", resp.Error) - } - return nil - } - - // Direct mode: access store directly - // Ensure store is initialized - if err := ensureStoreActive(); err != nil { - return fmt.Errorf("failed to initialize store: %w", err) - } - - // Get all issues including tombstones for sync propagation (bd-rp4o fix) - // Tombstones must be exported so they propagate to other clones and prevent resurrection - issues, err := store.SearchIssues(ctx, "", types.IssueFilter{IncludeTombstones: true}) - if err != nil { - return fmt.Errorf("failed to get issues: %w", err) - } - - // Safety check: prevent exporting empty database over non-empty JSONL - // Note: The main bd-53c protection is the reverse ZFC check earlier in sync.go - // which runs BEFORE export. Here we only block the most catastrophic case (empty DB) - // to allow legitimate deletions. 
- if len(issues) == 0 { - existingCount, countErr := countIssuesInJSONL(jsonlPath) - if countErr != nil { - // If we can't read the file, it might not exist yet, which is fine - if !os.IsNotExist(countErr) { - fmt.Fprintf(os.Stderr, "Warning: failed to read existing JSONL: %v\n", countErr) - } - } else if existingCount > 0 { - return fmt.Errorf("refusing to export empty database over non-empty JSONL file (database: 0 issues, JSONL: %d issues)", existingCount) - } - } - - // Sort by ID for consistent output - slices.SortFunc(issues, func(a, b *types.Issue) int { - return cmp.Compare(a.ID, b.ID) - }) - - // Populate dependencies for all issues (avoid N+1) - allDeps, err := store.GetAllDependencyRecords(ctx) - if err != nil { - return fmt.Errorf("failed to get dependencies: %w", err) - } - for _, issue := range issues { - issue.Dependencies = allDeps[issue.ID] - } - - // Populate labels for all issues - for _, issue := range issues { - labels, err := store.GetLabels(ctx, issue.ID) - if err != nil { - return fmt.Errorf("failed to get labels for %s: %w", issue.ID, err) - } - issue.Labels = labels - } - - // Populate comments for all issues - for _, issue := range issues { - comments, err := store.GetIssueComments(ctx, issue.ID) - if err != nil { - return fmt.Errorf("failed to get comments for %s: %w", issue.ID, err) - } - issue.Comments = comments - } - - // Create temp file for atomic write - dir := filepath.Dir(jsonlPath) - base := filepath.Base(jsonlPath) - tempFile, err := os.CreateTemp(dir, base+".tmp.*") - if err != nil { - return fmt.Errorf("failed to create temp file: %w", err) - } - tempPath := tempFile.Name() - defer func() { - _ = tempFile.Close() - _ = os.Remove(tempPath) - }() - - // Write JSONL - encoder := json.NewEncoder(tempFile) - exportedIDs := make([]string, 0, len(issues)) - for _, issue := range issues { - if err := encoder.Encode(issue); err != nil { - return fmt.Errorf("failed to encode issue %s: %w", issue.ID, err) - } - exportedIDs = 
append(exportedIDs, issue.ID) - } - - // Close temp file before rename (error checked implicitly by Rename success) - _ = tempFile.Close() - - // Atomic replace - if err := os.Rename(tempPath, jsonlPath); err != nil { - return fmt.Errorf("failed to replace JSONL file: %w", err) - } - - // Set appropriate file permissions (0600: rw-------) - if err := os.Chmod(jsonlPath, 0600); err != nil { - // Non-fatal warning - fmt.Fprintf(os.Stderr, "Warning: failed to set file permissions: %v\n", err) - } - - // Clear dirty flags for exported issues - if err := store.ClearDirtyIssuesByID(ctx, exportedIDs); err != nil { - // Non-fatal warning - fmt.Fprintf(os.Stderr, "Warning: failed to clear dirty flags: %v\n", err) - } - - // Clear auto-flush state - clearAutoFlushState() - - // Update jsonl_content_hash metadata to enable content-based staleness detection (bd-khnb fix) - // After export, database and JSONL are in sync, so update hash to prevent unnecessary auto-import - // Renamed from last_import_hash (bd-39o) - more accurate since updated on both import AND export - if currentHash, err := computeJSONLHash(jsonlPath); err == nil { - if err := store.SetMetadata(ctx, "jsonl_content_hash", currentHash); err != nil { - // Non-fatal warning: Metadata update failures are intentionally non-fatal to prevent blocking - // successful exports. System degrades gracefully to mtime-based staleness detection if metadata - // is unavailable. This ensures export operations always succeed even if metadata storage fails. 
- fmt.Fprintf(os.Stderr, "Warning: failed to update jsonl_content_hash: %v\n", err) - } - // Use RFC3339Nano for nanosecond precision to avoid race with file mtime (fixes #399) - exportTime := time.Now().Format(time.RFC3339Nano) - if err := store.SetMetadata(ctx, "last_import_time", exportTime); err != nil { - // Non-fatal warning (see above comment about graceful degradation) - fmt.Fprintf(os.Stderr, "Warning: failed to update last_import_time: %v\n", err) - } - // Note: mtime tracking removed in bd-v0y fix (git doesn't preserve mtime) - } - - // Update database mtime to be >= JSONL mtime (fixes #278, #301, #321) - // This prevents validatePreExport from incorrectly blocking on next export - beadsDir := filepath.Dir(jsonlPath) - dbPath := filepath.Join(beadsDir, "beads.db") - if err := TouchDatabaseFile(dbPath, jsonlPath); err != nil { - // Non-fatal warning - fmt.Fprintf(os.Stderr, "Warning: failed to update database mtime: %v\n", err) - } - - return nil -} diff --git a/cmd/bd/sync_import.go b/cmd/bd/sync_import.go deleted file mode 100644 index 98de5a62..00000000 --- a/cmd/bd/sync_import.go +++ /dev/null @@ -1,132 +0,0 @@ -package main - -import ( - "context" - "fmt" - "os" - "os/exec" -) - -// importFromJSONL imports the JSONL file by running the import command -// Optional parameters: noGitHistory, protectLeftSnapshot (bd-sync-deletion fix) -func importFromJSONL(ctx context.Context, jsonlPath string, renameOnImport bool, opts ...bool) error { - // Get current executable path to avoid "./bd" path issues - exe, err := os.Executable() - if err != nil { - return fmt.Errorf("cannot resolve current executable: %w", err) - } - - // Parse optional parameters - noGitHistory := false - protectLeftSnapshot := false - if len(opts) > 0 { - noGitHistory = opts[0] - } - if len(opts) > 1 { - protectLeftSnapshot = opts[1] - } - - // Build args for import command - // Use --no-daemon to ensure subprocess uses direct mode, avoiding daemon connection issues - args := 
[]string{"--no-daemon", "import", "-i", jsonlPath} - if renameOnImport { - args = append(args, "--rename-on-import") - } - if noGitHistory { - args = append(args, "--no-git-history") - } - // Add --protect-left-snapshot flag for post-pull imports (bd-sync-deletion fix) - if protectLeftSnapshot { - args = append(args, "--protect-left-snapshot") - } - - // Run import command - cmd := exec.CommandContext(ctx, exe, args...) // #nosec G204 - bd import command from trusted binary - output, err := cmd.CombinedOutput() - if err != nil { - return fmt.Errorf("import failed: %w\n%s", err, output) - } - - // Show output (import command provides the summary) - if len(output) > 0 { - fmt.Print(string(output)) - } - - return nil -} - -// resolveNoGitHistoryForFromMain returns the resolved noGitHistory value for sync operations. -// When syncing from main (--from-main), noGitHistory is forced to true to prevent creating -// incorrect deletion records for locally-created beads that don't exist on main. -// See: https://github.com/steveyegge/beads/issues/417 -func resolveNoGitHistoryForFromMain(fromMain, noGitHistory bool) bool { - if fromMain { - return true - } - return noGitHistory -} - -// doSyncFromMain performs a one-way sync from the default branch (main/master) -// Used for ephemeral branches without upstream tracking (gt-ick9) -// This fetches beads from main and imports them, discarding local beads changes. -// If sync.remote is configured (e.g., "upstream" for fork workflows), uses that remote -// instead of "origin" (bd-bx9). 
-func doSyncFromMain(ctx context.Context, jsonlPath string, renameOnImport bool, dryRun bool, noGitHistory bool) error { - // Determine which remote to use (default: origin, but can be configured via sync.remote) - remote := "origin" - if err := ensureStoreActive(); err == nil && store != nil { - if configuredRemote, err := store.GetConfig(ctx, "sync.remote"); err == nil && configuredRemote != "" { - remote = configuredRemote - } - } - - if dryRun { - fmt.Println("β†’ [DRY RUN] Would sync beads from main branch") - fmt.Printf(" 1. Fetch %s main\n", remote) - fmt.Printf(" 2. Checkout .beads/ from %s/main\n", remote) - fmt.Println(" 3. Import JSONL into database") - fmt.Println("\nβœ“ Dry run complete (no changes made)") - return nil - } - - // Check if we're in a git repository - if !isGitRepo() { - return fmt.Errorf("not in a git repository") - } - - // Check if remote exists - if !hasGitRemote(ctx) { - return fmt.Errorf("no git remote configured") - } - - // Verify the configured remote exists - checkRemoteCmd := exec.CommandContext(ctx, "git", "remote", "get-url", remote) - if err := checkRemoteCmd.Run(); err != nil { - return fmt.Errorf("configured sync.remote '%s' does not exist (run 'git remote add %s ')", remote, remote) - } - - defaultBranch := getDefaultBranchForRemote(ctx, remote) - - // Step 1: Fetch from main - fmt.Printf("β†’ Fetching from %s/%s...\n", remote, defaultBranch) - fetchCmd := exec.CommandContext(ctx, "git", "fetch", remote, defaultBranch) - if output, err := fetchCmd.CombinedOutput(); err != nil { - return fmt.Errorf("git fetch %s %s failed: %w\n%s", remote, defaultBranch, err, output) - } - - // Step 2: Checkout .beads/ directory from main - fmt.Printf("β†’ Checking out beads from %s/%s...\n", remote, defaultBranch) - checkoutCmd := exec.CommandContext(ctx, "git", "checkout", fmt.Sprintf("%s/%s", remote, defaultBranch), "--", ".beads/") - if output, err := checkoutCmd.CombinedOutput(); err != nil { - return fmt.Errorf("git checkout .beads/ 
from %s/%s failed: %w\n%s", remote, defaultBranch, err, output) - } - - // Step 3: Import JSONL - fmt.Println("β†’ Importing JSONL...") - if err := importFromJSONL(ctx, jsonlPath, renameOnImport, noGitHistory); err != nil { - return fmt.Errorf("import failed: %w", err) - } - - fmt.Println("\nβœ“ Sync from main complete") - return nil -} diff --git a/cmd/bd/testdata/close_resolution_alias.txt b/cmd/bd/testdata/close_resolution_alias.txt new file mode 100644 index 00000000..fe48f164 --- /dev/null +++ b/cmd/bd/testdata/close_resolution_alias.txt @@ -0,0 +1,16 @@ +# Test bd close --resolution alias (GH#721) +# Jira CLI convention: --resolution instead of --reason +bd init --prefix test + +# Create issue +bd create 'Issue to close with resolution' +cp stdout issue.txt +exec sh -c 'grep -oE "test-[a-z0-9]+" issue.txt > issue_id.txt' + +# Close using --resolution alias +exec sh -c 'bd close $(cat issue_id.txt) --resolution "Fixed via resolution alias"' +stdout 'Closed test-' + +# Verify close_reason is set correctly +exec sh -c 'bd show $(cat issue_id.txt) --json' +stdout 'Fixed via resolution alias' diff --git a/docs/CONFIG.md b/docs/CONFIG.md index 292143ee..ad958142 100644 --- a/docs/CONFIG.md +++ b/docs/CONFIG.md @@ -104,6 +104,73 @@ external_projects: gastown: /path/to/gastown ``` +### Hooks Configuration + +bd supports config-based hooks for automation and notifications. Currently, close hooks are implemented. + +#### Close Hooks + +Close hooks run after an issue is successfully closed via `bd close`. They execute synchronously but failures are logged as warnings and don't block the close operation. + +**Configuration:** + +```yaml +# .beads/config.yaml +hooks: + on_close: + - name: show-next + command: bd ready --limit 1 + - name: context-check + command: echo "Issue $BEAD_ID closed. Check context if nearing limit." 
+ - command: notify-team.sh # name is optional +``` + +**Environment Variables:** + +Hook commands receive issue data via environment variables: + +| Variable | Description | +|----------|-------------| +| `BEAD_ID` | Issue ID (e.g., `bd-abc1`) | +| `BEAD_TITLE` | Issue title | +| `BEAD_TYPE` | Issue type (`task`, `bug`, `feature`, etc.) | +| `BEAD_PRIORITY` | Priority (0-4) | +| `BEAD_CLOSE_REASON` | Close reason if provided | + +**Example Use Cases:** + +1. **Show next work item:** + ```yaml + hooks: + on_close: + - name: next-task + command: bd ready --limit 1 + ``` + +2. **Context check reminder:** + ```yaml + hooks: + on_close: + - name: context-check + command: | + echo "Issue $BEAD_ID ($BEAD_TITLE) closed." + echo "Priority was P$BEAD_PRIORITY. Reason: $BEAD_CLOSE_REASON" + ``` + +3. **Integration with external tools:** + ```yaml + hooks: + on_close: + - name: slack-notify + command: curl -X POST "$SLACK_WEBHOOK" -d "{\"text\":\"Closed: $BEAD_ID - $BEAD_TITLE\"}" + ``` + +**Notes:** +- Hooks have a 10-second timeout +- Hook failures log warnings but don't fail the close operation +- Commands run via `sh -c`, so shell features like pipes and redirects work +- Both script-based hooks (`.beads/hooks/on_close`) and config-based hooks run + ### Why Two Systems? 
**Tool settings (Viper)** are user preferences: diff --git a/internal/beads/beads_test.go b/internal/beads/beads_test.go index bcbd54d4..6a7df462 100644 --- a/internal/beads/beads_test.go +++ b/internal/beads/beads_test.go @@ -1427,6 +1427,237 @@ func TestIsWispDatabase(t *testing.T) { } } +// TestFindDatabaseInBeadsDir tests the database discovery within a .beads directory +func TestFindDatabaseInBeadsDir(t *testing.T) { + tests := []struct { + name string + files []string + configJSON string + expectDB string + warnOnIssues bool + }{ + { + name: "canonical beads.db only", + files: []string{"beads.db"}, + expectDB: "beads.db", + }, + { + name: "legacy bd.db only", + files: []string{"bd.db"}, + expectDB: "bd.db", + }, + { + name: "prefers beads.db over other db files", + files: []string{"custom.db", "beads.db", "other.db"}, + expectDB: "beads.db", + }, + { + name: "skips backup files", + files: []string{"beads.backup.db", "real.db"}, + expectDB: "real.db", + }, + { + name: "skips vc.db", + files: []string{"vc.db", "beads.db"}, + expectDB: "beads.db", + }, + { + name: "no db files returns empty", + files: []string{"readme.txt", "config.yaml"}, + expectDB: "", + }, + { + name: "only backup files returns empty", + files: []string{"beads.backup.db", "vc.db"}, + expectDB: "", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + tmpDir, err := os.MkdirTemp("", "beads-findindir-test-*") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + // Create test files + for _, file := range tt.files { + path := filepath.Join(tmpDir, file) + if err := os.WriteFile(path, []byte{}, 0644); err != nil { + t.Fatal(err) + } + } + + // Write config.json if specified + if tt.configJSON != "" { + configPath := filepath.Join(tmpDir, "config.json") + if err := os.WriteFile(configPath, []byte(tt.configJSON), 0644); err != nil { + t.Fatal(err) + } + } + + result := findDatabaseInBeadsDir(tmpDir, tt.warnOnIssues) + + if tt.expectDB == "" { + if result 
!= "" { + t.Errorf("findDatabaseInBeadsDir() = %q, want empty string", result) + } + } else { + expected := filepath.Join(tmpDir, tt.expectDB) + if result != expected { + t.Errorf("findDatabaseInBeadsDir() = %q, want %q", result, expected) + } + } + }) + } +} + +// TestFindAllDatabases tests the multi-database discovery +func TestFindAllDatabases(t *testing.T) { + // Save original state + originalEnv := os.Getenv("BEADS_DIR") + defer func() { + if originalEnv != "" { + os.Setenv("BEADS_DIR", originalEnv) + } else { + os.Unsetenv("BEADS_DIR") + } + }() + os.Unsetenv("BEADS_DIR") + + // Create temp directory structure + tmpDir, err := os.MkdirTemp("", "beads-findall-test-*") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + // Create .beads directory with database + beadsDir := filepath.Join(tmpDir, ".beads") + if err := os.MkdirAll(beadsDir, 0755); err != nil { + t.Fatal(err) + } + dbPath := filepath.Join(beadsDir, "beads.db") + if err := os.WriteFile(dbPath, []byte{}, 0644); err != nil { + t.Fatal(err) + } + + // Create subdirectory and change to it + subDir := filepath.Join(tmpDir, "sub", "nested") + if err := os.MkdirAll(subDir, 0755); err != nil { + t.Fatal(err) + } + + t.Chdir(subDir) + + // FindAllDatabases should find the parent .beads + result := FindAllDatabases() + + if len(result) == 0 { + t.Error("FindAllDatabases() returned empty slice, expected at least one database") + } else { + // Verify the path matches + resultResolved, _ := filepath.EvalSymlinks(result[0].Path) + dbPathResolved, _ := filepath.EvalSymlinks(dbPath) + if resultResolved != dbPathResolved { + t.Errorf("FindAllDatabases()[0].Path = %q, want %q", result[0].Path, dbPath) + } + } +} + +// TestFindAllDatabases_NoDatabase tests FindAllDatabases when no database exists +func TestFindAllDatabases_NoDatabase(t *testing.T) { + // Save original state + originalEnv := os.Getenv("BEADS_DIR") + defer func() { + if originalEnv != "" { + os.Setenv("BEADS_DIR", originalEnv) + } 
else { + os.Unsetenv("BEADS_DIR") + } + }() + os.Unsetenv("BEADS_DIR") + + // Create temp directory without .beads + tmpDir, err := os.MkdirTemp("", "beads-findall-nodb-*") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + t.Chdir(tmpDir) + + // FindAllDatabases should return empty slice (not nil) + result := FindAllDatabases() + + if result == nil { + t.Error("FindAllDatabases() returned nil, expected empty slice") + } + if len(result) != 0 { + t.Errorf("FindAllDatabases() returned %d databases, expected 0", len(result)) + } +} + +// TestFindAllDatabases_StopsAtFirst tests that FindAllDatabases stops at first .beads found +func TestFindAllDatabases_StopsAtFirst(t *testing.T) { + // Save original state + originalEnv := os.Getenv("BEADS_DIR") + defer func() { + if originalEnv != "" { + os.Setenv("BEADS_DIR", originalEnv) + } else { + os.Unsetenv("BEADS_DIR") + } + }() + os.Unsetenv("BEADS_DIR") + + // Create temp directory structure with nested .beads dirs + tmpDir, err := os.MkdirTemp("", "beads-findall-nested-*") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + // Create parent .beads + parentBeadsDir := filepath.Join(tmpDir, ".beads") + if err := os.MkdirAll(parentBeadsDir, 0755); err != nil { + t.Fatal(err) + } + if err := os.WriteFile(filepath.Join(parentBeadsDir, "beads.db"), []byte{}, 0644); err != nil { + t.Fatal(err) + } + + // Create child project with its own .beads + childDir := filepath.Join(tmpDir, "child") + childBeadsDir := filepath.Join(childDir, ".beads") + if err := os.MkdirAll(childBeadsDir, 0755); err != nil { + t.Fatal(err) + } + childDBPath := filepath.Join(childBeadsDir, "beads.db") + if err := os.WriteFile(childDBPath, []byte{}, 0644); err != nil { + t.Fatal(err) + } + + // Change to child directory + t.Chdir(childDir) + + // FindAllDatabases should return only the child's database (stops at first) + result := FindAllDatabases() + + if len(result) != 1 { + t.Errorf("FindAllDatabases() returned %d 
databases, expected 1 (should stop at first)", len(result)) + } + + if len(result) > 0 { + resultResolved, _ := filepath.EvalSymlinks(result[0].Path) + childDBResolved, _ := filepath.EvalSymlinks(childDBPath) + if resultResolved != childDBResolved { + t.Errorf("FindAllDatabases() found %q, expected child database %q", result[0].Path, childDBPath) + } + } +} + // TestEnsureWispGitignore tests that EnsureWispGitignore correctly // adds the wisp directory to .gitignore func TestEnsureWispGitignore(t *testing.T) { diff --git a/internal/beads/fingerprint_test.go b/internal/beads/fingerprint_test.go new file mode 100644 index 00000000..807b0357 --- /dev/null +++ b/internal/beads/fingerprint_test.go @@ -0,0 +1,507 @@ +package beads + +import ( + "os" + "os/exec" + "path/filepath" + "strings" + "testing" +) + +// TestCanonicalizeGitURL tests URL normalization for various git URL formats +func TestCanonicalizeGitURL(t *testing.T) { + tests := []struct { + name string + input string + expected string + }{ + // HTTPS URLs + { + name: "https basic", + input: "https://github.com/user/repo", + expected: "github.com/user/repo", + }, + { + name: "https with .git suffix", + input: "https://github.com/user/repo.git", + expected: "github.com/user/repo", + }, + { + name: "https with trailing slash", + input: "https://github.com/user/repo/", + expected: "github.com/user/repo", + }, + { + name: "https uppercase host", + input: "https://GitHub.COM/User/Repo.git", + expected: "github.com/User/Repo", + }, + { + name: "https with port 443", + input: "https://github.com:443/user/repo.git", + expected: "github.com/user/repo", + }, + { + name: "https with custom port", + input: "https://gitlab.company.com:8443/user/repo.git", + expected: "gitlab.company.com:8443/user/repo", + }, + + // SSH URLs (protocol style) + { + name: "ssh protocol basic", + input: "ssh://git@github.com/user/repo.git", + expected: "github.com/user/repo", + }, + { + name: "ssh with port 22", + input: 
"ssh://git@github.com:22/user/repo.git", + expected: "github.com/user/repo", + }, + { + name: "ssh with custom port", + input: "ssh://git@gitlab.company.com:2222/user/repo.git", + expected: "gitlab.company.com:2222/user/repo", + }, + + // SCP-style URLs (git@host:path) + { + name: "scp style basic", + input: "git@github.com:user/repo.git", + expected: "github.com/user/repo", + }, + { + name: "scp style without .git", + input: "git@github.com:user/repo", + expected: "github.com/user/repo", + }, + { + name: "scp style uppercase host", + input: "git@GITHUB.COM:User/Repo.git", + expected: "github.com/User/Repo", + }, + { + name: "scp style with trailing slash", + input: "git@github.com:user/repo/", + expected: "github.com/user/repo", + }, + { + name: "scp style deep path", + input: "git@gitlab.com:org/team/project/repo.git", + expected: "gitlab.com/org/team/project/repo", + }, + + // HTTP URLs (less common but valid) + { + name: "http basic", + input: "http://github.com/user/repo.git", + expected: "github.com/user/repo", + }, + { + name: "http with port 80", + input: "http://github.com:80/user/repo.git", + expected: "github.com/user/repo", + }, + + // Git protocol + { + name: "git protocol", + input: "git://github.com/user/repo.git", + expected: "github.com/user/repo", + }, + + // Whitespace handling + { + name: "with leading whitespace", + input: " https://github.com/user/repo.git", + expected: "github.com/user/repo", + }, + { + name: "with trailing whitespace", + input: "https://github.com/user/repo.git ", + expected: "github.com/user/repo", + }, + { + name: "with newline", + input: "https://github.com/user/repo.git\n", + expected: "github.com/user/repo", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result, err := canonicalizeGitURL(tt.input) + if err != nil { + t.Fatalf("canonicalizeGitURL(%q) error = %v", tt.input, err) + } + if result != tt.expected { + t.Errorf("canonicalizeGitURL(%q) = %q, want %q", tt.input, result, 
tt.expected) + } + }) + } +} + +// TestCanonicalizeGitURL_LocalPath tests that local paths are handled +func TestCanonicalizeGitURL_LocalPath(t *testing.T) { + // Create a temp directory to use as a "local path" + tmpDir := t.TempDir() + + // Local absolute path + result, err := canonicalizeGitURL(tmpDir) + if err != nil { + t.Fatalf("canonicalizeGitURL(%q) error = %v", tmpDir, err) + } + + // Should return a forward-slash path + if strings.Contains(result, "\\") { + t.Errorf("canonicalizeGitURL(%q) = %q, should use forward slashes", tmpDir, result) + } +} + +// TestCanonicalizeGitURL_WindowsPath tests Windows path detection +func TestCanonicalizeGitURL_WindowsPath(t *testing.T) { + // This tests the Windows path detection logic (C:/) + // The function should NOT treat "C:/foo/bar" as an scp-style URL + tests := []struct { + input string + expected string + }{ + // These are NOT scp-style URLs - they're Windows paths + {"C:/Users/test/repo", "C:/Users/test/repo"}, + {"D:/projects/myrepo", "D:/projects/myrepo"}, + } + + for _, tt := range tests { + result, err := canonicalizeGitURL(tt.input) + if err != nil { + t.Fatalf("canonicalizeGitURL(%q) error = %v", tt.input, err) + } + // Should preserve the Windows path structure (forward slashes) + if !strings.Contains(result, "/") { + t.Errorf("canonicalizeGitURL(%q) = %q, expected path with slashes", tt.input, result) + } + } +} + +// TestComputeRepoID_WithRemote tests ComputeRepoID when remote.origin.url exists +func TestComputeRepoID_WithRemote(t *testing.T) { + // Create temporary directory for test repo + tmpDir := t.TempDir() + + // Initialize git repo + cmd := exec.Command("git", "init") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Skipf("git not available: %v", err) + } + + // Configure git user + cmd = exec.Command("git", "config", "user.email", "test@example.com") + cmd.Dir = tmpDir + _ = cmd.Run() + cmd = exec.Command("git", "config", "user.name", "Test User") + cmd.Dir = tmpDir + _ = cmd.Run() + 
+ // Set remote.origin.url + cmd = exec.Command("git", "remote", "add", "origin", "https://github.com/user/test-repo.git") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Fatalf("git remote add failed: %v", err) + } + + // Change to repo dir + t.Chdir(tmpDir) + + // ComputeRepoID should return a consistent hash + result1, err := ComputeRepoID() + if err != nil { + t.Fatalf("ComputeRepoID() error = %v", err) + } + + // Should be a 32-character hex string (16 bytes) + if len(result1) != 32 { + t.Errorf("ComputeRepoID() = %q, expected 32 character hex string", result1) + } + + // Should be consistent across calls + result2, err := ComputeRepoID() + if err != nil { + t.Fatalf("ComputeRepoID() second call error = %v", err) + } + if result1 != result2 { + t.Errorf("ComputeRepoID() not consistent: %q vs %q", result1, result2) + } +} + +// TestComputeRepoID_NoRemote tests ComputeRepoID when no remote exists +func TestComputeRepoID_NoRemote(t *testing.T) { + // Create temporary directory for test repo + tmpDir := t.TempDir() + + // Initialize git repo (no remote) + cmd := exec.Command("git", "init") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Skipf("git not available: %v", err) + } + + // Change to repo dir + t.Chdir(tmpDir) + + // ComputeRepoID should fall back to using the local path + result, err := ComputeRepoID() + if err != nil { + t.Fatalf("ComputeRepoID() error = %v", err) + } + + // Should still return a 32-character hex string + if len(result) != 32 { + t.Errorf("ComputeRepoID() = %q, expected 32 character hex string", result) + } +} + +// TestComputeRepoID_NotGitRepo tests ComputeRepoID when not in a git repo +func TestComputeRepoID_NotGitRepo(t *testing.T) { + // Create temporary directory that is NOT a git repo + tmpDir := t.TempDir() + + t.Chdir(tmpDir) + + // ComputeRepoID should return an error + _, err := ComputeRepoID() + if err == nil { + t.Error("ComputeRepoID() expected error for non-git directory, got nil") + } + if 
!strings.Contains(err.Error(), "not a git repository") { + t.Errorf("ComputeRepoID() error = %q, expected 'not a git repository'", err.Error()) + } +} + +// TestComputeRepoID_DifferentRemotesSameCanonical tests that different URL formats +// for the same repo produce the same ID +func TestComputeRepoID_DifferentRemotesSameCanonical(t *testing.T) { + remotes := []string{ + "https://github.com/user/repo.git", + "git@github.com:user/repo.git", + "ssh://git@github.com/user/repo.git", + } + + var ids []string + + for _, remote := range remotes { + tmpDir := t.TempDir() + + // Initialize git repo + cmd := exec.Command("git", "init") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Skipf("git not available: %v", err) + } + + // Set remote + cmd = exec.Command("git", "remote", "add", "origin", remote) + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Fatalf("git remote add failed for %q: %v", remote, err) + } + + t.Chdir(tmpDir) + + id, err := ComputeRepoID() + if err != nil { + t.Fatalf("ComputeRepoID() for remote %q error = %v", remote, err) + } + ids = append(ids, id) + } + + // All IDs should be the same since they point to the same canonical repo + for i := 1; i < len(ids); i++ { + if ids[i] != ids[0] { + t.Errorf("ComputeRepoID() produced different IDs for same repo:\n remote[0]=%q id=%s\n remote[%d]=%q id=%s", + remotes[0], ids[0], i, remotes[i], ids[i]) + } + } +} + +// TestGetCloneID_Basic tests GetCloneID returns a consistent ID +func TestGetCloneID_Basic(t *testing.T) { + // Create temporary directory for test repo + tmpDir := t.TempDir() + + // Initialize git repo + cmd := exec.Command("git", "init") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Skipf("git not available: %v", err) + } + + t.Chdir(tmpDir) + + // GetCloneID should return a consistent hash + result1, err := GetCloneID() + if err != nil { + t.Fatalf("GetCloneID() error = %v", err) + } + + // Should be a 16-character hex string (8 bytes) + if len(result1) != 16 { + 
t.Errorf("GetCloneID() = %q, expected 16 character hex string", result1) + } + + // Should be consistent across calls + result2, err := GetCloneID() + if err != nil { + t.Fatalf("GetCloneID() second call error = %v", err) + } + if result1 != result2 { + t.Errorf("GetCloneID() not consistent: %q vs %q", result1, result2) + } +} + +// TestGetCloneID_DifferentDirs tests GetCloneID produces different IDs for different clones +func TestGetCloneID_DifferentDirs(t *testing.T) { + ids := make(map[string]string) + + for i := 0; i < 3; i++ { + tmpDir := t.TempDir() + + // Initialize git repo + cmd := exec.Command("git", "init") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Skipf("git not available: %v", err) + } + + t.Chdir(tmpDir) + + id, err := GetCloneID() + if err != nil { + t.Fatalf("GetCloneID() error = %v", err) + } + + // Each clone should have a unique ID + if prev, exists := ids[id]; exists { + t.Errorf("GetCloneID() produced duplicate ID %q for dirs %q and %q", id, prev, tmpDir) + } + ids[id] = tmpDir + } +} + +// TestGetCloneID_NotGitRepo tests GetCloneID when not in a git repo +func TestGetCloneID_NotGitRepo(t *testing.T) { + // Create temporary directory that is NOT a git repo + tmpDir := t.TempDir() + + t.Chdir(tmpDir) + + // GetCloneID should return an error + _, err := GetCloneID() + if err == nil { + t.Error("GetCloneID() expected error for non-git directory, got nil") + } + if !strings.Contains(err.Error(), "not a git repository") { + t.Errorf("GetCloneID() error = %q, expected 'not a git repository'", err.Error()) + } +} + +// TestGetCloneID_IncludesHostname tests that GetCloneID includes hostname +// to differentiate the same path on different machines +func TestGetCloneID_IncludesHostname(t *testing.T) { + // This test verifies the concept - we can't actually test different hostnames + // but we can verify that the same path produces the same ID on this machine + tmpDir := t.TempDir() + + // Initialize git repo + cmd := exec.Command("git", 
"init") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Skipf("git not available: %v", err) + } + + t.Chdir(tmpDir) + + hostname, _ := os.Hostname() + id, err := GetCloneID() + if err != nil { + t.Fatalf("GetCloneID() error = %v", err) + } + + // Just verify we got a valid ID - we can't test different hostnames + // but the implementation includes hostname in the hash + if len(id) != 16 { + t.Errorf("GetCloneID() = %q, expected 16 character hex string (hostname=%s)", id, hostname) + } +} + +// TestGetCloneID_Worktree tests GetCloneID in a worktree +func TestGetCloneID_Worktree(t *testing.T) { + // Create temporary directory for test + tmpDir := t.TempDir() + + // Initialize main git repo + mainRepoDir := filepath.Join(tmpDir, "main-repo") + if err := os.MkdirAll(mainRepoDir, 0755); err != nil { + t.Fatal(err) + } + + cmd := exec.Command("git", "init") + cmd.Dir = mainRepoDir + if err := cmd.Run(); err != nil { + t.Skipf("git not available: %v", err) + } + + // Configure git user + cmd = exec.Command("git", "config", "user.email", "test@example.com") + cmd.Dir = mainRepoDir + _ = cmd.Run() + cmd = exec.Command("git", "config", "user.name", "Test User") + cmd.Dir = mainRepoDir + _ = cmd.Run() + + // Create initial commit (required for worktree) + dummyFile := filepath.Join(mainRepoDir, "README.md") + if err := os.WriteFile(dummyFile, []byte("# Test\n"), 0644); err != nil { + t.Fatal(err) + } + cmd = exec.Command("git", "add", "README.md") + cmd.Dir = mainRepoDir + _ = cmd.Run() + cmd = exec.Command("git", "commit", "-m", "Initial commit") + cmd.Dir = mainRepoDir + if err := cmd.Run(); err != nil { + t.Fatalf("git commit failed: %v", err) + } + + // Create a worktree + worktreeDir := filepath.Join(tmpDir, "worktree") + cmd = exec.Command("git", "worktree", "add", worktreeDir, "HEAD") + cmd.Dir = mainRepoDir + if err := cmd.Run(); err != nil { + t.Fatalf("git worktree add failed: %v", err) + } + defer func() { + cmd := exec.Command("git", "worktree", 
"remove", worktreeDir) + cmd.Dir = mainRepoDir + _ = cmd.Run() + }() + + // Get IDs from both locations + t.Chdir(mainRepoDir) + mainID, err := GetCloneID() + if err != nil { + t.Fatalf("GetCloneID() in main repo error = %v", err) + } + + t.Chdir(worktreeDir) + worktreeID, err := GetCloneID() + if err != nil { + t.Fatalf("GetCloneID() in worktree error = %v", err) + } + + // Worktree should have a DIFFERENT ID than main repo + // because they're different paths (different clones conceptually) + if mainID == worktreeID { + t.Errorf("GetCloneID() returned same ID for main repo and worktree - should be different") + } +} diff --git a/internal/compact/compactor_unit_test.go b/internal/compact/compactor_unit_test.go new file mode 100644 index 00000000..f1a85069 --- /dev/null +++ b/internal/compact/compactor_unit_test.go @@ -0,0 +1,732 @@ +package compact + +import ( + "context" + "encoding/json" + "net/http" + "net/http/httptest" + "strings" + "testing" + "time" + + "github.com/anthropics/anthropic-sdk-go/option" + "github.com/steveyegge/beads/internal/storage/sqlite" + "github.com/steveyegge/beads/internal/types" +) + +// setupTestStore creates a test SQLite store for unit tests +func setupTestStore(t *testing.T) *sqlite.SQLiteStorage { + t.Helper() + + tmpDB := t.TempDir() + "/test.db" + store, err := sqlite.New(context.Background(), tmpDB) + if err != nil { + t.Fatalf("failed to create storage: %v", err) + } + + ctx := context.Background() + // Set issue_prefix to prevent "database not initialized" errors + if err := store.SetConfig(ctx, "issue_prefix", "bd"); err != nil { + t.Fatalf("failed to set issue_prefix: %v", err) + } + // Use 7 days minimum for Tier 1 compaction + if err := store.SetConfig(ctx, "compact_tier1_days", "7"); err != nil { + t.Fatalf("failed to set config: %v", err) + } + if err := store.SetConfig(ctx, "compact_tier1_dep_levels", "2"); err != nil { + t.Fatalf("failed to set config: %v", err) + } + + return store +} + +// createTestIssue creates a 
closed issue eligible for compaction +func createTestIssue(t *testing.T, store *sqlite.SQLiteStorage, id string) *types.Issue { + t.Helper() + + ctx := context.Background() + prefix, _ := store.GetConfig(ctx, "issue_prefix") + if prefix == "" { + prefix = "bd" + } + + now := time.Now() + // Issue closed 8 days ago (beyond 7-day threshold for Tier 1) + closedAt := now.Add(-8 * 24 * time.Hour) + issue := &types.Issue{ + ID: id, + Title: "Test Issue", + Description: `Implemented a comprehensive authentication system for the application. + +The system includes JWT token generation, refresh token handling, password hashing with bcrypt, +rate limiting on login attempts, and session management.`, + Design: `Authentication Flow: +1. User submits credentials +2. Server validates against database +3. On success, generate JWT with user claims`, + Notes: "Performance considerations and testing strategy notes.", + AcceptanceCriteria: "- Users can register\n- Users can login\n- Protected endpoints work", + Status: types.StatusClosed, + Priority: 2, + IssueType: types.TypeTask, + CreatedAt: now.Add(-48 * time.Hour), + UpdatedAt: now.Add(-24 * time.Hour), + ClosedAt: &closedAt, + } + + if err := store.CreateIssue(ctx, issue, prefix); err != nil { + t.Fatalf("failed to create issue: %v", err) + } + + return issue +} + +func TestNew_WithConfig(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + config := &Config{ + Concurrency: 10, + DryRun: true, + } + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + if c.config.Concurrency != 10 { + t.Errorf("expected concurrency 10, got %d", c.config.Concurrency) + } + if !c.config.DryRun { + t.Error("expected DryRun to be true") + } +} + +func TestNew_DefaultConcurrency(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + c, err := New(store, "", nil) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + if c.config.Concurrency != 
defaultConcurrency { + t.Errorf("expected default concurrency %d, got %d", defaultConcurrency, c.config.Concurrency) + } +} + +func TestNew_ZeroConcurrency(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + config := &Config{ + Concurrency: 0, + DryRun: true, + } + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + // Zero concurrency should be replaced with default + if c.config.Concurrency != defaultConcurrency { + t.Errorf("expected default concurrency %d, got %d", defaultConcurrency, c.config.Concurrency) + } +} + +func TestNew_NegativeConcurrency(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + config := &Config{ + Concurrency: -5, + DryRun: true, + } + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + // Negative concurrency should be replaced with default + if c.config.Concurrency != defaultConcurrency { + t.Errorf("expected default concurrency %d, got %d", defaultConcurrency, c.config.Concurrency) + } +} + +func TestNew_WithAPIKey(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + // Clear env var to test explicit key + t.Setenv("ANTHROPIC_API_KEY", "") + + config := &Config{ + DryRun: true, // DryRun so we don't actually need a valid key + } + c, err := New(store, "test-api-key", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + if c.config.APIKey != "test-api-key" { + t.Errorf("expected api key 'test-api-key', got '%s'", c.config.APIKey) + } +} + +func TestNew_NoAPIKeyFallsToDryRun(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + // Clear env var + t.Setenv("ANTHROPIC_API_KEY", "") + + config := &Config{ + DryRun: false, // Try to create real client + } + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + // Should fall back to DryRun when no API key + if !c.config.DryRun { + 
t.Error("expected DryRun to be true when no API key provided") + } +} + +func TestNew_AuditSettings(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + t.Setenv("ANTHROPIC_API_KEY", "test-key") + + config := &Config{ + AuditEnabled: true, + Actor: "test-actor", + } + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + if c.haiku == nil { + t.Fatal("expected haiku client to be created") + } + if !c.haiku.auditEnabled { + t.Error("expected auditEnabled to be true") + } + if c.haiku.auditActor != "test-actor" { + t.Errorf("expected auditActor 'test-actor', got '%s'", c.haiku.auditActor) + } +} + +func TestCompactTier1_DryRun(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + issue := createTestIssue(t, store, "bd-1") + + config := &Config{DryRun: true} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + ctx := context.Background() + err = c.CompactTier1(ctx, issue.ID) + if err == nil { + t.Fatal("expected dry-run error, got nil") + } + if !strings.HasPrefix(err.Error(), "dry-run:") { + t.Errorf("expected dry-run error prefix, got: %v", err) + } + + // Verify issue was not modified + afterIssue, err := store.GetIssue(ctx, issue.ID) + if err != nil { + t.Fatalf("failed to get issue: %v", err) + } + if afterIssue.Description != issue.Description { + t.Error("dry-run should not modify issue") + } +} + +func TestCompactTier1_IneligibleOpenIssue(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + ctx := context.Background() + prefix, _ := store.GetConfig(ctx, "issue_prefix") + if prefix == "" { + prefix = "bd" + } + + now := time.Now() + issue := &types.Issue{ + ID: "bd-open", + Title: "Open Issue", + Description: "Should not be compacted", + Status: types.StatusOpen, + Priority: 2, + IssueType: types.TypeTask, + CreatedAt: now, + UpdatedAt: now, + } + if err := store.CreateIssue(ctx, issue, prefix); err 
!= nil { + t.Fatalf("failed to create issue: %v", err) + } + + config := &Config{DryRun: true} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + err = c.CompactTier1(ctx, issue.ID) + if err == nil { + t.Fatal("expected error for ineligible issue, got nil") + } + if !strings.Contains(err.Error(), "not eligible") { + t.Errorf("expected 'not eligible' error, got: %v", err) + } +} + +func TestCompactTier1_NonexistentIssue(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + config := &Config{DryRun: true} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + ctx := context.Background() + err = c.CompactTier1(ctx, "bd-nonexistent") + if err == nil { + t.Fatal("expected error for nonexistent issue") + } +} + +func TestCompactTier1_ContextCanceled(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + issue := createTestIssue(t, store, "bd-cancel") + + config := &Config{DryRun: true} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + ctx, cancel := context.WithCancel(context.Background()) + cancel() // Cancel immediately + + err = c.CompactTier1(ctx, issue.ID) + if err == nil { + t.Fatal("expected error for canceled context") + } + if err != context.Canceled { + t.Errorf("expected context.Canceled, got: %v", err) + } +} + +func TestCompactTier1Batch_EmptyList(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + config := &Config{DryRun: true} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + ctx := context.Background() + results, err := c.CompactTier1Batch(ctx, []string{}) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if results != nil { + t.Errorf("expected nil results for empty list, got: %v", results) + } +} + +func TestCompactTier1Batch_DryRun(t *testing.T) { + 
store := setupTestStore(t) + defer store.Close() + + issue1 := createTestIssue(t, store, "bd-batch-1") + issue2 := createTestIssue(t, store, "bd-batch-2") + + config := &Config{DryRun: true, Concurrency: 2} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + ctx := context.Background() + results, err := c.CompactTier1Batch(ctx, []string{issue1.ID, issue2.ID}) + if err != nil { + t.Fatalf("failed to batch compact: %v", err) + } + + if len(results) != 2 { + t.Fatalf("expected 2 results, got %d", len(results)) + } + + for _, result := range results { + if result.Err != nil { + t.Errorf("unexpected error for %s: %v", result.IssueID, result.Err) + } + if result.OriginalSize == 0 { + t.Errorf("expected non-zero original size for %s", result.IssueID) + } + } +} + +func TestCompactTier1Batch_MixedEligibility(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + closedIssue := createTestIssue(t, store, "bd-closed") + + ctx := context.Background() + prefix, _ := store.GetConfig(ctx, "issue_prefix") + if prefix == "" { + prefix = "bd" + } + + now := time.Now() + openIssue := &types.Issue{ + ID: "bd-open", + Title: "Open Issue", + Description: "Should not be compacted", + Status: types.StatusOpen, + Priority: 2, + IssueType: types.TypeTask, + CreatedAt: now, + UpdatedAt: now, + } + if err := store.CreateIssue(ctx, openIssue, prefix); err != nil { + t.Fatalf("failed to create issue: %v", err) + } + + config := &Config{DryRun: true, Concurrency: 2} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + results, err := c.CompactTier1Batch(ctx, []string{closedIssue.ID, openIssue.ID}) + if err != nil { + t.Fatalf("failed to batch compact: %v", err) + } + + if len(results) != 2 { + t.Fatalf("expected 2 results, got %d", len(results)) + } + + var foundClosed, foundOpen bool + for _, result := range results { + switch result.IssueID { + case openIssue.ID: 
+ foundOpen = true + if result.Err == nil { + t.Error("expected error for ineligible issue") + } + case closedIssue.ID: + foundClosed = true + if result.Err != nil { + t.Errorf("unexpected error for eligible issue: %v", result.Err) + } + } + } + if !foundClosed || !foundOpen { + t.Error("missing expected results") + } +} + +func TestCompactTier1Batch_NonexistentIssue(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + closedIssue := createTestIssue(t, store, "bd-closed") + + config := &Config{DryRun: true, Concurrency: 2} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + ctx := context.Background() + results, err := c.CompactTier1Batch(ctx, []string{closedIssue.ID, "bd-nonexistent"}) + if err != nil { + t.Fatalf("batch operation failed: %v", err) + } + + if len(results) != 2 { + t.Fatalf("expected 2 results, got %d", len(results)) + } + + var successCount, errorCount int + for _, r := range results { + if r.Err == nil { + successCount++ + } else { + errorCount++ + } + } + + if successCount != 1 { + t.Errorf("expected 1 success, got %d", successCount) + } + if errorCount != 1 { + t.Errorf("expected 1 error, got %d", errorCount) + } +} + +func TestCompactTier1_WithMockAPI(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + issue := createTestIssue(t, store, "bd-mock-api") + + // Create mock server that returns a short summary + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + json.NewEncoder(w).Encode(map[string]interface{}{ + "id": "msg_test123", + "type": "message", + "role": "assistant", + "model": "claude-3-5-haiku-20241022", + "content": []map[string]interface{}{ + { + "type": "text", + "text": "**Summary:** Short summary.\n\n**Key Decisions:** None.\n\n**Resolution:** Done.", + }, + }, + }) + })) + defer server.Close() + + t.Setenv("ANTHROPIC_API_KEY", "test-key") + 
+ // Create compactor with mock API + config := &Config{Concurrency: 1} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + // Replace the haiku client with one pointing to mock server + c.haiku, err = NewHaikuClient("test-key", option.WithBaseURL(server.URL), option.WithMaxRetries(0)) + if err != nil { + t.Fatalf("failed to create mock haiku client: %v", err) + } + + ctx := context.Background() + err = c.CompactTier1(ctx, issue.ID) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + + // Verify issue was updated + afterIssue, err := store.GetIssue(ctx, issue.ID) + if err != nil { + t.Fatalf("failed to get issue: %v", err) + } + + if afterIssue.Description == issue.Description { + t.Error("description should have been updated") + } + if afterIssue.Design != "" { + t.Error("design should be cleared") + } + if afterIssue.Notes != "" { + t.Error("notes should be cleared") + } + if afterIssue.AcceptanceCriteria != "" { + t.Error("acceptance criteria should be cleared") + } +} + +func TestCompactTier1_SummaryNotShorter(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + // Create issue with very short content + ctx := context.Background() + prefix, _ := store.GetConfig(ctx, "issue_prefix") + if prefix == "" { + prefix = "bd" + } + + now := time.Now() + closedAt := now.Add(-8 * 24 * time.Hour) + issue := &types.Issue{ + ID: "bd-short", + Title: "Short", + Description: "X", // Very short description + Status: types.StatusClosed, + Priority: 2, + IssueType: types.TypeTask, + CreatedAt: now.Add(-48 * time.Hour), + UpdatedAt: now.Add(-24 * time.Hour), + ClosedAt: &closedAt, + } + if err := store.CreateIssue(ctx, issue, prefix); err != nil { + t.Fatalf("failed to create issue: %v", err) + } + + // Create mock server that returns a longer summary + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", 
"application/json") + json.NewEncoder(w).Encode(map[string]interface{}{ + "id": "msg_test123", + "type": "message", + "role": "assistant", + "model": "claude-3-5-haiku-20241022", + "content": []map[string]interface{}{ + { + "type": "text", + "text": "**Summary:** This is a much longer summary that exceeds the original content length.\n\n**Key Decisions:** Multiple decisions.\n\n**Resolution:** Complete.", + }, + }, + }) + })) + defer server.Close() + + t.Setenv("ANTHROPIC_API_KEY", "test-key") + + config := &Config{Concurrency: 1} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + c.haiku, err = NewHaikuClient("test-key", option.WithBaseURL(server.URL), option.WithMaxRetries(0)) + if err != nil { + t.Fatalf("failed to create mock haiku client: %v", err) + } + + err = c.CompactTier1(ctx, issue.ID) + if err == nil { + t.Fatal("expected error when summary is longer") + } + if !strings.Contains(err.Error(), "would increase size") { + t.Errorf("expected 'would increase size' error, got: %v", err) + } + + // Verify issue was NOT modified (kept original) + afterIssue, err := store.GetIssue(ctx, issue.ID) + if err != nil { + t.Fatalf("failed to get issue: %v", err) + } + if afterIssue.Description != issue.Description { + t.Error("description should not have been modified when summary is longer") + } +} + +func TestCompactTier1Batch_WithMockAPI(t *testing.T) { + store := setupTestStore(t) + defer store.Close() + + issue1 := createTestIssue(t, store, "bd-batch-mock-1") + issue2 := createTestIssue(t, store, "bd-batch-mock-2") + + // Create mock server + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + json.NewEncoder(w).Encode(map[string]interface{}{ + "id": "msg_test123", + "type": "message", + "role": "assistant", + "model": "claude-3-5-haiku-20241022", + "content": []map[string]interface{}{ + { + "type": "text", + 
"text": "**Summary:** Compacted.\n\n**Key Decisions:** None.\n\n**Resolution:** Done.", + }, + }, + }) + })) + defer server.Close() + + t.Setenv("ANTHROPIC_API_KEY", "test-key") + + config := &Config{Concurrency: 2} + c, err := New(store, "", config) + if err != nil { + t.Fatalf("failed to create compactor: %v", err) + } + + c.haiku, err = NewHaikuClient("test-key", option.WithBaseURL(server.URL), option.WithMaxRetries(0)) + if err != nil { + t.Fatalf("failed to create mock haiku client: %v", err) + } + + ctx := context.Background() + results, err := c.CompactTier1Batch(ctx, []string{issue1.ID, issue2.ID}) + if err != nil { + t.Fatalf("failed to batch compact: %v", err) + } + + if len(results) != 2 { + t.Fatalf("expected 2 results, got %d", len(results)) + } + + for _, result := range results { + if result.Err != nil { + t.Errorf("unexpected error for %s: %v", result.IssueID, result.Err) + } + if result.CompactedSize == 0 { + t.Errorf("expected non-zero compacted size for %s", result.IssueID) + } + if result.CompactedSize >= result.OriginalSize { + t.Errorf("expected size reduction for %s: %d β†’ %d", result.IssueID, result.OriginalSize, result.CompactedSize) + } + } +} + +func TestResult_Fields(t *testing.T) { + r := &Result{ + IssueID: "bd-1", + OriginalSize: 100, + CompactedSize: 50, + Err: nil, + } + + if r.IssueID != "bd-1" { + t.Errorf("expected IssueID 'bd-1', got '%s'", r.IssueID) + } + if r.OriginalSize != 100 { + t.Errorf("expected OriginalSize 100, got %d", r.OriginalSize) + } + if r.CompactedSize != 50 { + t.Errorf("expected CompactedSize 50, got %d", r.CompactedSize) + } + if r.Err != nil { + t.Errorf("expected nil Err, got %v", r.Err) + } +} + +func TestConfig_Fields(t *testing.T) { + c := &Config{ + APIKey: "test-key", + Concurrency: 10, + DryRun: true, + AuditEnabled: true, + Actor: "test-actor", + } + + if c.APIKey != "test-key" { + t.Errorf("expected APIKey 'test-key', got '%s'", c.APIKey) + } + if c.Concurrency != 10 { + t.Errorf("expected 
Concurrency 10, got %d", c.Concurrency) + } + if !c.DryRun { + t.Error("expected DryRun true") + } + if !c.AuditEnabled { + t.Error("expected AuditEnabled true") + } + if c.Actor != "test-actor" { + t.Errorf("expected Actor 'test-actor', got '%s'", c.Actor) + } +} diff --git a/internal/compact/git_test.go b/internal/compact/git_test.go new file mode 100644 index 00000000..6077ac56 --- /dev/null +++ b/internal/compact/git_test.go @@ -0,0 +1,171 @@ +package compact + +import ( + "os" + "os/exec" + "path/filepath" + "regexp" + "testing" +) + +func TestGetCurrentCommitHash_InGitRepo(t *testing.T) { + // This test runs in the actual beads repo, so it should return a valid hash + hash := GetCurrentCommitHash() + + // Should be a 40-character hex string + if len(hash) != 40 { + t.Errorf("expected 40-char hash, got %d chars: %s", len(hash), hash) + } + + // Should be valid hex + matched, err := regexp.MatchString("^[0-9a-f]{40}$", hash) + if err != nil { + t.Fatalf("regex error: %v", err) + } + if !matched { + t.Errorf("expected hex hash, got: %s", hash) + } +} + +func TestGetCurrentCommitHash_NotInGitRepo(t *testing.T) { + // Save current directory + originalDir, err := os.Getwd() + if err != nil { + t.Fatalf("failed to get cwd: %v", err) + } + + // Create a temporary directory that is NOT a git repo + tmpDir := t.TempDir() + + // Change to the temp directory + if err := os.Chdir(tmpDir); err != nil { + t.Fatalf("failed to chdir to temp dir: %v", err) + } + defer func() { + // Restore original directory + if err := os.Chdir(originalDir); err != nil { + t.Fatalf("failed to restore cwd: %v", err) + } + }() + + // Should return empty string when not in a git repo + hash := GetCurrentCommitHash() + if hash != "" { + t.Errorf("expected empty string outside git repo, got: %s", hash) + } +} + +func TestGetCurrentCommitHash_NewGitRepo(t *testing.T) { + // Save current directory + originalDir, err := os.Getwd() + if err != nil { + t.Fatalf("failed to get cwd: %v", err) + } + + // 
Create a temporary directory + tmpDir := t.TempDir() + + // Initialize a new git repo + cmd := exec.Command("git", "init") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Fatalf("failed to init git repo: %v", err) + } + + // Configure git user for the commit + cmd = exec.Command("git", "config", "user.email", "test@test.com") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Fatalf("failed to set git email: %v", err) + } + + cmd = exec.Command("git", "config", "user.name", "Test User") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Fatalf("failed to set git name: %v", err) + } + + // Create a file and commit it + testFile := filepath.Join(tmpDir, "test.txt") + if err := os.WriteFile(testFile, []byte("test"), 0644); err != nil { + t.Fatalf("failed to write test file: %v", err) + } + + cmd = exec.Command("git", "add", ".") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Fatalf("failed to git add: %v", err) + } + + cmd = exec.Command("git", "commit", "-m", "test commit") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Fatalf("failed to git commit: %v", err) + } + + // Change to the new git repo + if err := os.Chdir(tmpDir); err != nil { + t.Fatalf("failed to chdir to git repo: %v", err) + } + defer func() { + // Restore original directory + if err := os.Chdir(originalDir); err != nil { + t.Fatalf("failed to restore cwd: %v", err) + } + }() + + // Should return a valid hash + hash := GetCurrentCommitHash() + if len(hash) != 40 { + t.Errorf("expected 40-char hash, got %d chars: %s", len(hash), hash) + } + + // Verify it matches git rev-parse output + cmd = exec.Command("git", "rev-parse", "HEAD") + cmd.Dir = tmpDir + out, err := cmd.Output() + if err != nil { + t.Fatalf("failed to run git rev-parse: %v", err) + } + + expected := string(out) + expected = expected[:len(expected)-1] // trim newline + if hash != expected { + t.Errorf("hash mismatch: got %s, expected %s", hash, expected) + } +} + +func 
TestGetCurrentCommitHash_EmptyGitRepo(t *testing.T) { + // Save current directory + originalDir, err := os.Getwd() + if err != nil { + t.Fatalf("failed to get cwd: %v", err) + } + + // Create a temporary directory + tmpDir := t.TempDir() + + // Initialize a new git repo but don't commit anything + cmd := exec.Command("git", "init") + cmd.Dir = tmpDir + if err := cmd.Run(); err != nil { + t.Fatalf("failed to init git repo: %v", err) + } + + // Change to the empty git repo + if err := os.Chdir(tmpDir); err != nil { + t.Fatalf("failed to chdir to git repo: %v", err) + } + defer func() { + // Restore original directory + if err := os.Chdir(originalDir); err != nil { + t.Fatalf("failed to restore cwd: %v", err) + } + }() + + // Should return empty string for repo with no commits + hash := GetCurrentCommitHash() + if hash != "" { + t.Errorf("expected empty string for empty git repo, got: %s", hash) + } +} diff --git a/internal/compact/haiku.go b/internal/compact/haiku.go index 58eec341..4d2dd9f0 100644 --- a/internal/compact/haiku.go +++ b/internal/compact/haiku.go @@ -38,7 +38,7 @@ type HaikuClient struct { } // NewHaikuClient creates a new Haiku API client. Env var ANTHROPIC_API_KEY takes precedence over explicit apiKey. -func NewHaikuClient(apiKey string) (*HaikuClient, error) { +func NewHaikuClient(apiKey string, opts ...option.RequestOption) (*HaikuClient, error) { envKey := os.Getenv("ANTHROPIC_API_KEY") if envKey != "" { apiKey = envKey @@ -47,7 +47,10 @@ func NewHaikuClient(apiKey string) (*HaikuClient, error) { return nil, fmt.Errorf("%w: set ANTHROPIC_API_KEY environment variable or provide via config", ErrAPIKeyRequired) } - client := anthropic.NewClient(option.WithAPIKey(apiKey)) + // Build options: API key first, then any additional options (for testing) + allOpts := []option.RequestOption{option.WithAPIKey(apiKey)} + allOpts = append(allOpts, opts...) + client := anthropic.NewClient(allOpts...) 
tier1Tmpl, err := template.New("tier1").Parse(tier1PromptTemplate) if err != nil { diff --git a/internal/compact/haiku_test.go b/internal/compact/haiku_test.go index 11de2827..035638dd 100644 --- a/internal/compact/haiku_test.go +++ b/internal/compact/haiku_test.go @@ -2,11 +2,18 @@ package compact import ( "context" + "encoding/json" "errors" + "net" + "net/http" + "net/http/httptest" "strings" + "sync/atomic" "testing" "time" + "github.com/anthropics/anthropic-sdk-go" + "github.com/anthropics/anthropic-sdk-go/option" "github.com/steveyegge/beads/internal/types" ) @@ -189,3 +196,399 @@ func TestIsRetryable(t *testing.T) { }) } } + +// mockTimeoutError implements net.Error for timeout testing +type mockTimeoutError struct { + timeout bool +} + +func (e *mockTimeoutError) Error() string { return "mock timeout error" } +func (e *mockTimeoutError) Timeout() bool { return e.timeout } +func (e *mockTimeoutError) Temporary() bool { return false } + +func TestIsRetryable_NetworkTimeout(t *testing.T) { + // Network timeout should be retryable + timeoutErr := &mockTimeoutError{timeout: true} + if !isRetryable(timeoutErr) { + t.Error("network timeout error should be retryable") + } + + // Non-timeout network error should not be retryable + nonTimeoutErr := &mockTimeoutError{timeout: false} + if isRetryable(nonTimeoutErr) { + t.Error("non-timeout network error should not be retryable") + } +} + +func TestIsRetryable_APIErrors(t *testing.T) { + tests := []struct { + name string + statusCode int + expected bool + }{ + {"rate limit 429", 429, true}, + {"server error 500", 500, true}, + {"server error 502", 502, true}, + {"server error 503", 503, true}, + {"bad request 400", 400, false}, + {"unauthorized 401", 401, false}, + {"forbidden 403", 403, false}, + {"not found 404", 404, false}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + apiErr := &anthropic.Error{StatusCode: tt.statusCode} + got := isRetryable(apiErr) + if got != tt.expected { + 
t.Errorf("isRetryable(API error %d) = %v, want %v", tt.statusCode, got, tt.expected) + } + }) + } +} + +// createMockAnthropicServer creates a mock server that returns Anthropic API responses +func createMockAnthropicServer(handler http.HandlerFunc) *httptest.Server { + return httptest.NewServer(handler) +} + +// mockAnthropicResponse creates a valid Anthropic Messages API response +func mockAnthropicResponse(text string) map[string]interface{} { + return map[string]interface{}{ + "id": "msg_test123", + "type": "message", + "role": "assistant", + "model": "claude-3-5-haiku-20241022", + "stop_reason": "end_turn", + "stop_sequence": nil, + "usage": map[string]int{ + "input_tokens": 100, + "output_tokens": 50, + }, + "content": []map[string]interface{}{ + { + "type": "text", + "text": text, + }, + }, + } +} + +func TestSummarizeTier1_MockAPI(t *testing.T) { + // Create mock server that returns a valid summary + server := createMockAnthropicServer(func(w http.ResponseWriter, r *http.Request) { + // Verify request method and path + if r.Method != "POST" { + t.Errorf("expected POST, got %s", r.Method) + } + if !strings.HasSuffix(r.URL.Path, "/messages") { + t.Errorf("expected /messages path, got %s", r.URL.Path) + } + + w.Header().Set("Content-Type", "application/json") + resp := mockAnthropicResponse("**Summary:** Fixed auth bug.\n\n**Key Decisions:** Used OAuth.\n\n**Resolution:** Complete.") + json.NewEncoder(w).Encode(resp) + }) + defer server.Close() + + client, err := NewHaikuClient("test-key", option.WithBaseURL(server.URL)) + if err != nil { + t.Fatalf("failed to create client: %v", err) + } + + issue := &types.Issue{ + ID: "bd-1", + Title: "Fix authentication bug", + Description: "OAuth login was broken", + Status: types.StatusClosed, + } + + ctx := context.Background() + result, err := client.SummarizeTier1(ctx, issue) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + + if !strings.Contains(result, "**Summary:**") { + t.Error("result should 
contain Summary section")
+	}
+	if !strings.Contains(result, "Fixed auth bug") {
+		t.Error("result should contain summary text")
+	}
+}
+
+func TestSummarizeTier1_APIError(t *testing.T) {
+	// Create mock server that returns an error
+	server := createMockAnthropicServer(func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusBadRequest)
+		json.NewEncoder(w).Encode(map[string]interface{}{
+			"type": "error",
+			"error": map[string]interface{}{
+				"type":    "invalid_request_error",
+				"message": "Invalid API key",
+			},
+		})
+	})
+	defer server.Close()
+
+	client, err := NewHaikuClient("test-key", option.WithBaseURL(server.URL))
+	if err != nil {
+		t.Fatalf("failed to create client: %v", err)
+	}
+
+	issue := &types.Issue{
+		ID:          "bd-1",
+		Title:       "Test",
+		Description: "Test",
+		Status:      types.StatusClosed,
+	}
+
+	ctx := context.Background()
+	_, err = client.SummarizeTier1(ctx, issue)
+	if err == nil {
+		t.Fatal("expected error from API")
+	}
+	if !strings.Contains(err.Error(), "non-retryable") {
+		t.Errorf("expected non-retryable error, got: %v", err)
+	}
+}
+
+func TestCallWithRetry_RetriesOn429(t *testing.T) {
+	var attempts int32
+
+	server := createMockAnthropicServer(func(w http.ResponseWriter, r *http.Request) {
+		attempt := atomic.AddInt32(&attempts, 1)
+		if attempt <= 2 {
+			// First two attempts return 429
+			w.WriteHeader(http.StatusTooManyRequests)
+			json.NewEncoder(w).Encode(map[string]interface{}{
+				"type": "error",
+				"error": map[string]interface{}{
+					"type":    "rate_limit_error",
+					"message": "Rate limited",
+				},
+			})
+			return
+		}
+		// Third attempt succeeds
+		w.Header().Set("Content-Type", "application/json")
+		json.NewEncoder(w).Encode(mockAnthropicResponse("Success after retries"))
+	})
+	defer server.Close()
+
+	// Disable SDK's internal retries to test our retry logic only
+	client, err := NewHaikuClient("test-key", option.WithBaseURL(server.URL), option.WithMaxRetries(0))
+	if err != nil {
+		t.Fatalf("failed to create client: %v", err)
+	}
+	// Use short backoff for testing
+	client.initialBackoff = 10 * time.Millisecond
+
+	ctx := context.Background()
+	result, err := client.callWithRetry(ctx, "test prompt")
+	if err != nil {
+		t.Fatalf("expected success after retries, got: %v", err)
+	}
+	if result != "Success after retries" {
+		t.Errorf("expected 'Success after retries', got: %s", result)
+	}
+	if attempts != 3 {
+		t.Errorf("expected 3 attempts, got: %d", attempts)
+	}
+}
+
+func TestCallWithRetry_RetriesOn500(t *testing.T) {
+	var attempts int32
+
+	server := createMockAnthropicServer(func(w http.ResponseWriter, r *http.Request) {
+		attempt := atomic.AddInt32(&attempts, 1)
+		if attempt == 1 {
+			// First attempt returns 500
+			w.WriteHeader(http.StatusInternalServerError)
+			json.NewEncoder(w).Encode(map[string]interface{}{
+				"type": "error",
+				"error": map[string]interface{}{
+					"type":    "api_error",
+					"message": "Internal server error",
+				},
+			})
+			return
+		}
+		// Second attempt succeeds
+		w.Header().Set("Content-Type", "application/json")
+		json.NewEncoder(w).Encode(mockAnthropicResponse("Recovered from 500"))
+	})
+	defer server.Close()
+
+	// Disable SDK's internal retries to test our retry logic only
+	client, err := NewHaikuClient("test-key", option.WithBaseURL(server.URL), option.WithMaxRetries(0))
+	if err != nil {
+		t.Fatalf("failed to create client: %v", err)
+	}
+	client.initialBackoff = 10 * time.Millisecond
+
+	ctx := context.Background()
+	result, err := client.callWithRetry(ctx, "test prompt")
+	if err != nil {
+		t.Fatalf("expected success after retry, got: %v", err)
+	}
+	if result != "Recovered from 500" {
+		t.Errorf("expected 'Recovered from 500', got: %s", result)
+	}
+}
+
+func TestCallWithRetry_ExhaustsRetries(t *testing.T) {
+	var attempts int32
+
+	server := createMockAnthropicServer(func(w http.ResponseWriter, r *http.Request) {
+		atomic.AddInt32(&attempts, 1)
+		// Always return 429
+		w.WriteHeader(http.StatusTooManyRequests)
+		json.NewEncoder(w).Encode(map[string]interface{}{
+			"type": "error",
+			"error": map[string]interface{}{
+				"type":    "rate_limit_error",
+				"message": "Rate limited",
+			},
+		})
+	})
+	defer server.Close()
+
+	// Disable SDK's internal retries to test our retry logic only
+	client, err := NewHaikuClient("test-key", option.WithBaseURL(server.URL), option.WithMaxRetries(0))
+	if err != nil {
+		t.Fatalf("failed to create client: %v", err)
+	}
+	client.initialBackoff = 1 * time.Millisecond
+	client.maxRetries = 2
+
+	ctx := context.Background()
+	_, err = client.callWithRetry(ctx, "test prompt")
+	if err == nil {
+		t.Fatal("expected error after exhausting retries")
+	}
+	if !strings.Contains(err.Error(), "failed after") {
+		t.Errorf("expected 'failed after' error, got: %v", err)
+	}
+	// Initial attempt + 2 retries = 3 total
+	if attempts != 3 {
+		t.Errorf("expected 3 attempts, got: %d", attempts)
+	}
+}
+
+func TestCallWithRetry_NoRetryOn400(t *testing.T) {
+	var attempts int32
+
+	server := createMockAnthropicServer(func(w http.ResponseWriter, r *http.Request) {
+		atomic.AddInt32(&attempts, 1)
+		w.WriteHeader(http.StatusBadRequest)
+		json.NewEncoder(w).Encode(map[string]interface{}{
+			"type": "error",
+			"error": map[string]interface{}{
+				"type":    "invalid_request_error",
+				"message": "Bad request",
+			},
+		})
+	})
+	defer server.Close()
+
+	client, err := NewHaikuClient("test-key", option.WithBaseURL(server.URL))
+	if err != nil {
+		t.Fatalf("failed to create client: %v", err)
+	}
+	client.initialBackoff = 10 * time.Millisecond
+
+	ctx := context.Background()
+	_, err = client.callWithRetry(ctx, "test prompt")
+	if err == nil {
+		t.Fatal("expected error for bad request")
+	}
+	if !strings.Contains(err.Error(), "non-retryable") {
+		t.Errorf("expected non-retryable error, got: %v", err)
+	}
+	if attempts != 1 {
+		t.Errorf("expected only 1 attempt for non-retryable error, got: %d", attempts)
+	}
+}
+
+func TestCallWithRetry_ContextTimeout(t *testing.T) {
+	server := createMockAnthropicServer(func(w http.ResponseWriter, r *http.Request) {
+		// Delay longer than context timeout
+		time.Sleep(200 * time.Millisecond)
+		w.Header().Set("Content-Type", "application/json")
+		json.NewEncoder(w).Encode(mockAnthropicResponse("too late"))
+	})
+	defer server.Close()
+
+	client, err := NewHaikuClient("test-key", option.WithBaseURL(server.URL))
+	if err != nil {
+		t.Fatalf("failed to create client: %v", err)
+	}
+
+	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
+	defer cancel()
+
+	_, err = client.callWithRetry(ctx, "test prompt")
+	if err == nil {
+		t.Fatal("expected timeout error")
+	}
+	if !errors.Is(err, context.DeadlineExceeded) {
+		t.Errorf("expected context.DeadlineExceeded, got: %v", err)
+	}
+}
+
+func TestCallWithRetry_EmptyContent(t *testing.T) {
+	server := createMockAnthropicServer(func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		// Return response with empty content array
+		json.NewEncoder(w).Encode(map[string]interface{}{
+			"id":      "msg_test123",
+			"type":    "message",
+			"role":    "assistant",
+			"model":   "claude-3-5-haiku-20241022",
+			"content": []map[string]interface{}{},
+		})
+	})
+	defer server.Close()
+
+	client, err := NewHaikuClient("test-key", option.WithBaseURL(server.URL))
+	if err != nil {
+		t.Fatalf("failed to create client: %v", err)
+	}
+
+	ctx := context.Background()
+	_, err = client.callWithRetry(ctx, "test prompt")
+	if err == nil {
+		t.Fatal("expected error for empty content")
+	}
+	if !strings.Contains(err.Error(), "no content blocks") {
+		t.Errorf("expected 'no content blocks' error, got: %v", err)
+	}
+}
+
+func TestBytesWriter(t *testing.T) {
+	w := &bytesWriter{}
+
+	n, err := w.Write([]byte("hello"))
+	if err != nil {
+		t.Fatalf("unexpected error: %v", err)
+	}
+	if n != 5 {
+		t.Errorf("expected n=5, got %d", n)
+	}
+
+	n, err = w.Write([]byte(" world"))
+	if err != nil {
+		t.Fatalf("unexpected error: %v", err)
+	}
+	if n != 6 {
+		t.Errorf("expected n=6, got %d", n)
+	}
+
+	if string(w.buf) != "hello world" {
+		t.Errorf("expected 'hello world', got '%s'", string(w.buf))
+	}
+}
+
+// Verify net.Error interface is properly satisfied for test mocks
+var _ net.Error = (*mockTimeoutError)(nil)
diff --git a/internal/config/config.go b/internal/config/config.go
index 74484a29..46b9a48f 100644
--- a/internal/config/config.go
+++ b/internal/config/config.go
@@ -306,6 +306,43 @@ func ResolveExternalProjectPath(projectName string) string {
 	return path
 }
 
+// HookEntry represents a single config-based hook
+type HookEntry struct {
+	Command string `yaml:"command" mapstructure:"command"` // Shell command to run
+	Name    string `yaml:"name" mapstructure:"name"`       // Optional display name
+}
+
+// GetCloseHooks returns the on_close hooks from config
+func GetCloseHooks() []HookEntry {
+	if v == nil {
+		return nil
+	}
+	var hooks []HookEntry
+	raw := v.Get("hooks.on_close")
+	if raw == nil {
+		return nil
+	}
+
+	// Handle slice of maps (from YAML parsing)
+	if rawSlice, ok := raw.([]interface{}); ok {
+		for _, item := range rawSlice {
+			if m, ok := item.(map[string]interface{}); ok {
+				entry := HookEntry{}
+				if cmd, ok := m["command"].(string); ok {
+					entry.Command = cmd
+				}
+				if name, ok := m["name"].(string); ok {
+					entry.Name = name
+				}
+				if entry.Command != "" {
+					hooks = append(hooks, entry)
+				}
+			}
+		}
+	}
+	return hooks
+}
+
 // GetIdentity resolves the user's identity for messaging.
 // Priority chain:
 // 1. flagValue (if non-empty, from --identity flag)
diff --git a/internal/config/yaml_config.go b/internal/config/yaml_config.go
deleted file mode 100644
index f0b8027a..00000000
--- a/internal/config/yaml_config.go
+++ /dev/null
@@ -1,245 +0,0 @@
-package config
-
-import (
-	"bufio"
-	"fmt"
-	"os"
-	"path/filepath"
-	"regexp"
-	"strings"
-)
-
-// YamlOnlyKeys are configuration keys that must be stored in config.yaml
-// rather than the SQLite database. These are "startup" settings that are
-// read before the database is opened.
-//
-// This fixes GH#536: users were confused when `bd config set no-db true`
-// appeared to succeed but had no effect (because no-db is read from yaml
-// at startup, not from SQLite).
-var YamlOnlyKeys = map[string]bool{
-	// Bootstrap flags (affect how bd starts)
-	"no-db":             true,
-	"no-daemon":         true,
-	"no-auto-flush":     true,
-	"no-auto-import":    true,
-	"json":              true,
-	"auto-start-daemon": true,
-
-	// Database and identity
-	"db":       true,
-	"actor":    true,
-	"identity": true,
-
-	// Timing settings
-	"flush-debounce":       true,
-	"lock-timeout":         true,
-	"remote-sync-interval": true,
-
-	// Git settings
-	"git.author":      true,
-	"git.no-gpg-sign": true,
-	"no-push":         true,
-
-	// Sync settings
-	"sync-branch": true,
-	"sync.branch": true,
-	"sync.require_confirmation_on_mass_delete": true,
-
-	// Routing settings
-	"routing.mode":        true,
-	"routing.default":     true,
-	"routing.maintainer":  true,
-	"routing.contributor": true,
-
-	// Create command settings
-	"create.require-description": true,
-}
-
-// IsYamlOnlyKey returns true if the given key should be stored in config.yaml
-// rather than the SQLite database.
-func IsYamlOnlyKey(key string) bool {
-	// Check exact match
-	if YamlOnlyKeys[key] {
-		return true
-	}
-
-	// Check prefix matches for nested keys
-	prefixes := []string{"routing.", "sync.", "git.", "directory.", "repos.", "external_projects."}
-	for _, prefix := range prefixes {
-		if strings.HasPrefix(key, prefix) {
-			return true
-		}
-	}
-
-	return false
-}
-
-// SetYamlConfig sets a configuration value in the project's config.yaml file.
-// It handles both adding new keys and updating existing (possibly commented) keys.
-func SetYamlConfig(key, value string) error {
-	configPath, err := findProjectConfigYaml()
-	if err != nil {
-		return err
-	}
-
-	// Read existing config
-	content, err := os.ReadFile(configPath)
-	if err != nil {
-		return fmt.Errorf("failed to read config.yaml: %w", err)
-	}
-
-	// Update or add the key
-	newContent, err := updateYamlKey(string(content), key, value)
-	if err != nil {
-		return err
-	}
-
-	// Write back
-	if err := os.WriteFile(configPath, []byte(newContent), 0644); err != nil {
-		return fmt.Errorf("failed to write config.yaml: %w", err)
-	}
-
-	return nil
-}
-
-// GetYamlConfig gets a configuration value from config.yaml.
-// Returns empty string if key is not found or is commented out.
-func GetYamlConfig(key string) string {
-	if v == nil {
-		return ""
-	}
-	return v.GetString(key)
-}
-
-// findProjectConfigYaml finds the project's .beads/config.yaml file.
-func findProjectConfigYaml() (string, error) {
-	cwd, err := os.Getwd()
-	if err != nil {
-		return "", fmt.Errorf("failed to get working directory: %w", err)
-	}
-
-	// Walk up parent directories to find .beads/config.yaml
-	for dir := cwd; dir != filepath.Dir(dir); dir = filepath.Dir(dir) {
-		configPath := filepath.Join(dir, ".beads", "config.yaml")
-		if _, err := os.Stat(configPath); err == nil {
-			return configPath, nil
-		}
-	}
-
-	return "", fmt.Errorf("no .beads/config.yaml found (run 'bd init' first)")
-}
-
-// updateYamlKey updates a key in yaml content, handling commented-out keys.
-// If the key exists (commented or not), it updates it in place.
-// If the key doesn't exist, it appends it at the end.
-func updateYamlKey(content, key, value string) (string, error) {
-	// Format the value appropriately
-	formattedValue := formatYamlValue(value)
-	newLine := fmt.Sprintf("%s: %s", key, formattedValue)
-
-	// Build regex to match the key (commented or not)
-	// Matches: "key: value" or "# key: value" with optional leading whitespace
-	keyPattern := regexp.MustCompile(`^(\s*)(#\s*)?` + regexp.QuoteMeta(key) + `\s*:`)
-
-	found := false
-	var result []string
-
-	scanner := bufio.NewScanner(strings.NewReader(content))
-	for scanner.Scan() {
-		line := scanner.Text()
-		if keyPattern.MatchString(line) {
-			// Found the key - replace with new value (uncommented)
-			// Preserve leading whitespace
-			matches := keyPattern.FindStringSubmatch(line)
-			indent := ""
-			if len(matches) > 1 {
-				indent = matches[1]
-			}
-			result = append(result, indent+newLine)
-			found = true
-		} else {
-			result = append(result, line)
-		}
-	}
-
-	if !found {
-		// Key not found - append at end
-		// Add blank line before if content doesn't end with one
-		if len(result) > 0 && result[len(result)-1] != "" {
-			result = append(result, "")
-		}
-		result = append(result, newLine)
-	}
-
-	return strings.Join(result, "\n"), nil
-}
-
-// formatYamlValue formats a value appropriately for YAML.
-func formatYamlValue(value string) string {
-	// Boolean values
-	lower := strings.ToLower(value)
-	if lower == "true" || lower == "false" {
-		return lower
-	}
-
-	// Numeric values - return as-is
-	if isNumeric(value) {
-		return value
-	}
-
-	// Duration values (like "30s", "5m") - return as-is
-	if isDuration(value) {
-		return value
-	}
-
-	// String values that need quoting
-	if needsQuoting(value) {
-		return fmt.Sprintf("%q", value)
-	}
-
-	return value
-}
-
-func isNumeric(s string) bool {
-	if s == "" {
-		return false
-	}
-	for i, c := range s {
-		if c == '-' && i == 0 {
-			continue
-		}
-		if c == '.' {
-			continue
-		}
-		if c < '0' || c > '9' {
-			return false
-		}
-	}
-	return true
-}
-
-func isDuration(s string) bool {
-	if len(s) < 2 {
-		return false
-	}
-	suffix := s[len(s)-1]
-	if suffix != 's' && suffix != 'm' && suffix != 'h' {
-		return false
-	}
-	return isNumeric(s[:len(s)-1])
-}
-
-func needsQuoting(s string) bool {
-	// Quote if contains special YAML characters
-	special := []string{":", "#", "[", "]", "{", "}", ",", "&", "*", "!", "|", ">", "'", "\"", "%", "@", "`"}
-	for _, c := range special {
-		if strings.Contains(s, c) {
-			return true
-		}
-	}
-	// Quote if starts/ends with whitespace
-	if strings.TrimSpace(s) != s {
-		return true
-	}
-	return false
-}
diff --git a/internal/config/yaml_config_test.go b/internal/config/yaml_config_test.go
deleted file mode 100644
index 6fefe8f3..00000000
--- a/internal/config/yaml_config_test.go
+++ /dev/null
@@ -1,206 +0,0 @@
-package config
-
-import (
-	"os"
-	"path/filepath"
-	"strings"
-	"testing"
-)
-
-func TestIsYamlOnlyKey(t *testing.T) {
-	tests := []struct {
-		key      string
-		expected bool
-	}{
-		// Exact matches
-		{"no-db", true},
-		{"no-daemon", true},
-		{"no-auto-flush", true},
-		{"json", true},
-		{"auto-start-daemon", true},
-		{"flush-debounce", true},
-		{"git.author", true},
-		{"git.no-gpg-sign", true},
-
-		// Prefix matches
-		{"routing.mode", true},
-		{"routing.custom-key", true},
-		{"sync.branch", true},
-		{"sync.require_confirmation_on_mass_delete", true},
-		{"directory.labels", true},
-		{"repos.primary", true},
-		{"external_projects.beads", true},
-
-		// SQLite keys (should return false)
-		{"jira.url", false},
-		{"jira.project", false},
-		{"linear.api_key", false},
-		{"github.org", false},
-		{"custom.setting", false},
-		{"status.custom", false},
-		{"issue_prefix", false},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.key, func(t *testing.T) {
-			got := IsYamlOnlyKey(tt.key)
-			if got != tt.expected {
-				t.Errorf("IsYamlOnlyKey(%q) = %v, want %v", tt.key, got, tt.expected)
-			}
-		})
-	}
-}
-
-func TestUpdateYamlKey(t *testing.T) {
-	tests := []struct {
-		name     string
-		content  string
-		key      string
-		value    string
-		expected string
-	}{
-		{
-			name:     "update commented key",
-			content:  "# no-db: false\nother: value",
-			key:      "no-db",
-			value:    "true",
-			expected: "no-db: true\nother: value",
-		},
-		{
-			name:     "update existing key",
-			content:  "no-db: false\nother: value",
-			key:      "no-db",
-			value:    "true",
-			expected: "no-db: true\nother: value",
-		},
-		{
-			name:     "add new key",
-			content:  "other: value",
-			key:      "no-db",
-			value:    "true",
-			expected: "other: value\n\nno-db: true",
-		},
-		{
-			name:     "preserve indentation",
-			content:  " # no-db: false\nother: value",
-			key:      "no-db",
-			value:    "true",
-			expected: " no-db: true\nother: value",
-		},
-		{
-			name:     "handle string value",
-			content:  "# actor: \"\"\nother: value",
-			key:      "actor",
-			value:    "steve",
-			expected: "actor: steve\nother: value",
-		},
-		{
-			name:     "handle duration value",
-			content:  "# flush-debounce: \"5s\"",
-			key:      "flush-debounce",
-			value:    "30s",
-			expected: "flush-debounce: 30s",
-		},
-		{
-			name:     "quote special characters",
-			content:  "other: value",
-			key:      "actor",
-			value:    "user: name",
-			expected: "other: value\n\nactor: \"user: name\"",
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			got, err := updateYamlKey(tt.content, tt.key, tt.value)
-			if err != nil {
-				t.Fatalf("updateYamlKey() error = %v", err)
-			}
-			if got != tt.expected {
-				t.Errorf("updateYamlKey() =\n%q\nwant:\n%q", got, tt.expected)
-			}
-		})
-	}
-}
-
-func TestFormatYamlValue(t *testing.T) {
-	tests := []struct {
-		value    string
-		expected string
-	}{
-		{"true", "true"},
-		{"false", "false"},
-		{"TRUE", "true"},
-		{"FALSE", "false"},
-		{"123", "123"},
-		{"3.14", "3.14"},
-		{"30s", "30s"},
-		{"5m", "5m"},
-		{"simple", "simple"},
-		{"has space", "has space"},
-		{"has:colon", "\"has:colon\""},
-		{"has#hash", "\"has#hash\""},
-		{" leading", "\" leading\""},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.value, func(t *testing.T) {
-			got := formatYamlValue(tt.value)
-			if got != tt.expected {
-				t.Errorf("formatYamlValue(%q) = %q, want %q", tt.value, got, tt.expected)
-			}
-		})
-	}
-}
-
-func TestSetYamlConfig(t *testing.T) {
-	// Create a temp directory with .beads/config.yaml
-	tmpDir, err := os.MkdirTemp("", "beads-yaml-test-*")
-	if err != nil {
-		t.Fatalf("Failed to create temp dir: %v", err)
-	}
-	defer os.RemoveAll(tmpDir)
-
-	beadsDir := filepath.Join(tmpDir, ".beads")
-	if err := os.MkdirAll(beadsDir, 0755); err != nil {
-		t.Fatalf("Failed to create .beads dir: %v", err)
-	}
-
-	configPath := filepath.Join(beadsDir, "config.yaml")
-	initialConfig := `# Beads Config
-# no-db: false
-other-setting: value
-`
-	if err := os.WriteFile(configPath, []byte(initialConfig), 0644); err != nil {
-		t.Fatalf("Failed to write config.yaml: %v", err)
-	}
-
-	// Change to temp directory for the test
-	oldWd, _ := os.Getwd()
-	if err := os.Chdir(tmpDir); err != nil {
-		t.Fatalf("Failed to chdir: %v", err)
-	}
-	defer os.Chdir(oldWd)
-
-	// Test SetYamlConfig
-	if err := SetYamlConfig("no-db", "true"); err != nil {
-		t.Fatalf("SetYamlConfig() error = %v", err)
-	}
-
-	// Read back and verify
-	content, err := os.ReadFile(configPath)
-	if err != nil {
-		t.Fatalf("Failed to read config.yaml: %v", err)
-	}
-
-	contentStr := string(content)
-	if !strings.Contains(contentStr, "no-db: true") {
-		t.Errorf("config.yaml should contain 'no-db: true', got:\n%s", contentStr)
-	}
-	if strings.Contains(contentStr, "# no-db") {
-		t.Errorf("config.yaml should not have commented no-db, got:\n%s", contentStr)
-	}
-	if !strings.Contains(contentStr, "other-setting: value") {
-		t.Errorf("config.yaml should preserve other settings, got:\n%s", contentStr)
-	}
-}
diff --git a/internal/hooks/config_hooks.go b/internal/hooks/config_hooks.go
new file mode 100644
index 00000000..a54ce8b8
--- /dev/null
+++ b/internal/hooks/config_hooks.go
@@ -0,0 +1,66 @@
+// Package hooks provides a hook system for extensibility.
+// This file implements config-based hooks defined in .beads/config.yaml.
+
+package hooks
+
+import (
+	"context"
+	"fmt"
+	"os"
+	"os/exec"
+	"strconv"
+	"time"
+
+	"github.com/steveyegge/beads/internal/config"
+	"github.com/steveyegge/beads/internal/types"
+)
+
+// RunConfigCloseHooks executes all on_close hooks from config.yaml.
+// Hook commands receive issue data via environment variables:
+// - BEAD_ID: Issue ID (e.g., bd-abc1)
+// - BEAD_TITLE: Issue title
+// - BEAD_TYPE: Issue type (task, bug, feature, etc.)
+// - BEAD_PRIORITY: Priority (0-4)
+// - BEAD_CLOSE_REASON: Close reason if provided
+//
+// Hooks run synchronously but failures are logged as warnings and don't
+// block the close operation.
+func RunConfigCloseHooks(ctx context.Context, issue *types.Issue) {
+	hooks := config.GetCloseHooks()
+	if len(hooks) == 0 {
+		return
+	}
+
+	// Build environment variables for hooks
+	env := append(os.Environ(),
+		"BEAD_ID="+issue.ID,
+		"BEAD_TITLE="+issue.Title,
+		"BEAD_TYPE="+string(issue.IssueType),
+		"BEAD_PRIORITY="+strconv.Itoa(issue.Priority),
+		"BEAD_CLOSE_REASON="+issue.CloseReason,
+	)
+
+	timeout := 10 * time.Second
+
+	for _, hook := range hooks {
+		hookCtx, cancel := context.WithTimeout(ctx, timeout)
+
+		// #nosec G204 -- command comes from user's config file
+		cmd := exec.CommandContext(hookCtx, "sh", "-c", hook.Command)
+		cmd.Env = env
+		cmd.Stdout = os.Stdout
+		cmd.Stderr = os.Stderr
+
+		err := cmd.Run()
+		cancel()
+
+		if err != nil {
+			// Log warning but don't fail the close
+			name := hook.Name
+			if name == "" {
+				name = hook.Command
+			}
+			fmt.Fprintf(os.Stderr, "Warning: close hook %q failed: %v\n", name, err)
+		}
+	}
+}
diff --git a/internal/hooks/config_hooks_test.go b/internal/hooks/config_hooks_test.go
new file mode 100644
index 00000000..48def26a
--- /dev/null
+++ b/internal/hooks/config_hooks_test.go
@@ -0,0 +1,271 @@
+package hooks
+
+import (
+	"context"
+	"os"
+	"path/filepath"
+	"strings"
+	"testing"
+	"time"
+
+	"github.com/steveyegge/beads/internal/config"
+	"github.com/steveyegge/beads/internal/types"
+)
+
+func TestRunConfigCloseHooks_NoHooks(t *testing.T) {
+	// Create a temp dir without any config
+	tmpDir := t.TempDir()
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.MkdirAll(beadsDir, 0755); err != nil {
+		t.Fatalf("Failed to create .beads dir: %v", err)
+	}
+
+	// Change to the temp dir and initialize config
+	oldWd, _ := os.Getwd()
+	defer func() { _ = os.Chdir(oldWd) }()
+	if err := os.Chdir(tmpDir); err != nil {
+		t.Fatalf("Failed to chdir: %v", err)
+	}
+
+	// Re-initialize config
+	if err := config.Initialize(); err != nil {
+		t.Fatalf("Failed to initialize config: %v", err)
+	}
+
+	issue := &types.Issue{ID: "bd-test", Title: "Test Issue"}
+	ctx := context.Background()
+
+	// Should not panic with no hooks
+	RunConfigCloseHooks(ctx, issue)
+}
+
+func TestRunConfigCloseHooks_ExecutesCommand(t *testing.T) {
+	tmpDir := t.TempDir()
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.MkdirAll(beadsDir, 0755); err != nil {
+		t.Fatalf("Failed to create .beads dir: %v", err)
+	}
+
+	outputFile := filepath.Join(tmpDir, "hook_output.txt")
+
+	// Create config.yaml with a close hook
+	configContent := `hooks:
+  on_close:
+    - name: test-hook
+      command: echo "$BEAD_ID $BEAD_TITLE" > ` + outputFile + `
+`
+	configPath := filepath.Join(beadsDir, "config.yaml")
+	if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
+		t.Fatalf("Failed to write config: %v", err)
+	}
+
+	// Change to the temp dir and initialize config
+	oldWd, _ := os.Getwd()
+	defer func() { _ = os.Chdir(oldWd) }()
+	if err := os.Chdir(tmpDir); err != nil {
+		t.Fatalf("Failed to chdir: %v", err)
+	}
+
+	// Re-initialize config
+	if err := config.Initialize(); err != nil {
+		t.Fatalf("Failed to initialize config: %v", err)
+	}
+
+	issue := &types.Issue{
+		ID:          "bd-abc1",
+		Title:       "Test Issue",
+		IssueType:   types.TypeBug,
+		Priority:    1,
+		CloseReason: "Fixed",
+	}
+	ctx := context.Background()
+
+	RunConfigCloseHooks(ctx, issue)
+
+	// Wait for hook to complete
+	time.Sleep(100 * time.Millisecond)
+
+	// Verify output
+	output, err := os.ReadFile(outputFile)
+	if err != nil {
+		t.Fatalf("Failed to read output file: %v", err)
+	}
+
+	expected := "bd-abc1 Test Issue"
+	if !strings.Contains(string(output), expected) {
+		t.Errorf("Hook output = %q, want to contain %q", string(output), expected)
+	}
+}
+
+func TestRunConfigCloseHooks_EnvVars(t *testing.T) {
+	tmpDir := t.TempDir()
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.MkdirAll(beadsDir, 0755); err != nil {
+		t.Fatalf("Failed to create .beads dir: %v", err)
+	}
+
+	outputFile := filepath.Join(tmpDir, "env_output.txt")
+
+	// Create config.yaml with a close hook that outputs all env vars
+	configContent := `hooks:
+  on_close:
+    - name: env-check
+      command: echo "ID=$BEAD_ID TYPE=$BEAD_TYPE PRIORITY=$BEAD_PRIORITY REASON=$BEAD_CLOSE_REASON" > ` + outputFile + `
+`
+	configPath := filepath.Join(beadsDir, "config.yaml")
+	if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
+		t.Fatalf("Failed to write config: %v", err)
+	}
+
+	// Change to the temp dir and initialize config
+	oldWd, _ := os.Getwd()
+	defer func() { _ = os.Chdir(oldWd) }()
+	if err := os.Chdir(tmpDir); err != nil {
+		t.Fatalf("Failed to chdir: %v", err)
+	}
+
+	// Re-initialize config
+	if err := config.Initialize(); err != nil {
+		t.Fatalf("Failed to initialize config: %v", err)
+	}
+
+	issue := &types.Issue{
+		ID:          "bd-xyz9",
+		Title:       "Bug Fix",
+		IssueType:   types.TypeFeature,
+		Priority:    2,
+		CloseReason: "Completed",
+	}
+	ctx := context.Background()
+
+	RunConfigCloseHooks(ctx, issue)
+
+	// Wait for hook to complete
+	time.Sleep(100 * time.Millisecond)
+
+	// Verify output contains all env vars
+	output, err := os.ReadFile(outputFile)
+	if err != nil {
+		t.Fatalf("Failed to read output file: %v", err)
+	}
+
+	outputStr := string(output)
+	checks := []string{
+		"ID=bd-xyz9",
+		"TYPE=feature",
+		"PRIORITY=2",
+		"REASON=Completed",
+	}
+
+	for _, check := range checks {
+		if !strings.Contains(outputStr, check) {
+			t.Errorf("Hook output = %q, want to contain %q", outputStr, check)
+		}
+	}
+}
+
+func TestRunConfigCloseHooks_HookFailure(t *testing.T) {
+	tmpDir := t.TempDir()
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.MkdirAll(beadsDir, 0755); err != nil {
+		t.Fatalf("Failed to create .beads dir: %v", err)
+	}
+
+	successFile := filepath.Join(tmpDir, "success.txt")
+
+	// Create config.yaml with a failing hook followed by a succeeding one
+	configContent := `hooks:
+  on_close:
+    - name: failing-hook
+      command: exit 1
+    - name: success-hook
+      command: echo "success" > ` + successFile + `
+`
+	configPath := filepath.Join(beadsDir, "config.yaml")
+	if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
+		t.Fatalf("Failed to write config: %v", err)
+	}
+
+	// Change to the temp dir and initialize config
+	oldWd, _ := os.Getwd()
+	defer func() { _ = os.Chdir(oldWd) }()
+	if err := os.Chdir(tmpDir); err != nil {
+		t.Fatalf("Failed to chdir: %v", err)
+	}
+
+	// Re-initialize config
+	if err := config.Initialize(); err != nil {
+		t.Fatalf("Failed to initialize config: %v", err)
+	}
+
+	issue := &types.Issue{ID: "bd-test", Title: "Test"}
+	ctx := context.Background()
+
+	// Should not panic even with failing hook
+	RunConfigCloseHooks(ctx, issue)
+
+	// Wait for hooks to complete
+	time.Sleep(100 * time.Millisecond)
+
+	// Verify second hook still ran
+	output, err := os.ReadFile(successFile)
+	if err != nil {
+		t.Fatalf("Second hook should have run despite first failing: %v", err)
+	}
+
+	if !strings.Contains(string(output), "success") {
+		t.Error("Second hook did not produce expected output")
+	}
+}
+
+func TestGetCloseHooks(t *testing.T) {
+	tmpDir := t.TempDir()
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.MkdirAll(beadsDir, 0755); err != nil {
+		t.Fatalf("Failed to create .beads dir: %v", err)
+	}
+
+	// Create config.yaml with multiple hooks
+	configContent := `hooks:
+  on_close:
+    - name: first-hook
+      command: echo first
+    - name: second-hook
+      command: echo second
+    - command: echo unnamed
+`
+	configPath := filepath.Join(beadsDir, "config.yaml")
+	if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
+		t.Fatalf("Failed to write config: %v", err)
+	}
+
+	// Change to the temp dir and initialize config
+	oldWd, _ := os.Getwd()
+	defer func() { _ = os.Chdir(oldWd) }()
+	if err := os.Chdir(tmpDir); err != nil {
+		t.Fatalf("Failed to chdir: %v", err)
+	}
+
+	// Re-initialize config
+	if err := config.Initialize(); err != nil {
+		t.Fatalf("Failed to initialize config: %v", err)
+	}
+
+	hooks := config.GetCloseHooks()
+
+	if len(hooks) != 3 {
+		t.Fatalf("Expected 3 hooks, got %d", len(hooks))
+	}
+
+	if hooks[0].Name != "first-hook" || hooks[0].Command != "echo first" {
+		t.Errorf("First hook = %+v, want name=first-hook, command=echo first", hooks[0])
+	}
+
+	if hooks[1].Name != "second-hook" || hooks[1].Command != "echo second" {
+		t.Errorf("Second hook = %+v, want name=second-hook, command=echo second", hooks[1])
+	}
+
+	if hooks[2].Name != "" || hooks[2].Command != "echo unnamed" {
+		t.Errorf("Third hook = %+v, want name='', command=echo unnamed", hooks[2])
+	}
+}
diff --git a/internal/importer/importer.go b/internal/importer/importer.go
index 6adb527a..47ecb8f0 100644
--- a/internal/importer/importer.go
+++ b/internal/importer/importer.go
@@ -231,13 +231,8 @@ func handlePrefixMismatch(ctx context.Context, sqliteStore *sqlite.SQLiteStorage
 	var tombstonesToRemove []string
 
 	for _, issue := range issues {
-		// GH#422: Check if issue ID starts with configured prefix directly
-		// rather than extracting/guessing. This handles multi-hyphen prefixes
-		// like "asianops-audit-" correctly.
-		prefixMatches := strings.HasPrefix(issue.ID, configuredPrefix+"-")
-		if !prefixMatches {
-			// Extract prefix for error reporting (best effort)
-			prefix := utils.ExtractIssuePrefix(issue.ID)
+		prefix := utils.ExtractIssuePrefix(issue.ID)
+		if !allowedPrefixes[prefix] {
 			if issue.IsTombstone() {
 				tombstoneMismatchPrefixes[prefix]++
 				tombstonesToRemove = append(tombstonesToRemove, issue.ID)
@@ -572,11 +567,8 @@ func upsertIssues(ctx context.Context, sqliteStore *sqlite.SQLiteStorage, issues
 		updates["acceptance_criteria"] = incoming.AcceptanceCriteria
 		updates["notes"] = incoming.Notes
 		updates["closed_at"] = incoming.ClosedAt
-		// Pinned field (bd-phtv): Only update if explicitly true in JSONL
-		// (omitempty means false values are absent, so false = don't change existing)
-		if incoming.Pinned {
-			updates["pinned"] = incoming.Pinned
-		}
+		// Pinned field (bd-7h5)
+		updates["pinned"] = incoming.Pinned
 
 		if incoming.Assignee != "" {
 			updates["assignee"] = incoming.Assignee
@@ -670,11 +662,8 @@ func upsertIssues(ctx context.Context, sqliteStore *sqlite.SQLiteStorage, issues
 		updates["acceptance_criteria"] = incoming.AcceptanceCriteria
 		updates["notes"] = incoming.Notes
 		updates["closed_at"] = incoming.ClosedAt
-		// Pinned field (bd-phtv): Only update if explicitly true in JSONL
-		// (omitempty means false values are absent, so false = don't change existing)
-		if incoming.Pinned {
-			updates["pinned"] = incoming.Pinned
-		}
+		// Pinned field (bd-7h5)
+		updates["pinned"] = incoming.Pinned
 
 		if incoming.Assignee != "" {
 			updates["assignee"] = incoming.Assignee
diff --git a/internal/importer/importer_test.go b/internal/importer/importer_test.go
index 6ad7e7f0..e11634b0 100644
--- a/internal/importer/importer_test.go
+++ b/internal/importer/importer_test.go
@@ -1479,151 +1479,7 @@ func TestImportMixedPrefixMismatch(t *testing.T) {
 	}
 }
 
-// TestImportPreservesPinnedField tests that importing from JSONL (which has omitempty
-// for the pinned field) does NOT reset an existing pinned=true issue to pinned=false.
-//
-// Bug scenario (bd-phtv):
-// 1. User runs `bd pin ` which sets pinned=true in SQLite
-// 2. Any subsequent bd command (e.g., `bd show`) triggers auto-import from JSONL
-// 3. JSONL has pinned=false due to omitempty (field absent means false in Go)
-// 4. Import overwrites pinned=true with pinned=false, losing the pinned state
-//
-// Expected: Import should preserve existing pinned=true when incoming pinned=false
-// (since false just means "field was absent in JSONL due to omitempty").
-func TestImportPreservesPinnedField(t *testing.T) {
-	ctx := context.Background()
-
-	tmpDB := t.TempDir() + "/test.db"
-	store, err := sqlite.New(context.Background(), tmpDB)
-	if err != nil {
-		t.Fatalf("Failed to create store: %v", err)
-	}
-	defer store.Close()
-
-	if err := store.SetConfig(ctx, "issue_prefix", "test"); err != nil {
-		t.Fatalf("Failed to set prefix: %v", err)
-	}
-
-	// Create an issue with pinned=true (simulates `bd pin` command)
-	pinnedIssue := &types.Issue{
-		ID:        "test-abc123",
-		Title:     "Pinned Issue",
-		Status:    types.StatusOpen,
-		Priority:  2,
-		IssueType: types.TypeTask,
-		Pinned:    true, // This is set by `bd pin`
-		CreatedAt: time.Now().Add(-time.Hour),
-		UpdatedAt: time.Now().Add(-time.Hour),
-	}
-	pinnedIssue.ContentHash = pinnedIssue.ComputeContentHash()
-	if err := store.CreateIssue(ctx, pinnedIssue, "test-setup"); err != nil {
-		t.Fatalf("Failed to create pinned issue: %v", err)
-	}
-
-	// Verify issue is pinned before import
-	before, err := store.GetIssue(ctx, "test-abc123")
-	if err != nil {
-		t.Fatalf("Failed to get issue before import: %v", err)
-	}
-	if !before.Pinned {
-		t.Fatal("Issue should be pinned before import")
-	}
-
-	// Import same issue from JSONL (simulates auto-import after git pull)
-	// JSONL has pinned=false because omitempty means absent fields are false
-	importedIssue := &types.Issue{
-		ID:        "test-abc123",
-		Title:     "Pinned Issue", // Same content
-		Status:    types.StatusOpen,
-		Priority:  2,
-		IssueType: types.TypeTask,
-		Pinned:    false, // This is what JSONL deserialization produces due to omitempty
-		CreatedAt: time.Now().Add(-time.Hour),
-		UpdatedAt: time.Now(), // Newer timestamp to trigger update
-	}
-	importedIssue.ContentHash = importedIssue.ComputeContentHash()
-
-	result, err := ImportIssues(ctx, tmpDB, store, []*types.Issue{importedIssue}, Options{})
-	if err != nil {
-		t.Fatalf("Import failed: %v", err)
-	}
-
-	// Import should recognize this as an update (same ID, different timestamp)
-	// The unchanged count may vary based on whether other fields changed
-	t.Logf("Import result: Created=%d Updated=%d Unchanged=%d", result.Created, result.Updated, result.Unchanged)
-
-	// CRITICAL: Verify pinned field was preserved
-	after, err := store.GetIssue(ctx, "test-abc123")
-	if err != nil {
-		t.Fatalf("Failed to get issue after import: %v", err)
-	}
-	if !after.Pinned {
-		t.Error("FAIL (bd-phtv): pinned=true was reset to false by import. " +
-			"Import should preserve existing pinned field when incoming is false (omitempty).")
-	}
-}
-
-// TestImportSetsPinnedTrue tests that importing an issue with pinned=true
-// correctly sets the pinned field in the database.
-func TestImportSetsPinnedTrue(t *testing.T) {
-	ctx := context.Background()
-
-	tmpDB := t.TempDir() + "/test.db"
-	store, err := sqlite.New(context.Background(), tmpDB)
-	if err != nil {
-		t.Fatalf("Failed to create store: %v", err)
-	}
-	defer store.Close()
-
-	if err := store.SetConfig(ctx, "issue_prefix", "test"); err != nil {
-		t.Fatalf("Failed to set prefix: %v", err)
-	}
-
-	// Create an unpinned issue
-	unpinnedIssue := &types.Issue{
-		ID:        "test-abc123",
-		Title:     "Unpinned Issue",
-		Status:    types.StatusOpen,
-		Priority:  2,
-		IssueType: types.TypeTask,
-		Pinned:    false,
-		CreatedAt: time.Now().Add(-time.Hour),
-		UpdatedAt: time.Now().Add(-time.Hour),
-	}
-	unpinnedIssue.ContentHash = unpinnedIssue.ComputeContentHash()
-	if err := store.CreateIssue(ctx, unpinnedIssue, "test-setup"); err != nil {
-		t.Fatalf("Failed to create issue: %v", err)
-	}
-
-	// Import with pinned=true (from JSONL that explicitly has "pinned": true)
-	importedIssue := &types.Issue{
-		ID:        "test-abc123",
-		Title:     "Unpinned Issue",
-		Status:    types.StatusOpen,
-		Priority:  2,
-		IssueType: types.TypeTask,
-		Pinned:    true, // Explicitly set to true in JSONL
-		CreatedAt: time.Now().Add(-time.Hour),
-		UpdatedAt: time.Now(), // Newer timestamp
-	}
-	importedIssue.ContentHash = importedIssue.ComputeContentHash()
-
-	result, err := ImportIssues(ctx, tmpDB, store, []*types.Issue{importedIssue}, Options{})
-	if err != nil {
-		t.Fatalf("Import failed: %v", err)
-	}
-	t.Logf("Import result: Created=%d Updated=%d Unchanged=%d", result.Created, result.Updated, result.Unchanged)
-
-	// Verify pinned field was set to true
-	after, err := store.GetIssue(ctx, "test-abc123")
-	if err != nil {
-		t.Fatalf("Failed to get issue after import: %v", err)
-	}
-	if !after.Pinned {
-		t.Error("FAIL: pinned=true from JSONL should set the field to true in database")
-	}
-}
-
+// TestMultiRepoPrefixValidation tests GH#686: multi-repo allows foreign prefixes.
 func TestMultiRepoPrefixValidation(t *testing.T) {
 	if err := config.Initialize(); err != nil {
 		t.Fatalf("Failed to initialize config: %v", err)
diff --git a/internal/rpc/client.go b/internal/rpc/client.go
index b6c9156b..0c70f0cd 100644
--- a/internal/rpc/client.go
+++ b/internal/rpc/client.go
@@ -395,48 +395,6 @@ func (c *Client) EpicStatus(args *EpicStatusArgs) (*Response, error) {
 	return c.Execute(OpEpicStatus, args)
 }
 
-// Gate operations (bd-likt)
-
-// GateCreate creates a gate via the daemon
-func (c *Client) GateCreate(args *GateCreateArgs) (*Response, error) {
-	return c.Execute(OpGateCreate, args)
-}
-
-// GateList lists gates via the daemon
-func (c *Client) GateList(args *GateListArgs) (*Response, error) {
-	return c.Execute(OpGateList, args)
-}
-
-// GateShow shows a gate via the daemon
-func (c *Client) GateShow(args *GateShowArgs) (*Response, error) {
-	return c.Execute(OpGateShow, args)
-}
-
-// GateClose closes a gate via the daemon
-func (c *Client) GateClose(args *GateCloseArgs) (*Response, error) {
-	return c.Execute(OpGateClose, args)
-}
-
-// GateWait adds waiters to a gate via the daemon
-func (c *Client) GateWait(args *GateWaitArgs) (*Response, error) {
-	return c.Execute(OpGateWait, args)
-}
-
-// GetWorkerStatus retrieves worker status via the daemon
-func (c *Client) GetWorkerStatus(args *GetWorkerStatusArgs) (*GetWorkerStatusResponse, error) {
-	resp, err := c.Execute(OpGetWorkerStatus, args)
-	if err != nil {
-		return nil, err
-	}
-
-	var result GetWorkerStatusResponse
-	if err := json.Unmarshal(resp.Data, &result); err != nil {
-		return nil, fmt.Errorf("failed to unmarshal worker status response: %w", err)
-	}
-
-	return &result, nil
-}
-
 // cleanupStaleDaemonArtifacts removes stale daemon.pid file when socket is missing and lock is free.
 // This prevents stale artifacts from accumulating after daemon crashes.
 // Only removes pid file - lock file is managed by OS (released on process exit).
diff --git a/internal/rpc/protocol.go b/internal/rpc/protocol.go
index 8575fddf..c92d92de 100644
--- a/internal/rpc/protocol.go
+++ b/internal/rpc/protocol.go
@@ -2,7 +2,6 @@ package rpc
 
 import (
 	"encoding/json"
-	"time"
 )
 
 // Operation constants for all bd commands
@@ -35,18 +34,9 @@ const (
 	OpExport     = "export"
 	OpImport     = "import"
 	OpEpicStatus = "epic_status"
-	OpGetMutations        = "get_mutations"
-	OpGetMoleculeProgress = "get_molecule_progress"
-	OpShutdown            = "shutdown"
-	OpDelete              = "delete"
-	OpGetWorkerStatus     = "get_worker_status"
-
-	// Gate operations (bd-likt)
-	OpGateCreate = "gate_create"
-	OpGateList   = "gate_list"
-	OpGateShow   = "gate_show"
-	OpGateClose  = "gate_close"
-	OpGateWait   = "gate_wait"
+	OpGetMutations = "get_mutations"
+	OpShutdown     = "shutdown"
+	OpDelete       = "delete"
 )
 
 // Request represents an RPC request from client to daemon
@@ -423,92 +413,3 @@ type ImportArgs struct {
 type GetMutationsArgs struct {
 	Since int64 `json:"since"` // Unix timestamp in milliseconds (0 for all recent)
 }
-
-// Gate operations (bd-likt)
-
-// GateCreateArgs represents arguments for creating a gate
-type GateCreateArgs struct {
-	Title     string        `json:"title"`
-	AwaitType string        `json:"await_type"` // gh:run, gh:pr, timer, human, mail
-	AwaitID   string        `json:"await_id"`   // ID/value for the await type
-	Timeout   time.Duration `json:"timeout"`    // Timeout duration
-	Waiters   []string      `json:"waiters"`    // Mail addresses to notify when gate clears
-}
-
-// GateCreateResult represents the result of creating a gate
-type GateCreateResult struct {
-	ID string `json:"id"` // Created gate ID
-}
-
-// GateListArgs represents arguments for listing gates
-type GateListArgs struct {
-	All bool `json:"all"` // Include closed gates
-}
-
-// GateShowArgs represents arguments for showing a gate
-type GateShowArgs struct {
-	ID string `json:"id"` // Gate ID (partial or full)
-}
-
-// GateCloseArgs represents arguments for closing a gate
-type GateCloseArgs struct {
-	ID     string `json:"id"`               // Gate ID (partial or full)
-	Reason string `json:"reason,omitempty"` // Close reason
-}
-
-// GateWaitArgs represents arguments for adding waiters to a gate
-type GateWaitArgs struct {
-	ID      string   `json:"id"`      // Gate ID (partial or full)
-	Waiters []string `json:"waiters"` // Additional waiters to add
-}
-
-// GateWaitResult represents the result of adding waiters
-type GateWaitResult struct {
-	AddedCount int `json:"added_count"` // Number of new waiters added
-}
-
-// GetWorkerStatusArgs represents arguments for retrieving worker status
-type GetWorkerStatusArgs struct {
-	// Assignee filters to a specific worker (optional, empty = all workers)
-	Assignee string `json:"assignee,omitempty"`
-}
-
-// WorkerStatus represents the status of a single worker and their current work
-type WorkerStatus struct {
-	Assignee      string `json:"assignee"`                 // Worker identifier
-	MoleculeID    string `json:"molecule_id,omitempty"`    // Parent molecule/epic ID (if working on a step)
-	MoleculeTitle string `json:"molecule_title,omitempty"` // Parent molecule/epic title
-	CurrentStep   int    `json:"current_step,omitempty"`   // Current step number (1-indexed)
-	TotalSteps    int    `json:"total_steps,omitempty"`    // Total number of steps in molecule
-	StepID        string `json:"step_id,omitempty"`        // Current step issue ID
-	StepTitle     string `json:"step_title,omitempty"`     // Current step issue title
-	LastActivity  string `json:"last_activity"`            // ISO 8601 timestamp of last update
-	Status        string `json:"status"`                   // Current work status (in_progress, blocked, etc.)
-}
-
-// GetWorkerStatusResponse is the response for get_worker_status operation
-type GetWorkerStatusResponse struct {
-	Workers []WorkerStatus `json:"workers"`
-}
-
-// GetMoleculeProgressArgs represents arguments for the get_molecule_progress operation
-type GetMoleculeProgressArgs struct {
-	MoleculeID string `json:"molecule_id"` // The ID of the molecule (parent issue)
-}
-
-// MoleculeStep represents a single step within a molecule
-type MoleculeStep struct {
-	ID        string  `json:"id"`
-	Title     string  `json:"title"`
-	Status    string  `json:"status"`     // "done", "current", "ready", "blocked"
-	StartTime *string `json:"start_time"` // ISO 8601 timestamp when step was created
-	CloseTime *string `json:"close_time"` // ISO 8601 timestamp when step was closed (if done)
-}
-
-// MoleculeProgress represents the progress of a molecule (parent issue with steps)
-type MoleculeProgress struct {
-	MoleculeID string         `json:"molecule_id"`
-	Title      string         `json:"title"`
-	Assignee   string         `json:"assignee"`
-	Steps      []MoleculeStep `json:"steps"`
-}
diff --git a/internal/rpc/server_core.go b/internal/rpc/server_core.go
index 5fc6aee0..27c1b751 100644
--- a/internal/rpc/server_core.go
+++ b/internal/rpc/server_core.go
@@ -1,7 +1,6 @@
 package rpc
 
 import (
-	"context"
 	"encoding/json"
 	"fmt"
 	"net"
@@ -11,7+10,6 @@ import (
 	"time"
 
 	"github.com/steveyegge/beads/internal/storage"
-	"github.com/steveyegge/beads/internal/types"
 )
 
 // ServerVersion is the version of this RPC server
@@ -82,8 +80,6 @@ const (
 type MutationEvent struct {
 	Type      string    // One of the Mutation* constants
 	IssueID   string    // e.g., "bd-42"
-	Title     string    // Issue title for display context (may be empty for some operations)
-	Assignee  string    // Issue assignee for display context (may be empty)
 	Timestamp time.Time
 	// Optional metadata for richer events (used by status, bonded, etc.)
OldStatus string `json:"old_status,omitempty"` // Previous status (for status events) @@ -142,13 +138,10 @@ func NewServer(socketPath string, store storage.Storage, workspacePath string, d // emitMutation sends a mutation event to the daemon's event-driven loop. // Non-blocking: drops event if channel is full (sync will happen eventually). // Also stores in recent mutations buffer for polling. -// Title and assignee provide context for activity feeds; pass empty strings if unknown. -func (s *Server) emitMutation(eventType, issueID, title, assignee string) { +func (s *Server) emitMutation(eventType, issueID string) { s.emitRichMutation(MutationEvent{ - Type: eventType, - IssueID: issueID, - Title: title, - Assignee: assignee, + Type: eventType, + IssueID: issueID, }) } @@ -234,120 +227,3 @@ func (s *Server) handleGetMutations(req *Request) Response { Data: data, } } - -// handleGetMoleculeProgress handles the get_molecule_progress RPC operation -// Returns detailed progress for a molecule (parent issue with child steps) -func (s *Server) handleGetMoleculeProgress(req *Request) Response { - var args GetMoleculeProgressArgs - if err := json.Unmarshal(req.Args, &args); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("invalid arguments: %v", err), - } - } - - store := s.storage - if store == nil { - return Response{ - Success: false, - Error: "storage not available", - } - } - - ctx := s.reqCtx(req) - - // Get the molecule (parent issue) - molecule, err := store.GetIssue(ctx, args.MoleculeID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to get molecule: %v", err), - } - } - if molecule == nil { - return Response{ - Success: false, - Error: fmt.Sprintf("molecule not found: %s", args.MoleculeID), - } - } - - // Get children (issues that have parent-child dependency on this molecule) - var children []*types.IssueWithDependencyMetadata - if sqliteStore, ok := store.(interface { - GetDependentsWithMetadata(ctx 
context.Context, issueID string) ([]*types.IssueWithDependencyMetadata, error) - }); ok { - allDependents, err := sqliteStore.GetDependentsWithMetadata(ctx, args.MoleculeID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to get molecule children: %v", err), - } - } - // Filter for parent-child relationships only - for _, dep := range allDependents { - if dep.DependencyType == types.DepParentChild { - children = append(children, dep) - } - } - } - - // Get blocked issue IDs for status computation - blockedIDs := make(map[string]bool) - if sqliteStore, ok := store.(interface { - GetBlockedIssueIDs(ctx context.Context) ([]string, error) - }); ok { - ids, err := sqliteStore.GetBlockedIssueIDs(ctx) - if err == nil { - for _, id := range ids { - blockedIDs[id] = true - } - } - } - - // Build steps from children - steps := make([]MoleculeStep, 0, len(children)) - for _, child := range children { - step := MoleculeStep{ - ID: child.ID, - Title: child.Title, - } - - // Compute step status - switch child.Status { - case types.StatusClosed: - step.Status = "done" - case types.StatusInProgress: - step.Status = "current" - default: // open, blocked, etc. 
- if blockedIDs[child.ID] { - step.Status = "blocked" - } else { - step.Status = "ready" - } - } - - // Set timestamps - startTime := child.CreatedAt.Format(time.RFC3339) - step.StartTime = &startTime - - if child.ClosedAt != nil { - closeTime := child.ClosedAt.Format(time.RFC3339) - step.CloseTime = &closeTime - } - - steps = append(steps, step) - } - - progress := MoleculeProgress{ - MoleculeID: molecule.ID, - Title: molecule.Title, - Assignee: molecule.Assignee, - Steps: steps, - } - - data, _ := json.Marshal(progress) - return Response{ - Success: true, - Data: data, - } -} diff --git a/internal/rpc/server_issues_epics.go b/internal/rpc/server_issues_epics.go index 7a680962..22c2471a 100644 --- a/internal/rpc/server_issues_epics.go +++ b/internal/rpc/server_issues_epics.go @@ -350,7 +350,7 @@ func (s *Server) handleCreate(req *Request) Response { } // Emit mutation event for event-driven daemon - s.emitMutation(MutationCreate, issue.ID, issue.Title, issue.Assignee) + s.emitMutation(MutationCreate, issue.ID) data, _ := json.Marshal(issue) return Response{ @@ -470,13 +470,11 @@ func (s *Server) handleUpdate(req *Request) Response { s.emitRichMutation(MutationEvent{ Type: MutationStatus, IssueID: updateArgs.ID, - Title: issue.Title, - Assignee: issue.Assignee, OldStatus: string(issue.Status), NewStatus: *updateArgs.Status, }) } else { - s.emitMutation(MutationUpdate, updateArgs.ID, issue.Title, issue.Assignee) + s.emitMutation(MutationUpdate, updateArgs.ID) } } @@ -546,8 +544,6 @@ func (s *Server) handleClose(req *Request) Response { s.emitRichMutation(MutationEvent{ Type: MutationStatus, IssueID: closeArgs.ID, - Title: issue.Title, - Assignee: issue.Assignee, OldStatus: oldStatus, NewStatus: "closed", }) @@ -644,7 +640,7 @@ func (s *Server) handleDelete(req *Request) Response { } // Emit mutation event for event-driven daemon - s.emitMutation(MutationDelete, issueID, issue.Title, issue.Assignee) + s.emitMutation(MutationDelete, issueID) deletedCount++ } @@ 
-1377,341 +1373,3 @@ func (s *Server) handleEpicStatus(req *Request) Response { Data: data, } } - -// Gate handlers (bd-likt) - -func (s *Server) handleGateCreate(req *Request) Response { - var args GateCreateArgs - if err := json.Unmarshal(req.Args, &args); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("invalid gate create args: %v", err), - } - } - - store := s.storage - if store == nil { - return Response{ - Success: false, - Error: "storage not available", - } - } - - ctx := s.reqCtx(req) - now := time.Now() - - // Create gate issue - gate := &types.Issue{ - Title: args.Title, - IssueType: types.TypeGate, - Status: types.StatusOpen, - Priority: 1, // Gates are typically high priority - Assignee: "deacon/", - Wisp: true, // Gates are wisps (ephemeral) - AwaitType: args.AwaitType, - AwaitID: args.AwaitID, - Timeout: args.Timeout, - Waiters: args.Waiters, - CreatedAt: now, - UpdatedAt: now, - } - gate.ContentHash = gate.ComputeContentHash() - - if err := store.CreateIssue(ctx, gate, s.reqActor(req)); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to create gate: %v", err), - } - } - - // Emit mutation event - s.emitMutation(MutationCreate, gate.ID, gate.Title, gate.Assignee) - - data, _ := json.Marshal(GateCreateResult{ID: gate.ID}) - return Response{ - Success: true, - Data: data, - } -} - -func (s *Server) handleGateList(req *Request) Response { - var args GateListArgs - if err := json.Unmarshal(req.Args, &args); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("invalid gate list args: %v", err), - } - } - - store := s.storage - if store == nil { - return Response{ - Success: false, - Error: "storage not available", - } - } - - ctx := s.reqCtx(req) - - // Build filter for gates - gateType := types.TypeGate - filter := types.IssueFilter{ - IssueType: &gateType, - } - if !args.All { - openStatus := types.StatusOpen - filter.Status = &openStatus - } - - gates, err := 
store.SearchIssues(ctx, "", filter) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to list gates: %v", err), - } - } - - data, _ := json.Marshal(gates) - return Response{ - Success: true, - Data: data, - } -} - -func (s *Server) handleGateShow(req *Request) Response { - var args GateShowArgs - if err := json.Unmarshal(req.Args, &args); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("invalid gate show args: %v", err), - } - } - - store := s.storage - if store == nil { - return Response{ - Success: false, - Error: "storage not available", - } - } - - ctx := s.reqCtx(req) - - // Resolve partial ID - gateID, err := utils.ResolvePartialID(ctx, store, args.ID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to resolve gate ID: %v", err), - } - } - - gate, err := store.GetIssue(ctx, gateID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to get gate: %v", err), - } - } - if gate == nil { - return Response{ - Success: false, - Error: fmt.Sprintf("gate %s not found", gateID), - } - } - if gate.IssueType != types.TypeGate { - return Response{ - Success: false, - Error: fmt.Sprintf("%s is not a gate (type: %s)", gateID, gate.IssueType), - } - } - - data, _ := json.Marshal(gate) - return Response{ - Success: true, - Data: data, - } -} - -func (s *Server) handleGateClose(req *Request) Response { - var args GateCloseArgs - if err := json.Unmarshal(req.Args, &args); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("invalid gate close args: %v", err), - } - } - - store := s.storage - if store == nil { - return Response{ - Success: false, - Error: "storage not available", - } - } - - ctx := s.reqCtx(req) - - // Resolve partial ID - gateID, err := utils.ResolvePartialID(ctx, store, args.ID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to resolve gate ID: %v", err), - } - } - - // Verify it's 
a gate - gate, err := store.GetIssue(ctx, gateID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to get gate: %v", err), - } - } - if gate == nil { - return Response{ - Success: false, - Error: fmt.Sprintf("gate %s not found", gateID), - } - } - if gate.IssueType != types.TypeGate { - return Response{ - Success: false, - Error: fmt.Sprintf("%s is not a gate (type: %s)", gateID, gate.IssueType), - } - } - - reason := args.Reason - if reason == "" { - reason = "Gate closed" - } - - oldStatus := string(gate.Status) - - if err := store.CloseIssue(ctx, gateID, reason, s.reqActor(req)); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to close gate: %v", err), - } - } - - // Emit rich status change event - s.emitRichMutation(MutationEvent{ - Type: MutationStatus, - IssueID: gateID, - OldStatus: oldStatus, - NewStatus: "closed", - }) - - closedGate, _ := store.GetIssue(ctx, gateID) - data, _ := json.Marshal(closedGate) - return Response{ - Success: true, - Data: data, - } -} - -func (s *Server) handleGateWait(req *Request) Response { - var args GateWaitArgs - if err := json.Unmarshal(req.Args, &args); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("invalid gate wait args: %v", err), - } - } - - store := s.storage - if store == nil { - return Response{ - Success: false, - Error: "storage not available", - } - } - - ctx := s.reqCtx(req) - - // Resolve partial ID - gateID, err := utils.ResolvePartialID(ctx, store, args.ID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to resolve gate ID: %v", err), - } - } - - // Get existing gate - gate, err := store.GetIssue(ctx, gateID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to get gate: %v", err), - } - } - if gate == nil { - return Response{ - Success: false, - Error: fmt.Sprintf("gate %s not found", gateID), - } - } - if gate.IssueType != types.TypeGate { - return 
Response{ - Success: false, - Error: fmt.Sprintf("%s is not a gate (type: %s)", gateID, gate.IssueType), - } - } - if gate.Status == types.StatusClosed { - return Response{ - Success: false, - Error: fmt.Sprintf("gate %s is already closed", gateID), - } - } - - // Add new waiters (avoiding duplicates) - waiterSet := make(map[string]bool) - for _, w := range gate.Waiters { - waiterSet[w] = true - } - newWaiters := []string{} - for _, addr := range args.Waiters { - if !waiterSet[addr] { - newWaiters = append(newWaiters, addr) - waiterSet[addr] = true - } - } - - addedCount := len(newWaiters) - - if addedCount > 0 { - // Update waiters using SQLite directly - sqliteStore, ok := store.(*sqlite.SQLiteStorage) - if !ok { - return Response{ - Success: false, - Error: "gate wait requires SQLite storage", - } - } - - allWaiters := append(gate.Waiters, newWaiters...) - waitersJSON, _ := json.Marshal(allWaiters) - - // Use raw SQL to update the waiters field - _, err = sqliteStore.UnderlyingDB().ExecContext(ctx, `UPDATE issues SET waiters = ?, updated_at = ? 
WHERE id = ?`, - string(waitersJSON), time.Now(), gateID) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to add waiters: %v", err), - } - } - - // Emit mutation event - s.emitMutation(MutationUpdate, gateID, gate.Title, gate.Assignee) - } - - data, _ := json.Marshal(GateWaitResult{AddedCount: addedCount}) - return Response{ - Success: true, - Data: data, - } -} diff --git a/internal/rpc/server_labels_deps_comments.go b/internal/rpc/server_labels_deps_comments.go index f0510131..e48f90ef 100644 --- a/internal/rpc/server_labels_deps_comments.go +++ b/internal/rpc/server_labels_deps_comments.go @@ -41,8 +41,7 @@ func (s *Server) handleDepAdd(req *Request) Response { } // Emit mutation event for event-driven daemon - // Title/assignee empty for dependency operations (would require extra lookup) - s.emitMutation(MutationUpdate, depArgs.FromID, "", "") + s.emitMutation(MutationUpdate, depArgs.FromID) return Response{Success: true} } @@ -74,8 +73,7 @@ func (s *Server) handleSimpleStoreOp(req *Request, argsPtr interface{}, argDesc } // Emit mutation event for event-driven daemon - // Title/assignee empty for simple store operations (would require extra lookup) - s.emitMutation(MutationUpdate, issueID, "", "") + s.emitMutation(MutationUpdate, issueID) return Response{Success: true} } @@ -149,8 +147,7 @@ func (s *Server) handleCommentAdd(req *Request) Response { } // Emit mutation event for event-driven daemon - // Title/assignee empty for comment operations (would require extra lookup) - s.emitMutation(MutationComment, commentArgs.ID, "", "") + s.emitMutation(MutationComment, commentArgs.ID) data, _ := json.Marshal(comment) return Response{ diff --git a/internal/rpc/server_mutations_test.go b/internal/rpc/server_mutations_test.go index 4f111773..2b2c269d 100644 --- a/internal/rpc/server_mutations_test.go +++ b/internal/rpc/server_mutations_test.go @@ -13,7 +13,7 @@ func TestEmitMutation(t *testing.T) { server := 
NewServer("/tmp/test.sock", store, "/tmp", "/tmp/test.db") // Emit a mutation - server.emitMutation(MutationCreate, "bd-123", "Test Issue", "") + server.emitMutation(MutationCreate, "bd-123") // Check that mutation was stored in buffer mutations := server.GetRecentMutations(0) @@ -45,14 +45,14 @@ func TestGetRecentMutations_TimestampFiltering(t *testing.T) { server := NewServer("/tmp/test.sock", store, "/tmp", "/tmp/test.db") // Emit mutations with delays - server.emitMutation(MutationCreate, "bd-1", "Issue 1", "") + server.emitMutation(MutationCreate, "bd-1") time.Sleep(10 * time.Millisecond) checkpoint := time.Now().UnixMilli() time.Sleep(10 * time.Millisecond) - server.emitMutation(MutationUpdate, "bd-2", "Issue 2", "") - server.emitMutation(MutationUpdate, "bd-3", "Issue 3", "") + server.emitMutation(MutationUpdate, "bd-2") + server.emitMutation(MutationUpdate, "bd-3") // Get mutations after checkpoint mutations := server.GetRecentMutations(checkpoint) @@ -82,7 +82,7 @@ func TestGetRecentMutations_CircularBuffer(t *testing.T) { // Emit more than maxMutationBuffer (100) mutations for i := 0; i < 150; i++ { - server.emitMutation(MutationCreate, "bd-"+string(rune(i)), "", "") + server.emitMutation(MutationCreate, "bd-"+string(rune(i))) time.Sleep(time.Millisecond) // Ensure different timestamps } @@ -110,7 +110,7 @@ func TestGetRecentMutations_ConcurrentAccess(t *testing.T) { // Writer goroutine go func() { for i := 0; i < 50; i++ { - server.emitMutation(MutationUpdate, "bd-write", "", "") + server.emitMutation(MutationUpdate, "bd-write") time.Sleep(time.Millisecond) } done <- true @@ -141,11 +141,11 @@ func TestHandleGetMutations(t *testing.T) { server := NewServer("/tmp/test.sock", store, "/tmp", "/tmp/test.db") // Emit some mutations - server.emitMutation(MutationCreate, "bd-1", "Issue 1", "") + server.emitMutation(MutationCreate, "bd-1") time.Sleep(10 * time.Millisecond) checkpoint := time.Now().UnixMilli() time.Sleep(10 * time.Millisecond) - 
server.emitMutation(MutationUpdate, "bd-2", "Issue 2", "") + server.emitMutation(MutationUpdate, "bd-2") // Create RPC request args := GetMutationsArgs{Since: checkpoint} @@ -213,7 +213,7 @@ func TestMutationEventTypes(t *testing.T) { } for _, mutationType := range types { - server.emitMutation(mutationType, "bd-test", "", "") + server.emitMutation(mutationType, "bd-test") } mutations := server.GetRecentMutations(0) @@ -305,7 +305,7 @@ func TestMutationTimestamps(t *testing.T) { server := NewServer("/tmp/test.sock", store, "/tmp", "/tmp/test.db") before := time.Now() - server.emitMutation(MutationCreate, "bd-123", "Test Issue", "") + server.emitMutation(MutationCreate, "bd-123") after := time.Now() mutations := server.GetRecentMutations(0) @@ -327,7 +327,7 @@ func TestEmitMutation_NonBlocking(t *testing.T) { // Fill the buffer (default size is 512 from BEADS_MUTATION_BUFFER or default) for i := 0; i < 600; i++ { // This should not block even when channel is full - server.emitMutation(MutationCreate, "bd-test", "", "") + server.emitMutation(MutationCreate, "bd-test") } // Verify mutations were still stored in recent buffer diff --git a/internal/rpc/server_routing_validation_diagnostics.go b/internal/rpc/server_routing_validation_diagnostics.go index fc99b0e4..d8965100 100644 --- a/internal/rpc/server_routing_validation_diagnostics.go +++ b/internal/rpc/server_routing_validation_diagnostics.go @@ -219,23 +219,8 @@ func (s *Server) handleRequest(req *Request) Response { resp = s.handleEpicStatus(req) case OpGetMutations: resp = s.handleGetMutations(req) - case OpGetMoleculeProgress: - resp = s.handleGetMoleculeProgress(req) - case OpGetWorkerStatus: - resp = s.handleGetWorkerStatus(req) case OpShutdown: resp = s.handleShutdown(req) - // Gate operations (bd-likt) - case OpGateCreate: - resp = s.handleGateCreate(req) - case OpGateList: - resp = s.handleGateList(req) - case OpGateShow: - resp = s.handleGateShow(req) - case OpGateClose: - resp = s.handleGateClose(req) - 
case OpGateWait: - resp = s.handleGateWait(req) default: s.metrics.RecordError(req.Operation) return Response{ @@ -394,107 +379,3 @@ func (s *Server) handleMetrics(_ *Request) Response { Data: data, } } - -func (s *Server) handleGetWorkerStatus(req *Request) Response { - ctx := s.reqCtx(req) - - // Parse optional args - var args GetWorkerStatusArgs - if len(req.Args) > 0 { - if err := json.Unmarshal(req.Args, &args); err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("invalid args: %v", err), - } - } - } - - // Build filter: find all in_progress issues with assignees - filter := types.IssueFilter{ - Status: func() *types.Status { s := types.StatusInProgress; return &s }(), - } - if args.Assignee != "" { - filter.Assignee = &args.Assignee - } - - // Get all in_progress issues (potential workers) - issues, err := s.storage.SearchIssues(ctx, "", filter) - if err != nil { - return Response{ - Success: false, - Error: fmt.Sprintf("failed to search issues: %v", err), - } - } - - var workers []WorkerStatus - for _, issue := range issues { - // Skip issues without assignees - if issue.Assignee == "" { - continue - } - - worker := WorkerStatus{ - Assignee: issue.Assignee, - LastActivity: issue.UpdatedAt.Format(time.RFC3339), - Status: string(issue.Status), - } - - // Check if this issue is a child of a molecule/epic (has parent-child dependency) - deps, err := s.storage.GetDependencyRecords(ctx, issue.ID) - if err == nil { - for _, dep := range deps { - if dep.Type == types.DepParentChild { - // This issue is a child - get the parent molecule - parentIssue, err := s.storage.GetIssue(ctx, dep.DependsOnID) - if err == nil && parentIssue != nil { - worker.MoleculeID = parentIssue.ID - worker.MoleculeTitle = parentIssue.Title - worker.StepID = issue.ID - worker.StepTitle = issue.Title - - // Count total steps and determine current step number - // by getting all children of the molecule - children, err := s.storage.GetDependents(ctx, parentIssue.ID) - if err 
== nil { - // Filter to only parent-child dependencies - var steps []*types.Issue - for _, child := range children { - childDeps, err := s.storage.GetDependencyRecords(ctx, child.ID) - if err == nil { - for _, childDep := range childDeps { - if childDep.Type == types.DepParentChild && childDep.DependsOnID == parentIssue.ID { - steps = append(steps, child) - break - } - } - } - } - worker.TotalSteps = len(steps) - - // Find current step number (1-indexed) - for i, step := range steps { - if step.ID == issue.ID { - worker.CurrentStep = i + 1 - break - } - } - } - } - break // Found the parent, no need to check other deps - } - } - } - - workers = append(workers, worker) - } - - resp := GetWorkerStatusResponse{ - Workers: workers, - } - - data, _ := json.Marshal(resp) - return Response{ - Success: true, - Data: data, - } -} diff --git a/internal/rpc/worker_status_test.go b/internal/rpc/worker_status_test.go deleted file mode 100644 index 7adf284b..00000000 --- a/internal/rpc/worker_status_test.go +++ /dev/null @@ -1,314 +0,0 @@ -package rpc - -import ( - "context" - "testing" - "time" - - "github.com/steveyegge/beads/internal/types" -) - -func TestGetWorkerStatus_NoWorkers(t *testing.T) { - _, client, cleanup := setupTestServer(t) - defer cleanup() - - // With no in_progress issues assigned, should return empty list - result, err := client.GetWorkerStatus(&GetWorkerStatusArgs{}) - if err != nil { - t.Fatalf("GetWorkerStatus failed: %v", err) - } - - if len(result.Workers) != 0 { - t.Errorf("expected 0 workers, got %d", len(result.Workers)) - } -} - -func TestGetWorkerStatus_SingleWorker(t *testing.T) { - server, client, cleanup := setupTestServer(t) - defer cleanup() - - ctx := context.Background() - - // Create an in_progress issue with an assignee - issue := &types.Issue{ - ID: "bd-test1", - Title: "Test task", - Status: types.StatusInProgress, - IssueType: types.TypeTask, - Priority: 2, - Assignee: "worker1", - CreatedAt: time.Now(), - UpdatedAt: time.Now(), - } - 
-	if err := server.storage.CreateIssue(ctx, issue, "test"); err != nil {
-		t.Fatalf("failed to create issue: %v", err)
-	}
-
-	// Query worker status
-	result, err := client.GetWorkerStatus(&GetWorkerStatusArgs{})
-	if err != nil {
-		t.Fatalf("GetWorkerStatus failed: %v", err)
-	}
-
-	if len(result.Workers) != 1 {
-		t.Fatalf("expected 1 worker, got %d", len(result.Workers))
-	}
-
-	worker := result.Workers[0]
-	if worker.Assignee != "worker1" {
-		t.Errorf("expected assignee 'worker1', got '%s'", worker.Assignee)
-	}
-	if worker.Status != "in_progress" {
-		t.Errorf("expected status 'in_progress', got '%s'", worker.Status)
-	}
-	if worker.LastActivity == "" {
-		t.Error("expected last activity to be set")
-	}
-	// Not part of a molecule, so these should be empty
-	if worker.MoleculeID != "" {
-		t.Errorf("expected empty molecule ID, got '%s'", worker.MoleculeID)
-	}
-}
-
-func TestGetWorkerStatus_WithMolecule(t *testing.T) {
-	server, client, cleanup := setupTestServer(t)
-	defer cleanup()
-
-	ctx := context.Background()
-
-	// Create a molecule (epic)
-	molecule := &types.Issue{
-		ID:        "bd-mol1",
-		Title:     "Test Molecule",
-		Status:    types.StatusOpen,
-		IssueType: types.TypeEpic,
-		Priority:  2,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-	}
-
-	if err := server.storage.CreateIssue(ctx, molecule, "test"); err != nil {
-		t.Fatalf("failed to create molecule: %v", err)
-	}
-
-	// Create step 1 (completed)
-	step1 := &types.Issue{
-		ID:        "bd-step1",
-		Title:     "Step 1: Setup",
-		Status:    types.StatusClosed,
-		IssueType: types.TypeTask,
-		Priority:  2,
-		Assignee:  "worker1",
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-		ClosedAt:  func() *time.Time { t := time.Now(); return &t }(),
-	}
-
-	if err := server.storage.CreateIssue(ctx, step1, "test"); err != nil {
-		t.Fatalf("failed to create step1: %v", err)
-	}
-
-	// Create step 2 (current step - in progress)
-	step2 := &types.Issue{
-		ID:        "bd-step2",
-		Title:     "Step 2: Implementation",
-		Status:    types.StatusInProgress,
-		IssueType: types.TypeTask,
-		Priority:  2,
-		Assignee:  "worker1",
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-	}
-
-	if err := server.storage.CreateIssue(ctx, step2, "test"); err != nil {
-		t.Fatalf("failed to create step2: %v", err)
-	}
-
-	// Create step 3 (pending)
-	step3 := &types.Issue{
-		ID:        "bd-step3",
-		Title:     "Step 3: Testing",
-		Status:    types.StatusOpen,
-		IssueType: types.TypeTask,
-		Priority:  2,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-	}
-
-	if err := server.storage.CreateIssue(ctx, step3, "test"); err != nil {
-		t.Fatalf("failed to create step3: %v", err)
-	}
-
-	// Add parent-child dependencies (steps depend on molecule)
-	for _, stepID := range []string{"bd-step1", "bd-step2", "bd-step3"} {
-		dep := &types.Dependency{
-			IssueID:     stepID,
-			DependsOnID: "bd-mol1",
-			Type:        types.DepParentChild,
-			CreatedAt:   time.Now(),
-			CreatedBy:   "test",
-		}
-		if err := server.storage.AddDependency(ctx, dep, "test"); err != nil {
-			t.Fatalf("failed to add dependency for %s: %v", stepID, err)
-		}
-	}
-
-	// Query worker status
-	result, err := client.GetWorkerStatus(&GetWorkerStatusArgs{})
-	if err != nil {
-		t.Fatalf("GetWorkerStatus failed: %v", err)
-	}
-
-	if len(result.Workers) != 1 {
-		t.Fatalf("expected 1 worker (only in_progress issues), got %d", len(result.Workers))
-	}
-
-	worker := result.Workers[0]
-	if worker.Assignee != "worker1" {
-		t.Errorf("expected assignee 'worker1', got '%s'", worker.Assignee)
-	}
-	if worker.MoleculeID != "bd-mol1" {
-		t.Errorf("expected molecule ID 'bd-mol1', got '%s'", worker.MoleculeID)
-	}
-	if worker.MoleculeTitle != "Test Molecule" {
-		t.Errorf("expected molecule title 'Test Molecule', got '%s'", worker.MoleculeTitle)
-	}
-	if worker.StepID != "bd-step2" {
-		t.Errorf("expected step ID 'bd-step2', got '%s'", worker.StepID)
-	}
-	if worker.StepTitle != "Step 2: Implementation" {
-		t.Errorf("expected step title 'Step 2: Implementation', got '%s'", worker.StepTitle)
-	}
-	if worker.TotalSteps != 3 {
-		t.Errorf("expected 3 total steps, got %d", worker.TotalSteps)
-	}
-	// Note: CurrentStep ordering depends on how GetDependents orders results
-	// Just verify it's set
-	if worker.CurrentStep < 1 || worker.CurrentStep > 3 {
-		t.Errorf("expected current step between 1 and 3, got %d", worker.CurrentStep)
-	}
-}
-
-func TestGetWorkerStatus_FilterByAssignee(t *testing.T) {
-	server, client, cleanup := setupTestServer(t)
-	defer cleanup()
-
-	ctx := context.Background()
-
-	// Create issues for two different workers
-	issue1 := &types.Issue{
-		ID:        "bd-test1",
-		Title:     "Task for worker1",
-		Status:    types.StatusInProgress,
-		IssueType: types.TypeTask,
-		Priority:  2,
-		Assignee:  "worker1",
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-	}
-
-	issue2 := &types.Issue{
-		ID:        "bd-test2",
-		Title:     "Task for worker2",
-		Status:    types.StatusInProgress,
-		IssueType: types.TypeTask,
-		Priority:  2,
-		Assignee:  "worker2",
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-	}
-
-	if err := server.storage.CreateIssue(ctx, issue1, "test"); err != nil {
-		t.Fatalf("failed to create issue1: %v", err)
-	}
-	if err := server.storage.CreateIssue(ctx, issue2, "test"); err != nil {
-		t.Fatalf("failed to create issue2: %v", err)
-	}
-
-	// Query all workers
-	allResult, err := client.GetWorkerStatus(&GetWorkerStatusArgs{})
-	if err != nil {
-		t.Fatalf("GetWorkerStatus (all) failed: %v", err)
-	}
-
-	if len(allResult.Workers) != 2 {
-		t.Errorf("expected 2 workers, got %d", len(allResult.Workers))
-	}
-
-	// Query specific worker
-	filteredResult, err := client.GetWorkerStatus(&GetWorkerStatusArgs{Assignee: "worker1"})
-	if err != nil {
-		t.Fatalf("GetWorkerStatus (filtered) failed: %v", err)
-	}
-
-	if len(filteredResult.Workers) != 1 {
-		t.Fatalf("expected 1 worker, got %d", len(filteredResult.Workers))
-	}
-
-	if filteredResult.Workers[0].Assignee != "worker1" {
-		t.Errorf("expected assignee 'worker1', got '%s'", filteredResult.Workers[0].Assignee)
-	}
-}
-
-func TestGetWorkerStatus_OnlyInProgressIssues(t *testing.T) {
-	server, client, cleanup := setupTestServer(t)
-	defer cleanup()
-
-	ctx := context.Background()
-
-	// Create issues with different statuses
-	openIssue := &types.Issue{
-		ID:        "bd-open",
-		Title:     "Open task",
-		Status:    types.StatusOpen,
-		IssueType: types.TypeTask,
-		Priority:  2,
-		Assignee:  "worker1",
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-	}
-
-	inProgressIssue := &types.Issue{
-		ID:        "bd-inprog",
-		Title:     "In progress task",
-		Status:    types.StatusInProgress,
-		IssueType: types.TypeTask,
-		Priority:  2,
-		Assignee:  "worker2",
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-	}
-
-	closedIssue := &types.Issue{
-		ID:        "bd-closed",
-		Title:     "Closed task",
-		Status:    types.StatusClosed,
-		IssueType: types.TypeTask,
-		Priority:  2,
-		Assignee:  "worker3",
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
-		ClosedAt:  func() *time.Time { t := time.Now(); return &t }(),
-	}
-
-	for _, issue := range []*types.Issue{openIssue, inProgressIssue, closedIssue} {
-		if err := server.storage.CreateIssue(ctx, issue, "test"); err != nil {
-			t.Fatalf("failed to create issue %s: %v", issue.ID, err)
-		}
-	}
-
-	// Query worker status - should only return in_progress issues
-	result, err := client.GetWorkerStatus(&GetWorkerStatusArgs{})
-	if err != nil {
-		t.Fatalf("GetWorkerStatus failed: %v", err)
-	}
-
-	if len(result.Workers) != 1 {
-		t.Fatalf("expected 1 worker (only in_progress), got %d", len(result.Workers))
-	}
-
-	if result.Workers[0].Assignee != "worker2" {
-		t.Errorf("expected assignee 'worker2', got '%s'", result.Workers[0].Assignee)
-	}
-}
diff --git a/internal/storage/memory/memory.go b/internal/storage/memory/memory.go
index 60ba8268..c44882d0 100644
--- a/internal/storage/memory/memory.go
+++ b/internal/storage/memory/memory.go
@@ -935,20 +935,6 @@ func (m *MemoryStorage) GetReadyWork(ctx context.Context, filter types.WorkFilte
 			continue
 		}
 
-		// Type filtering (gt-7xtn)
-		if filter.Type != "" {
-			if string(issue.IssueType) != filter.Type {
-				continue
-			}
-		} else {
-			// Exclude workflow types from ready work by default
-			// These are internal workflow items, not work for polecats to claim
-			switch issue.IssueType {
-			case types.TypeMergeRequest, types.TypeGate, types.TypeMolecule, types.TypeMessage:
-				continue
-			}
-		}
-
 		// Unassigned takes precedence over Assignee filter
 		if filter.Unassigned {
 			if issue.Assignee != "" {
diff --git a/internal/storage/sqlite/blocked_cache.go b/internal/storage/sqlite/blocked_cache.go
index 93d63f03..e592d507 100644
--- a/internal/storage/sqlite/blocked_cache.go
+++ b/internal/storage/sqlite/blocked_cache.go
@@ -246,22 +246,3 @@ func (s *SQLiteStorage) rebuildBlockedCache(ctx context.Context, exec execer) er
 func (s *SQLiteStorage) invalidateBlockedCache(ctx context.Context, exec execer) error {
 	return s.rebuildBlockedCache(ctx, exec)
 }
-
-// GetBlockedIssueIDs returns all issue IDs currently in the blocked cache
-func (s *SQLiteStorage) GetBlockedIssueIDs(ctx context.Context) ([]string, error) {
-	rows, err := s.db.QueryContext(ctx, "SELECT issue_id FROM blocked_issues_cache")
-	if err != nil {
-		return nil, fmt.Errorf("failed to query blocked_issues_cache: %w", err)
-	}
-	defer rows.Close()
-
-	var ids []string
-	for rows.Next() {
-		var id string
-		if err := rows.Scan(&id); err != nil {
-			return nil, fmt.Errorf("failed to scan blocked issue ID: %w", err)
-		}
-		ids = append(ids, id)
-	}
-	return ids, rows.Err()
-}
diff --git a/internal/storage/sqlite/multirepo.go b/internal/storage/sqlite/multirepo.go
index 74f4890b..d8826bdc 100644
--- a/internal/storage/sqlite/multirepo.go
+++ b/internal/storage/sqlite/multirepo.go
@@ -330,9 +330,6 @@ func (s *SQLiteStorage) upsertIssueInTx(ctx context.Context, tx *sql.Tx, issue *
 	}
 
 	if existingHash != issue.ContentHash {
-		// Pinned field fix (bd-phtv): Use COALESCE(NULLIF(?, 0), pinned) to preserve
-		// existing pinned=1 when incoming pinned=0 (which means field was absent in
-		// JSONL due to omitempty). This prevents auto-import from resetting pinned issues.
 		_, err = tx.ExecContext(ctx, `
 			UPDATE issues
 			SET content_hash = ?, title = ?, description = ?, design = ?,
@@ -340,7 +337,7 @@ func (s *SQLiteStorage) upsertIssueInTx(ctx context.Context, tx *sql.Tx, issue *
 			    issue_type = ?, assignee = ?, estimated_minutes = ?,
 			    updated_at = ?, closed_at = ?, external_ref = ?, source_repo = ?,
 			    deleted_at = ?, deleted_by = ?, delete_reason = ?, original_type = ?,
-			    sender = ?, ephemeral = ?, pinned = COALESCE(NULLIF(?, 0), pinned), is_template = ?,
+			    sender = ?, ephemeral = ?, pinned = ?, is_template = ?,
 			    await_type = ?, await_id = ?, timeout_ns = ?, waiters = ?
 			WHERE id = ?
 		`,
diff --git a/internal/storage/sqlite/queries.go b/internal/storage/sqlite/queries.go
index 6ab807f0..cc8d9df9 100644
--- a/internal/storage/sqlite/queries.go
+++ b/internal/storage/sqlite/queries.go
@@ -16,49 +16,6 @@ import (
 // Graph edges (replies-to, relates-to, duplicates, supersedes) are now managed
 // exclusively through the dependency API. Use AddDependency() instead.
 
-// parseNullableTimeString parses a nullable time string from database TEXT columns.
-// The ncruces/go-sqlite3 driver only auto-converts TEXT→time.Time for columns declared
-// as DATETIME/DATE/TIME/TIMESTAMP. For TEXT columns (like deleted_at), we must parse manually.
-// Supports RFC3339, RFC3339Nano, and SQLite's native format.
-func parseNullableTimeString(ns sql.NullString) *time.Time {
-	if !ns.Valid || ns.String == "" {
-		return nil
-	}
-	// Try RFC3339Nano first (more precise), then RFC3339, then SQLite format
-	for _, layout := range []string{time.RFC3339Nano, time.RFC3339, "2006-01-02 15:04:05"} {
-		if t, err := time.Parse(layout, ns.String); err == nil {
-			return &t
-		}
-	}
-	return nil // Unparseable - shouldn't happen with valid data
-}
-
-// parseJSONStringArray parses a JSON string array from database TEXT column.
-// Returns empty slice if the string is empty or invalid JSON.
-func parseJSONStringArray(s string) []string {
-	if s == "" {
-		return nil
-	}
-	var result []string
-	if err := json.Unmarshal([]byte(s), &result); err != nil {
-		return nil // Invalid JSON - shouldn't happen with valid data
-	}
-	return result
-}
-
-// formatJSONStringArray formats a string slice as JSON for database storage.
-// Returns empty string if the slice is nil or empty.
-func formatJSONStringArray(arr []string) string {
-	if len(arr) == 0 {
-		return ""
-	}
-	data, err := json.Marshal(arr)
-	if err != nil {
-		return ""
-	}
-	return string(data)
-}
-
 // REMOVED (bd-8e05): getNextIDForPrefix and AllocateNextID - sequential ID generation
 // no longer needed with hash-based IDs
 // Migration functions moved to migrations.go (bd-fc2d, bd-b245)
@@ -368,219 +325,6 @@ func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue,
 	return &issue, nil
 }
-
-// GetCloseReason retrieves the close reason from the most recent closed event for an issue
-func (s *SQLiteStorage) GetCloseReason(ctx context.Context, issueID string) (string, error) {
-	var comment sql.NullString
-	err := s.db.QueryRowContext(ctx, `
-		SELECT comment FROM events
-		WHERE issue_id = ? AND event_type = ?
-		ORDER BY created_at DESC
-		LIMIT 1
-	`, issueID, types.EventClosed).Scan(&comment)
-
-	if err == sql.ErrNoRows {
-		return "", nil
-	}
-	if err != nil {
-		return "", fmt.Errorf("failed to get close reason: %w", err)
-	}
-	if comment.Valid {
-		return comment.String, nil
-	}
-	return "", nil
-}
-
-// GetCloseReasonsForIssues retrieves close reasons for multiple issues in a single query
-func (s *SQLiteStorage) GetCloseReasonsForIssues(ctx context.Context, issueIDs []string) (map[string]string, error) {
-	result := make(map[string]string)
-	if len(issueIDs) == 0 {
-		return result, nil
-	}
-
-	// Build placeholders for IN clause
-	placeholders := make([]string, len(issueIDs))
-	args := make([]interface{}, len(issueIDs)+1)
-	args[0] = types.EventClosed
-	for i, id := range issueIDs {
-		placeholders[i] = "?"
-		args[i+1] = id
-	}
-
-	// Use a subquery to get the most recent closed event for each issue
-	// #nosec G201 - safe SQL with controlled formatting
-	query := fmt.Sprintf(`
-		SELECT e.issue_id, e.comment
-		FROM events e
-		INNER JOIN (
-			SELECT issue_id, MAX(created_at) as max_created_at
-			FROM events
-			WHERE event_type = ? AND issue_id IN (%s)
-			GROUP BY issue_id
-		) latest ON e.issue_id = latest.issue_id AND e.created_at = latest.max_created_at
-		WHERE e.event_type = ?
-	`, strings.Join(placeholders, ", "))
-
-	// Append event_type again for the outer WHERE clause
-	args = append(args, types.EventClosed)
-
-	rows, err := s.db.QueryContext(ctx, query, args...)
-	if err != nil {
-		return nil, fmt.Errorf("failed to get close reasons: %w", err)
-	}
-	defer func() { _ = rows.Close() }()
-
-	for rows.Next() {
-		var issueID string
-		var comment sql.NullString
-		if err := rows.Scan(&issueID, &comment); err != nil {
-			return nil, fmt.Errorf("failed to scan close reason: %w", err)
-		}
-		if comment.Valid && comment.String != "" {
-			result[issueID] = comment.String
-		}
-	}
-
-	return result, nil
-}
-
-// GetIssueByExternalRef retrieves an issue by external reference
-func (s *SQLiteStorage) GetIssueByExternalRef(ctx context.Context, externalRef string) (*types.Issue, error) {
-	var issue types.Issue
-	var closedAt sql.NullTime
-	var estimatedMinutes sql.NullInt64
-	var assignee sql.NullString
-	var externalRefCol sql.NullString
-	var compactedAt sql.NullTime
-	var originalSize sql.NullInt64
-	var contentHash sql.NullString
-	var compactedAtCommit sql.NullString
-	var sourceRepo sql.NullString
-	var closeReason sql.NullString
-	var deletedAt sql.NullString // TEXT column, not DATETIME - must parse manually
-	var deletedBy sql.NullString
-	var deleteReason sql.NullString
-	var originalType sql.NullString
-	// Messaging fields (bd-kwro)
-	var sender sql.NullString
-	var wisp sql.NullInt64
-	// Pinned field (bd-7h5)
-	var pinned sql.NullInt64
-	// Template field (beads-1ra)
-	var isTemplate sql.NullInt64
-	// Gate fields (bd-udsi)
-	var awaitType sql.NullString
-	var awaitID sql.NullString
-	var timeoutNs sql.NullInt64
-	var waiters sql.NullString
-
-	err := s.db.QueryRowContext(ctx, `
-		SELECT id, content_hash, title, description, design, acceptance_criteria, notes,
-		       status, priority, issue_type, assignee, estimated_minutes,
-		       created_at, updated_at, closed_at, external_ref,
-		       compaction_level, compacted_at, compacted_at_commit, original_size, source_repo, close_reason,
-		       deleted_at, deleted_by, delete_reason, original_type,
-		       sender, ephemeral, pinned, is_template,
-		       await_type, await_id, timeout_ns, waiters
-		FROM issues
-		WHERE external_ref = ?
-	`, externalRef).Scan(
-		&issue.ID, &contentHash, &issue.Title, &issue.Description, &issue.Design,
-		&issue.AcceptanceCriteria, &issue.Notes, &issue.Status,
-		&issue.Priority, &issue.IssueType, &assignee, &estimatedMinutes,
-		&issue.CreatedAt, &issue.UpdatedAt, &closedAt, &externalRefCol,
-		&issue.CompactionLevel, &compactedAt, &compactedAtCommit, &originalSize, &sourceRepo, &closeReason,
-		&deletedAt, &deletedBy, &deleteReason, &originalType,
-		&sender, &wisp, &pinned, &isTemplate,
-		&awaitType, &awaitID, &timeoutNs, &waiters,
-	)
-
-	if err == sql.ErrNoRows {
-		return nil, nil
-	}
-	if err != nil {
-		return nil, fmt.Errorf("failed to get issue by external_ref: %w", err)
-	}
-
-	if contentHash.Valid {
-		issue.ContentHash = contentHash.String
-	}
-	if closedAt.Valid {
-		issue.ClosedAt = &closedAt.Time
-	}
-	if estimatedMinutes.Valid {
-		mins := int(estimatedMinutes.Int64)
-		issue.EstimatedMinutes = &mins
-	}
-	if assignee.Valid {
-		issue.Assignee = assignee.String
-	}
-	if externalRefCol.Valid {
-		issue.ExternalRef = &externalRefCol.String
-	}
-	if compactedAt.Valid {
-		issue.CompactedAt = &compactedAt.Time
-	}
-	if compactedAtCommit.Valid {
-		issue.CompactedAtCommit = &compactedAtCommit.String
-	}
-	if originalSize.Valid {
-		issue.OriginalSize = int(originalSize.Int64)
-	}
-	if sourceRepo.Valid {
-		issue.SourceRepo = sourceRepo.String
-	}
-	if closeReason.Valid {
-		issue.CloseReason = closeReason.String
-	}
-	issue.DeletedAt = parseNullableTimeString(deletedAt)
-	if deletedBy.Valid {
-		issue.DeletedBy = deletedBy.String
-	}
-	if deleteReason.Valid {
-		issue.DeleteReason = deleteReason.String
-	}
-	if originalType.Valid {
-		issue.OriginalType = originalType.String
-	}
-	// Messaging fields (bd-kwro)
-	if sender.Valid {
-		issue.Sender = sender.String
-	}
-	if wisp.Valid && wisp.Int64 != 0 {
-		issue.Wisp = true
-	}
-	// Pinned field (bd-7h5)
-	if pinned.Valid && pinned.Int64 != 0 {
-		issue.Pinned = true
-	}
-	// Template field (beads-1ra)
-	if isTemplate.Valid && isTemplate.Int64 != 0 {
-		issue.IsTemplate = true
-	}
-	// Gate fields (bd-udsi)
-	if awaitType.Valid {
-		issue.AwaitType = awaitType.String
-	}
-	if awaitID.Valid {
-		issue.AwaitID = awaitID.String
-	}
-	if timeoutNs.Valid {
-		issue.Timeout = time.Duration(timeoutNs.Int64)
-	}
-	if waiters.Valid && waiters.String != "" {
-		issue.Waiters = parseJSONStringArray(waiters.String)
-	}
-
-	// Fetch labels for this issue
-	labels, err := s.GetLabels(ctx, issue.ID)
-	if err != nil {
-		return nil, fmt.Errorf("failed to get labels: %w", err)
-	}
-	issue.Labels = labels
-
-	return &issue, nil
-}
-
 // Allowed fields for update to prevent SQL injection
 var allowedUpdateFields = map[string]bool{
 	"status": true,
@@ -847,146 +591,6 @@ func (s *SQLiteStorage) UpdateIssue(ctx context.Context, id string, updates map[
 	return tx.Commit()
 }
-
-// UpdateIssueID updates an issue ID and all its text fields in a single transaction
-func (s *SQLiteStorage) UpdateIssueID(ctx context.Context, oldID, newID string, issue *types.Issue, actor string) error {
-	// Get exclusive connection to ensure PRAGMA applies
-	conn, err := s.db.Conn(ctx)
-	if err != nil {
-		return fmt.Errorf("failed to get connection: %w", err)
-	}
-	defer func() { _ = conn.Close() }()
-
-	// Disable foreign keys on this specific connection
-	_, err = conn.ExecContext(ctx, `PRAGMA foreign_keys = OFF`)
-	if err != nil {
-		return fmt.Errorf("failed to disable foreign keys: %w", err)
-	}
-
-	tx, err := conn.BeginTx(ctx, nil)
-	if err != nil {
-		return fmt.Errorf("failed to begin transaction: %w", err)
-	}
-	defer func() { _ = tx.Rollback() }()
-
-	result, err := tx.ExecContext(ctx, `
-		UPDATE issues
-		SET id = ?, title = ?, description = ?, design = ?, acceptance_criteria = ?, notes = ?, updated_at = ?
-		WHERE id = ?
-	`, newID, issue.Title, issue.Description, issue.Design, issue.AcceptanceCriteria, issue.Notes, time.Now(), oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update issue ID: %w", err)
-	}
-
-	rows, err := result.RowsAffected()
-	if err != nil {
-		return fmt.Errorf("failed to get rows affected: %w", err)
-	}
-	if rows == 0 {
-		return fmt.Errorf("issue not found: %s", oldID)
-	}
-
-	_, err = tx.ExecContext(ctx, `UPDATE dependencies SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update issue_id in dependencies: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `UPDATE dependencies SET depends_on_id = ? WHERE depends_on_id = ?`, newID, oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update depends_on_id in dependencies: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `UPDATE events SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update events: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `UPDATE labels SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update labels: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `UPDATE comments SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update comments: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `
-		UPDATE dirty_issues SET issue_id = ? WHERE issue_id = ?
-	`, newID, oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update dirty_issues: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `UPDATE issue_snapshots SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update issue_snapshots: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `UPDATE compaction_snapshots SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
-	if err != nil {
-		return fmt.Errorf("failed to update compaction_snapshots: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `
-		INSERT INTO dirty_issues (issue_id, marked_at)
-		VALUES (?, ?)
-		ON CONFLICT (issue_id) DO UPDATE SET marked_at = excluded.marked_at
-	`, newID, time.Now())
-	if err != nil {
-		return fmt.Errorf("failed to mark issue dirty: %w", err)
-	}
-
-	_, err = tx.ExecContext(ctx, `
-		INSERT INTO events (issue_id, event_type, actor, old_value, new_value)
-		VALUES (?, 'renamed', ?, ?, ?)
-	`, newID, actor, oldID, newID)
-	if err != nil {
-		return fmt.Errorf("failed to record rename event: %w", err)
-	}
-
-	return tx.Commit()
-}
-
-// RenameDependencyPrefix updates the prefix in all dependency records
-// GH#630: This was previously a no-op, causing dependencies to break after rename-prefix
-func (s *SQLiteStorage) RenameDependencyPrefix(ctx context.Context, oldPrefix, newPrefix string) error {
-	// Update issue_id column
-	_, err := s.db.ExecContext(ctx, `
-		UPDATE dependencies
-		SET issue_id = ? || substr(issue_id, length(?) + 1)
-		WHERE issue_id LIKE ? || '%'
-	`, newPrefix, oldPrefix, oldPrefix)
-	if err != nil {
-		return fmt.Errorf("failed to update issue_id in dependencies: %w", err)
-	}
-
-	// Update depends_on_id column
-	_, err = s.db.ExecContext(ctx, `
-		UPDATE dependencies
-		SET depends_on_id = ? || substr(depends_on_id, length(?) + 1)
-		WHERE depends_on_id LIKE ? || '%'
-	`, newPrefix, oldPrefix, oldPrefix)
-	if err != nil {
-		return fmt.Errorf("failed to update depends_on_id in dependencies: %w", err)
-	}
-
-	return nil
-}
-
-// RenameCounterPrefix is a no-op with hash-based IDs (bd-8e05)
-// Kept for backward compatibility with rename-prefix command
-func (s *SQLiteStorage) RenameCounterPrefix(ctx context.Context, oldPrefix, newPrefix string) error {
-	// Hash-based IDs don't use counters, so nothing to update
-	return nil
-}
-
-// ResetCounter is a no-op with hash-based IDs (bd-8e05)
-// Kept for backward compatibility
-func (s *SQLiteStorage) ResetCounter(ctx context.Context, prefix string) error {
-	// Hash-based IDs don't use counters, so nothing to reset
-	return nil
-}
-
 // CloseIssue closes an issue with a reason
 func (s *SQLiteStorage) CloseIssue(ctx context.Context, id string, reason string, actor string) error {
 	now := time.Now()
@@ -1044,661 +648,3 @@ func (s *SQLiteStorage) CloseIssue(ctx context.Context, id string, reason string
 	return tx.Commit()
 }
-
-// CreateTombstone converts an existing issue to a tombstone record.
-// This is a soft-delete that preserves the issue in the database with status="tombstone".
-// The issue will still appear in exports but be excluded from normal queries.
-// Dependencies must be removed separately before calling this method.
-func (s *SQLiteStorage) CreateTombstone(ctx context.Context, id string, actor string, reason string) error {
-	// Get the issue to preserve its original type
-	issue, err := s.GetIssue(ctx, id)
-	if err != nil {
-		return fmt.Errorf("failed to get issue: %w", err)
-	}
-	if issue == nil {
-		return fmt.Errorf("issue not found: %s", id)
-	}
-
-	tx, err := s.db.BeginTx(ctx, nil)
-	if err != nil {
-		return fmt.Errorf("failed to begin transaction: %w", err)
-	}
-	defer func() { _ = tx.Rollback() }()
-
-	now := time.Now()
-	originalType := string(issue.IssueType)
-
-	// Convert issue to tombstone
-	// Note: closed_at must be set to NULL because of CHECK constraint:
-	// (status = 'closed') = (closed_at IS NOT NULL)
-	_, err = tx.ExecContext(ctx, `
-		UPDATE issues
-		SET status = ?,
-		    closed_at = NULL,
-		    deleted_at = ?,
-		    deleted_by = ?,
-		    delete_reason = ?,
-		    original_type = ?,
-		    updated_at = ?
-		WHERE id = ?
-	`, types.StatusTombstone, now, actor, reason, originalType, now, id)
-	if err != nil {
-		return fmt.Errorf("failed to create tombstone: %w", err)
-	}
-
-	// Record tombstone creation event
-	_, err = tx.ExecContext(ctx, `
-		INSERT INTO events (issue_id, event_type, actor, comment)
-		VALUES (?, ?, ?, ?)
-	`, id, "deleted", actor, reason)
-	if err != nil {
-		return fmt.Errorf("failed to record tombstone event: %w", err)
-	}
-
-	// Mark issue as dirty for incremental export
-	_, err = tx.ExecContext(ctx, `
-		INSERT INTO dirty_issues (issue_id, marked_at)
-		VALUES (?, ?)
-		ON CONFLICT (issue_id) DO UPDATE SET marked_at = excluded.marked_at
-	`, id, now)
-	if err != nil {
-		return fmt.Errorf("failed to mark issue dirty: %w", err)
-	}
-
-	// Invalidate blocked issues cache since status changed (bd-5qim)
-	// Tombstone issues don't block others, so this affects blocking calculations
-	if err := s.invalidateBlockedCache(ctx, tx); err != nil {
-		return fmt.Errorf("failed to invalidate blocked cache: %w", err)
-	}
-
-	if err := tx.Commit(); err != nil {
-		return wrapDBError("commit tombstone transaction", err)
-	}
-
-	return nil
-}
-
-// DeleteIssue permanently removes an issue from the database
-func (s *SQLiteStorage) DeleteIssue(ctx context.Context, id string) error {
-	tx, err := s.db.BeginTx(ctx, nil)
-	if err != nil {
-		return fmt.Errorf("failed to begin transaction: %w", err)
-	}
-	defer func() { _ = tx.Rollback() }()
-
-	// Delete dependencies (both directions)
-	_, err = tx.ExecContext(ctx, `DELETE FROM dependencies WHERE issue_id = ? OR depends_on_id = ?`, id, id)
-	if err != nil {
-		return fmt.Errorf("failed to delete dependencies: %w", err)
-	}
-
-	// Delete events
-	_, err = tx.ExecContext(ctx, `DELETE FROM events WHERE issue_id = ?`, id)
-	if err != nil {
-		return fmt.Errorf("failed to delete events: %w", err)
-	}
-
-	// Delete comments (no FK cascade on this table) (bd-687g)
-	_, err = tx.ExecContext(ctx, `DELETE FROM comments WHERE issue_id = ?`, id)
-	if err != nil {
-		return fmt.Errorf("failed to delete comments: %w", err)
-	}
-
-	// Delete from dirty_issues
-	_, err = tx.ExecContext(ctx, `DELETE FROM dirty_issues WHERE issue_id = ?`, id)
-	if err != nil {
-		return fmt.Errorf("failed to delete dirty marker: %w", err)
-	}
-
-	// Delete the issue itself
-	result, err := tx.ExecContext(ctx, `DELETE FROM issues WHERE id = ?`, id)
-	if err != nil {
-		return fmt.Errorf("failed to delete issue: %w", err)
-	}
-
-	rowsAffected, err := result.RowsAffected()
-	if err != nil {
-		return fmt.Errorf("failed to check rows affected: %w", err)
-	}
-	if rowsAffected == 0 {
-		return fmt.Errorf("issue not found: %s", id)
-	}
-
-	if err := tx.Commit(); err != nil {
-		return wrapDBError("commit delete transaction", err)
-	}
-
-	// REMOVED (bd-c7af): Counter sync after deletion - no longer needed with hash IDs
-	return nil
-}
-
-// DeleteIssuesResult contains statistics about a batch deletion operation
-type DeleteIssuesResult struct {
-	DeletedCount      int
-	DependenciesCount int
-	LabelsCount       int
-	EventsCount       int
-	OrphanedIssues    []string
-}
-
-// DeleteIssues deletes multiple issues in a single transaction
-// If cascade is true, recursively deletes dependents
-// If cascade is false but force is true, deletes issues and orphans their dependents
-// If cascade and force are both false, returns an error if any issue has dependents
-// If dryRun is true, only computes statistics without deleting
-func (s *SQLiteStorage) DeleteIssues(ctx context.Context, ids []string, cascade bool, force bool, dryRun bool) (*DeleteIssuesResult, error) {
-	if len(ids) == 0 {
-		return &DeleteIssuesResult{}, nil
-	}
-
-	tx, err := s.db.BeginTx(ctx, nil)
-	if err != nil {
-		return nil, fmt.Errorf("failed to begin transaction: %w", err)
-	}
-	defer func() { _ = tx.Rollback() }()
-
-	idSet := buildIDSet(ids)
-	result := &DeleteIssuesResult{}
-
-	expandedIDs, err := s.resolveDeleteSet(ctx, tx, ids, idSet, cascade, force, result)
-	if err != nil {
-		return nil, wrapDBError("resolve delete set", err)
-	}
-
-	inClause, args := buildSQLInClause(expandedIDs)
-	if err := s.populateDeleteStats(ctx, tx, inClause, args, result); err != nil {
-		return nil, err
-	}
-
-	if dryRun {
-		return result, nil
-	}
-
-	if err := s.executeDelete(ctx, tx, inClause, args, result); err != nil {
-		return nil, err
-	}
-
-	if err := tx.Commit(); err != nil {
-		return nil, fmt.Errorf("failed to commit transaction: %w", err)
-	}
-
-	// REMOVED (bd-c7af): Counter sync after deletion - no longer needed with hash IDs
-
-	return result, nil
-}
-
-func buildIDSet(ids []string) map[string]bool {
-	idSet := make(map[string]bool, len(ids))
-	for _, id := range ids {
-		idSet[id] = true
-	}
-	return idSet
-}
-
-func (s *SQLiteStorage) resolveDeleteSet(ctx context.Context, tx *sql.Tx, ids []string, idSet map[string]bool, cascade bool, force bool, result *DeleteIssuesResult) ([]string, error) {
-	if cascade {
-		return s.expandWithDependents(ctx, tx, ids, idSet)
-	}
-	if !force {
-		return ids, s.validateNoDependents(ctx, tx, ids, idSet, result)
-	}
-	return ids, s.trackOrphanedIssues(ctx, tx, ids, idSet, result)
-}
-
-func (s *SQLiteStorage) expandWithDependents(ctx context.Context, tx *sql.Tx, ids []string, _ map[string]bool) ([]string, error) {
-	allToDelete, err := s.findAllDependentsRecursive(ctx, tx, ids)
-	if err != nil {
-		return nil, fmt.Errorf("failed to find dependents: %w", err)
-	}
-	expandedIDs := make([]string, 0, len(allToDelete))
-	for id := range allToDelete {
-		expandedIDs = append(expandedIDs, id)
-	}
-	return expandedIDs, nil
-}
-
-func (s *SQLiteStorage) validateNoDependents(ctx context.Context, tx *sql.Tx, ids []string, idSet map[string]bool, result *DeleteIssuesResult) error {
-	for _, id := range ids {
-		if err := s.checkSingleIssueValidation(ctx, tx, id, idSet, result); err != nil {
-			return wrapDBError("check dependents", err)
-		}
-	}
-	return nil
-}
-
-func (s *SQLiteStorage) checkSingleIssueValidation(ctx context.Context, tx *sql.Tx, id string, idSet map[string]bool, result *DeleteIssuesResult) error {
-	var depCount int
-	err := tx.QueryRowContext(ctx,
-		`SELECT COUNT(*) FROM dependencies WHERE depends_on_id = ?`, id).Scan(&depCount)
-	if err != nil {
-		return fmt.Errorf("failed to check dependents for %s: %w", id, err)
-	}
-	if depCount == 0 {
-		return nil
-	}
-
-	rows, err := tx.QueryContext(ctx,
-		`SELECT issue_id FROM dependencies WHERE depends_on_id = ?`, id)
-	if err != nil {
-		return fmt.Errorf("failed to get dependents for %s: %w", id, err)
-	}
-	defer func() { _ = rows.Close() }()
-
- hasExternal := false - for rows.Next() { - var depID string - if err := rows.Scan(&depID); err != nil { - return fmt.Errorf("failed to scan dependent: %w", err) - } - if !idSet[depID] { - hasExternal = true - result.OrphanedIssues = append(result.OrphanedIssues, depID) - } - } - - if err := rows.Err(); err != nil { - return fmt.Errorf("failed to iterate dependents for %s: %w", id, err) - } - - if hasExternal { - return fmt.Errorf("issue %s has dependents not in deletion set; use --cascade to delete them or --force to orphan them", id) - } - return nil -} - -func (s *SQLiteStorage) trackOrphanedIssues(ctx context.Context, tx *sql.Tx, ids []string, idSet map[string]bool, result *DeleteIssuesResult) error { - orphanSet := make(map[string]bool) - for _, id := range ids { - if err := s.collectOrphansForID(ctx, tx, id, idSet, orphanSet); err != nil { - return wrapDBError("collect orphans", err) - } - } - for orphanID := range orphanSet { - result.OrphanedIssues = append(result.OrphanedIssues, orphanID) - } - return nil -} - -func (s *SQLiteStorage) collectOrphansForID(ctx context.Context, tx *sql.Tx, id string, idSet map[string]bool, orphanSet map[string]bool) error { - rows, err := tx.QueryContext(ctx, - `SELECT issue_id FROM dependencies WHERE depends_on_id = ?`, id) - if err != nil { - return fmt.Errorf("failed to get dependents for %s: %w", id, err) - } - defer func() { _ = rows.Close() }() - - for rows.Next() { - var depID string - if err := rows.Scan(&depID); err != nil { - return fmt.Errorf("failed to scan dependent: %w", err) - } - if !idSet[depID] { - orphanSet[depID] = true - } - } - return rows.Err() -} - -func buildSQLInClause(ids []string) (string, []interface{}) { - placeholders := make([]string, len(ids)) - args := make([]interface{}, len(ids)) - for i, id := range ids { - placeholders[i] = "?" 
- args[i] = id - } - return strings.Join(placeholders, ","), args -} - -func (s *SQLiteStorage) populateDeleteStats(ctx context.Context, tx *sql.Tx, inClause string, args []interface{}, result *DeleteIssuesResult) error { - counts := []struct { - query string - dest *int - }{ - {fmt.Sprintf(`SELECT COUNT(*) FROM dependencies WHERE issue_id IN (%s) OR depends_on_id IN (%s)`, inClause, inClause), &result.DependenciesCount}, - {fmt.Sprintf(`SELECT COUNT(*) FROM labels WHERE issue_id IN (%s)`, inClause), &result.LabelsCount}, - {fmt.Sprintf(`SELECT COUNT(*) FROM events WHERE issue_id IN (%s)`, inClause), &result.EventsCount}, - } - - for _, c := range counts { - queryArgs := args - if c.dest == &result.DependenciesCount { - queryArgs = append(args, args...) - } - if err := tx.QueryRowContext(ctx, c.query, queryArgs...).Scan(c.dest); err != nil { - return fmt.Errorf("failed to count: %w", err) - } - } - - result.DeletedCount = len(args) - return nil -} - -func (s *SQLiteStorage) executeDelete(ctx context.Context, tx *sql.Tx, inClause string, args []interface{}, result *DeleteIssuesResult) error { - // Note: This method now creates tombstones instead of hard-deleting (bd-3b4) - // Only dependencies are deleted - issues are converted to tombstones - - // 1. Delete dependencies - tombstones don't block other issues - _, err := tx.ExecContext(ctx, - fmt.Sprintf(`DELETE FROM dependencies WHERE issue_id IN (%s) OR depends_on_id IN (%s)`, inClause, inClause), - append(args, args...)...) - if err != nil { - return fmt.Errorf("failed to delete dependencies: %w", err) - } - - // 2. Get issue types before converting to tombstones (need for original_type) - issueTypes := make(map[string]string) - rows, err := tx.QueryContext(ctx, - fmt.Sprintf(`SELECT id, issue_type FROM issues WHERE id IN (%s)`, inClause), - args...) 
- if err != nil { - return fmt.Errorf("failed to get issue types: %w", err) - } - for rows.Next() { - var id, issueType string - if err := rows.Scan(&id, &issueType); err != nil { - _ = rows.Close() // #nosec G104 - error handling not critical in error path - return fmt.Errorf("failed to scan issue type: %w", err) - } - issueTypes[id] = issueType - } - _ = rows.Close() - - // 3. Convert issues to tombstones (only for issues that exist) - // Note: closed_at must be set to NULL because of CHECK constraint: - // (status = 'closed') = (closed_at IS NOT NULL) - now := time.Now() - deletedCount := 0 - for id, originalType := range issueTypes { - execResult, err := tx.ExecContext(ctx, ` - UPDATE issues - SET status = ?, - closed_at = NULL, - deleted_at = ?, - deleted_by = ?, - delete_reason = ?, - original_type = ?, - updated_at = ? - WHERE id = ? - `, types.StatusTombstone, now, "batch delete", "batch delete", originalType, now, id) - if err != nil { - return fmt.Errorf("failed to create tombstone for %s: %w", id, err) - } - - rowsAffected, _ := execResult.RowsAffected() - if rowsAffected == 0 { - continue // Issue doesn't exist, skip - } - deletedCount++ - - // Record tombstone creation event - _, err = tx.ExecContext(ctx, ` - INSERT INTO events (issue_id, event_type, actor, comment) - VALUES (?, ?, ?, ?) - `, id, "deleted", "batch delete", "batch delete") - if err != nil { - return fmt.Errorf("failed to record tombstone event for %s: %w", id, err) - } - - // Mark issue as dirty for incremental export - _, err = tx.ExecContext(ctx, ` - INSERT INTO dirty_issues (issue_id, marked_at) - VALUES (?, ?) - ON CONFLICT (issue_id) DO UPDATE SET marked_at = excluded.marked_at - `, id, now) - if err != nil { - return fmt.Errorf("failed to mark issue dirty for %s: %w", id, err) - } - } - - // 4. 
Invalidate blocked issues cache since statuses changed (bd-5qim) - if err := s.invalidateBlockedCache(ctx, tx); err != nil { - return fmt.Errorf("failed to invalidate blocked cache: %w", err) - } - - result.DeletedCount = deletedCount - return nil -} - -// findAllDependentsRecursive finds all issues that depend on the given issues, recursively -func (s *SQLiteStorage) findAllDependentsRecursive(ctx context.Context, tx *sql.Tx, ids []string) (map[string]bool, error) { - result := make(map[string]bool) - for _, id := range ids { - result[id] = true - } - - toProcess := make([]string, len(ids)) - copy(toProcess, ids) - - for len(toProcess) > 0 { - current := toProcess[0] - toProcess = toProcess[1:] - - rows, err := tx.QueryContext(ctx, - `SELECT issue_id FROM dependencies WHERE depends_on_id = ?`, current) - if err != nil { - return nil, err - } - defer rows.Close() - - for rows.Next() { - var depID string - if err := rows.Scan(&depID); err != nil { - return nil, err - } - if !result[depID] { - result[depID] = true - toProcess = append(toProcess, depID) - } - } - if err := rows.Err(); err != nil { - return nil, err - } - } - - return result, nil -} - -// SearchIssues finds issues matching query and filters -func (s *SQLiteStorage) SearchIssues(ctx context.Context, query string, filter types.IssueFilter) ([]*types.Issue, error) { - // Check for external database file modifications (daemon mode) - s.checkFreshness() - - // Hold read lock during database operations to prevent reconnect() from - // closing the connection mid-query (GH#607 race condition fix) - s.reconnectMu.RLock() - defer s.reconnectMu.RUnlock() - - whereClauses := []string{} - args := []interface{}{} - - if query != "" { - whereClauses = append(whereClauses, "(title LIKE ? OR description LIKE ? 
OR id LIKE ?)") - pattern := "%" + query + "%" - args = append(args, pattern, pattern, pattern) - } - - if filter.TitleSearch != "" { - whereClauses = append(whereClauses, "title LIKE ?") - pattern := "%" + filter.TitleSearch + "%" - args = append(args, pattern) - } - - // Pattern matching - if filter.TitleContains != "" { - whereClauses = append(whereClauses, "title LIKE ?") - args = append(args, "%"+filter.TitleContains+"%") - } - if filter.DescriptionContains != "" { - whereClauses = append(whereClauses, "description LIKE ?") - args = append(args, "%"+filter.DescriptionContains+"%") - } - if filter.NotesContains != "" { - whereClauses = append(whereClauses, "notes LIKE ?") - args = append(args, "%"+filter.NotesContains+"%") - } - - if filter.Status != nil { - whereClauses = append(whereClauses, "status = ?") - args = append(args, *filter.Status) - } else if !filter.IncludeTombstones { - // Exclude tombstones by default unless explicitly filtering for them (bd-1bu) - whereClauses = append(whereClauses, "status != ?") - args = append(args, types.StatusTombstone) - } - - if filter.Priority != nil { - whereClauses = append(whereClauses, "priority = ?") - args = append(args, *filter.Priority) - } - - // Priority ranges - if filter.PriorityMin != nil { - whereClauses = append(whereClauses, "priority >= ?") - args = append(args, *filter.PriorityMin) - } - if filter.PriorityMax != nil { - whereClauses = append(whereClauses, "priority <= ?") - args = append(args, *filter.PriorityMax) - } - - if filter.IssueType != nil { - whereClauses = append(whereClauses, "issue_type = ?") - args = append(args, *filter.IssueType) - } - - if filter.Assignee != nil { - whereClauses = append(whereClauses, "assignee = ?") - args = append(args, *filter.Assignee) - } - - // Date ranges - if filter.CreatedAfter != nil { - whereClauses = append(whereClauses, "created_at > ?") - args = append(args, filter.CreatedAfter.Format(time.RFC3339)) - } - if filter.CreatedBefore != nil { - whereClauses = 
append(whereClauses, "created_at < ?") - args = append(args, filter.CreatedBefore.Format(time.RFC3339)) - } - if filter.UpdatedAfter != nil { - whereClauses = append(whereClauses, "updated_at > ?") - args = append(args, filter.UpdatedAfter.Format(time.RFC3339)) - } - if filter.UpdatedBefore != nil { - whereClauses = append(whereClauses, "updated_at < ?") - args = append(args, filter.UpdatedBefore.Format(time.RFC3339)) - } - if filter.ClosedAfter != nil { - whereClauses = append(whereClauses, "closed_at > ?") - args = append(args, filter.ClosedAfter.Format(time.RFC3339)) - } - if filter.ClosedBefore != nil { - whereClauses = append(whereClauses, "closed_at < ?") - args = append(args, filter.ClosedBefore.Format(time.RFC3339)) - } - - // Empty/null checks - if filter.EmptyDescription { - whereClauses = append(whereClauses, "(description IS NULL OR description = '')") - } - if filter.NoAssignee { - whereClauses = append(whereClauses, "(assignee IS NULL OR assignee = '')") - } - if filter.NoLabels { - whereClauses = append(whereClauses, "id NOT IN (SELECT DISTINCT issue_id FROM labels)") - } - - // Label filtering: issue must have ALL specified labels - if len(filter.Labels) > 0 { - for _, label := range filter.Labels { - whereClauses = append(whereClauses, "id IN (SELECT issue_id FROM labels WHERE label = ?)") - args = append(args, label) - } - } - - // Label filtering (OR): issue must have AT LEAST ONE of these labels - if len(filter.LabelsAny) > 0 { - placeholders := make([]string, len(filter.LabelsAny)) - for i, label := range filter.LabelsAny { - placeholders[i] = "?" - args = append(args, label) - } - whereClauses = append(whereClauses, fmt.Sprintf("id IN (SELECT issue_id FROM labels WHERE label IN (%s))", strings.Join(placeholders, ", "))) - } - - // ID filtering: match specific issue IDs - if len(filter.IDs) > 0 { - placeholders := make([]string, len(filter.IDs)) - for i, id := range filter.IDs { - placeholders[i] = "?" 
- args = append(args, id) - } - whereClauses = append(whereClauses, fmt.Sprintf("id IN (%s)", strings.Join(placeholders, ", "))) - } - - // Wisp filtering (bd-kwro.9) - if filter.Wisp != nil { - if *filter.Wisp { - whereClauses = append(whereClauses, "ephemeral = 1") // SQL column is still 'ephemeral' - } else { - whereClauses = append(whereClauses, "(ephemeral = 0 OR ephemeral IS NULL)") - } - } - - // Pinned filtering (bd-7h5) - if filter.Pinned != nil { - if *filter.Pinned { - whereClauses = append(whereClauses, "pinned = 1") - } else { - whereClauses = append(whereClauses, "(pinned = 0 OR pinned IS NULL)") - } - } - - // Template filtering (beads-1ra) - if filter.IsTemplate != nil { - if *filter.IsTemplate { - whereClauses = append(whereClauses, "is_template = 1") - } else { - whereClauses = append(whereClauses, "(is_template = 0 OR is_template IS NULL)") - } - } - - // Parent filtering (bd-yqhh): filter children by parent issue - if filter.ParentID != nil { - whereClauses = append(whereClauses, "id IN (SELECT issue_id FROM dependencies WHERE type = 'parent-child' AND depends_on_id = ?)") - args = append(args, *filter.ParentID) - } - - whereSQL := "" - if len(whereClauses) > 0 { - whereSQL = "WHERE " + strings.Join(whereClauses, " AND ") - } - - limitSQL := "" - if filter.Limit > 0 { - limitSQL = " LIMIT ?" - args = append(args, filter.Limit) - } - - // #nosec G201 - safe SQL with controlled formatting - querySQL := fmt.Sprintf(` - SELECT id, content_hash, title, description, design, acceptance_criteria, notes, - status, priority, issue_type, assignee, estimated_minutes, - created_at, updated_at, closed_at, external_ref, source_repo, close_reason, - deleted_at, deleted_by, delete_reason, original_type, - sender, ephemeral, pinned, is_template, - await_type, await_id, timeout_ns, waiters - FROM issues - %s - ORDER BY priority ASC, created_at DESC - %s - `, whereSQL, limitSQL) - - rows, err := s.db.QueryContext(ctx, querySQL, args...) 
- if err != nil { - return nil, fmt.Errorf("failed to search issues: %w", err) - } - defer func() { _ = rows.Close() }() - - return s.scanIssues(ctx, rows) -} diff --git a/internal/storage/sqlite/queries_delete.go b/internal/storage/sqlite/queries_delete.go new file mode 100644 index 00000000..b76b566f --- /dev/null +++ b/internal/storage/sqlite/queries_delete.go @@ -0,0 +1,464 @@ +package sqlite + +import ( + "context" + "database/sql" + "fmt" + "strings" + "time" + + "github.com/steveyegge/beads/internal/types" +) + +// CreateTombstone converts an existing issue to a tombstone record. +// This is a soft-delete that preserves the issue in the database with status="tombstone". +// The issue will still appear in exports but be excluded from normal queries. +// Dependencies must be removed separately before calling this method. +func (s *SQLiteStorage) CreateTombstone(ctx context.Context, id string, actor string, reason string) error { + // Get the issue to preserve its original type + issue, err := s.GetIssue(ctx, id) + if err != nil { + return fmt.Errorf("failed to get issue: %w", err) + } + if issue == nil { + return fmt.Errorf("issue not found: %s", id) + } + + tx, err := s.db.BeginTx(ctx, nil) + if err != nil { + return fmt.Errorf("failed to begin transaction: %w", err) + } + defer func() { _ = tx.Rollback() }() + + now := time.Now() + originalType := string(issue.IssueType) + + // Convert issue to tombstone + // Note: closed_at must be set to NULL because of CHECK constraint: + // (status = 'closed') = (closed_at IS NOT NULL) + _, err = tx.ExecContext(ctx, ` + UPDATE issues + SET status = ?, + closed_at = NULL, + deleted_at = ?, + deleted_by = ?, + delete_reason = ?, + original_type = ?, + updated_at = ? + WHERE id = ? 
+ `, types.StatusTombstone, now, actor, reason, originalType, now, id) + if err != nil { + return fmt.Errorf("failed to create tombstone: %w", err) + } + + // Record tombstone creation event + _, err = tx.ExecContext(ctx, ` + INSERT INTO events (issue_id, event_type, actor, comment) + VALUES (?, ?, ?, ?) + `, id, "deleted", actor, reason) + if err != nil { + return fmt.Errorf("failed to record tombstone event: %w", err) + } + + // Mark issue as dirty for incremental export + _, err = tx.ExecContext(ctx, ` + INSERT INTO dirty_issues (issue_id, marked_at) + VALUES (?, ?) + ON CONFLICT (issue_id) DO UPDATE SET marked_at = excluded.marked_at + `, id, now) + if err != nil { + return fmt.Errorf("failed to mark issue dirty: %w", err) + } + + // Invalidate blocked issues cache since status changed (bd-5qim) + // Tombstone issues don't block others, so this affects blocking calculations + if err := s.invalidateBlockedCache(ctx, tx); err != nil { + return fmt.Errorf("failed to invalidate blocked cache: %w", err) + } + + if err := tx.Commit(); err != nil { + return wrapDBError("commit tombstone transaction", err) + } + + return nil +} + +// DeleteIssue permanently removes an issue from the database +func (s *SQLiteStorage) DeleteIssue(ctx context.Context, id string) error { + tx, err := s.db.BeginTx(ctx, nil) + if err != nil { + return fmt.Errorf("failed to begin transaction: %w", err) + } + defer func() { _ = tx.Rollback() }() + + // Delete dependencies (both directions) + _, err = tx.ExecContext(ctx, `DELETE FROM dependencies WHERE issue_id = ? 
OR depends_on_id = ?`, id, id) + if err != nil { + return fmt.Errorf("failed to delete dependencies: %w", err) + } + + // Delete events + _, err = tx.ExecContext(ctx, `DELETE FROM events WHERE issue_id = ?`, id) + if err != nil { + return fmt.Errorf("failed to delete events: %w", err) + } + + // Delete comments (no FK cascade on this table) (bd-687g) + _, err = tx.ExecContext(ctx, `DELETE FROM comments WHERE issue_id = ?`, id) + if err != nil { + return fmt.Errorf("failed to delete comments: %w", err) + } + + // Delete from dirty_issues + _, err = tx.ExecContext(ctx, `DELETE FROM dirty_issues WHERE issue_id = ?`, id) + if err != nil { + return fmt.Errorf("failed to delete dirty marker: %w", err) + } + + // Delete the issue itself + result, err := tx.ExecContext(ctx, `DELETE FROM issues WHERE id = ?`, id) + if err != nil { + return fmt.Errorf("failed to delete issue: %w", err) + } + + rowsAffected, err := result.RowsAffected() + if err != nil { + return fmt.Errorf("failed to check rows affected: %w", err) + } + if rowsAffected == 0 { + return fmt.Errorf("issue not found: %s", id) + } + + if err := tx.Commit(); err != nil { + return wrapDBError("commit delete transaction", err) + } + + // REMOVED (bd-c7af): Counter sync after deletion - no longer needed with hash IDs + return nil +} + +// DeleteIssuesResult contains statistics about a batch deletion operation +type DeleteIssuesResult struct { + DeletedCount int + DependenciesCount int + LabelsCount int + EventsCount int + OrphanedIssues []string +} + +// DeleteIssues deletes multiple issues in a single transaction +// If cascade is true, recursively deletes dependents +// If cascade is false but force is true, deletes issues and orphans their dependents +// If cascade and force are both false, returns an error if any issue has dependents +// If dryRun is true, only computes statistics without deleting +func (s *SQLiteStorage) DeleteIssues(ctx context.Context, ids []string, cascade bool, force bool, dryRun bool) 
(*DeleteIssuesResult, error) { + if len(ids) == 0 { + return &DeleteIssuesResult{}, nil + } + + tx, err := s.db.BeginTx(ctx, nil) + if err != nil { + return nil, fmt.Errorf("failed to begin transaction: %w", err) + } + defer func() { _ = tx.Rollback() }() + + idSet := buildIDSet(ids) + result := &DeleteIssuesResult{} + + expandedIDs, err := s.resolveDeleteSet(ctx, tx, ids, idSet, cascade, force, result) + if err != nil { + return nil, wrapDBError("resolve delete set", err) + } + + inClause, args := buildSQLInClause(expandedIDs) + if err := s.populateDeleteStats(ctx, tx, inClause, args, result); err != nil { + return nil, err + } + + if dryRun { + return result, nil + } + + if err := s.executeDelete(ctx, tx, inClause, args, result); err != nil { + return nil, err + } + + if err := tx.Commit(); err != nil { + return nil, fmt.Errorf("failed to commit transaction: %w", err) + } + + // REMOVED (bd-c7af): Counter sync after deletion - no longer needed with hash IDs + + return result, nil +} + +func buildIDSet(ids []string) map[string]bool { + idSet := make(map[string]bool, len(ids)) + for _, id := range ids { + idSet[id] = true + } + return idSet +} + +func (s *SQLiteStorage) resolveDeleteSet(ctx context.Context, tx *sql.Tx, ids []string, idSet map[string]bool, cascade bool, force bool, result *DeleteIssuesResult) ([]string, error) { + if cascade { + return s.expandWithDependents(ctx, tx, ids, idSet) + } + if !force { + return ids, s.validateNoDependents(ctx, tx, ids, idSet, result) + } + return ids, s.trackOrphanedIssues(ctx, tx, ids, idSet, result) +} + +func (s *SQLiteStorage) expandWithDependents(ctx context.Context, tx *sql.Tx, ids []string, _ map[string]bool) ([]string, error) { + allToDelete, err := s.findAllDependentsRecursive(ctx, tx, ids) + if err != nil { + return nil, fmt.Errorf("failed to find dependents: %w", err) + } + expandedIDs := make([]string, 0, len(allToDelete)) + for id := range allToDelete { + expandedIDs = append(expandedIDs, id) + } + return 
expandedIDs, nil +} + +func (s *SQLiteStorage) validateNoDependents(ctx context.Context, tx *sql.Tx, ids []string, idSet map[string]bool, result *DeleteIssuesResult) error { + for _, id := range ids { + if err := s.checkSingleIssueValidation(ctx, tx, id, idSet, result); err != nil { + return wrapDBError("check dependents", err) + } + } + return nil +} + +func (s *SQLiteStorage) checkSingleIssueValidation(ctx context.Context, tx *sql.Tx, id string, idSet map[string]bool, result *DeleteIssuesResult) error { + var depCount int + err := tx.QueryRowContext(ctx, + `SELECT COUNT(*) FROM dependencies WHERE depends_on_id = ?`, id).Scan(&depCount) + if err != nil { + return fmt.Errorf("failed to check dependents for %s: %w", id, err) + } + if depCount == 0 { + return nil + } + + rows, err := tx.QueryContext(ctx, + `SELECT issue_id FROM dependencies WHERE depends_on_id = ?`, id) + if err != nil { + return fmt.Errorf("failed to get dependents for %s: %w", id, err) + } + defer func() { _ = rows.Close() }() + + hasExternal := false + for rows.Next() { + var depID string + if err := rows.Scan(&depID); err != nil { + return fmt.Errorf("failed to scan dependent: %w", err) + } + if !idSet[depID] { + hasExternal = true + result.OrphanedIssues = append(result.OrphanedIssues, depID) + } + } + + if err := rows.Err(); err != nil { + return fmt.Errorf("failed to iterate dependents for %s: %w", id, err) + } + + if hasExternal { + return fmt.Errorf("issue %s has dependents not in deletion set; use --cascade to delete them or --force to orphan them", id) + } + return nil +} + +func (s *SQLiteStorage) trackOrphanedIssues(ctx context.Context, tx *sql.Tx, ids []string, idSet map[string]bool, result *DeleteIssuesResult) error { + orphanSet := make(map[string]bool) + for _, id := range ids { + if err := s.collectOrphansForID(ctx, tx, id, idSet, orphanSet); err != nil { + return wrapDBError("collect orphans", err) + } + } + for orphanID := range orphanSet { + result.OrphanedIssues = 
append(result.OrphanedIssues, orphanID) + } + return nil +} + +func (s *SQLiteStorage) collectOrphansForID(ctx context.Context, tx *sql.Tx, id string, idSet map[string]bool, orphanSet map[string]bool) error { + rows, err := tx.QueryContext(ctx, + `SELECT issue_id FROM dependencies WHERE depends_on_id = ?`, id) + if err != nil { + return fmt.Errorf("failed to get dependents for %s: %w", id, err) + } + defer func() { _ = rows.Close() }() + + for rows.Next() { + var depID string + if err := rows.Scan(&depID); err != nil { + return fmt.Errorf("failed to scan dependent: %w", err) + } + if !idSet[depID] { + orphanSet[depID] = true + } + } + return rows.Err() +} + +func buildSQLInClause(ids []string) (string, []interface{}) { + placeholders := make([]string, len(ids)) + args := make([]interface{}, len(ids)) + for i, id := range ids { + placeholders[i] = "?" + args[i] = id + } + return strings.Join(placeholders, ","), args +} + +func (s *SQLiteStorage) populateDeleteStats(ctx context.Context, tx *sql.Tx, inClause string, args []interface{}, result *DeleteIssuesResult) error { + counts := []struct { + query string + dest *int + }{ + {fmt.Sprintf(`SELECT COUNT(*) FROM dependencies WHERE issue_id IN (%s) OR depends_on_id IN (%s)`, inClause, inClause), &result.DependenciesCount}, + {fmt.Sprintf(`SELECT COUNT(*) FROM labels WHERE issue_id IN (%s)`, inClause), &result.LabelsCount}, + {fmt.Sprintf(`SELECT COUNT(*) FROM events WHERE issue_id IN (%s)`, inClause), &result.EventsCount}, + } + + for _, c := range counts { + queryArgs := args + if c.dest == &result.DependenciesCount { + queryArgs = append(args, args...) 
+ } + if err := tx.QueryRowContext(ctx, c.query, queryArgs...).Scan(c.dest); err != nil { + return fmt.Errorf("failed to count: %w", err) + } + } + + result.DeletedCount = len(args) + return nil +} + +func (s *SQLiteStorage) executeDelete(ctx context.Context, tx *sql.Tx, inClause string, args []interface{}, result *DeleteIssuesResult) error { + // Note: This method now creates tombstones instead of hard-deleting (bd-3b4) + // Only dependencies are deleted - issues are converted to tombstones + + // 1. Delete dependencies - tombstones don't block other issues + _, err := tx.ExecContext(ctx, + fmt.Sprintf(`DELETE FROM dependencies WHERE issue_id IN (%s) OR depends_on_id IN (%s)`, inClause, inClause), + append(args, args...)...) + if err != nil { + return fmt.Errorf("failed to delete dependencies: %w", err) + } + + // 2. Get issue types before converting to tombstones (need for original_type) + issueTypes := make(map[string]string) + rows, err := tx.QueryContext(ctx, + fmt.Sprintf(`SELECT id, issue_type FROM issues WHERE id IN (%s)`, inClause), + args...) + if err != nil { + return fmt.Errorf("failed to get issue types: %w", err) + } + for rows.Next() { + var id, issueType string + if err := rows.Scan(&id, &issueType); err != nil { + _ = rows.Close() // #nosec G104 - error handling not critical in error path + return fmt.Errorf("failed to scan issue type: %w", err) + } + issueTypes[id] = issueType + } + _ = rows.Close() + + // 3. Convert issues to tombstones (only for issues that exist) + // Note: closed_at must be set to NULL because of CHECK constraint: + // (status = 'closed') = (closed_at IS NOT NULL) + now := time.Now() + deletedCount := 0 + for id, originalType := range issueTypes { + execResult, err := tx.ExecContext(ctx, ` + UPDATE issues + SET status = ?, + closed_at = NULL, + deleted_at = ?, + deleted_by = ?, + delete_reason = ?, + original_type = ?, + updated_at = ? + WHERE id = ? 
+ `, types.StatusTombstone, now, "batch delete", "batch delete", originalType, now, id) + if err != nil { + return fmt.Errorf("failed to create tombstone for %s: %w", id, err) + } + + rowsAffected, _ := execResult.RowsAffected() + if rowsAffected == 0 { + continue // Issue doesn't exist, skip + } + deletedCount++ + + // Record tombstone creation event + _, err = tx.ExecContext(ctx, ` + INSERT INTO events (issue_id, event_type, actor, comment) + VALUES (?, ?, ?, ?) + `, id, "deleted", "batch delete", "batch delete") + if err != nil { + return fmt.Errorf("failed to record tombstone event for %s: %w", id, err) + } + + // Mark issue as dirty for incremental export + _, err = tx.ExecContext(ctx, ` + INSERT INTO dirty_issues (issue_id, marked_at) + VALUES (?, ?) + ON CONFLICT (issue_id) DO UPDATE SET marked_at = excluded.marked_at + `, id, now) + if err != nil { + return fmt.Errorf("failed to mark issue dirty for %s: %w", id, err) + } + } + + // 4. Invalidate blocked issues cache since statuses changed (bd-5qim) + if err := s.invalidateBlockedCache(ctx, tx); err != nil { + return fmt.Errorf("failed to invalidate blocked cache: %w", err) + } + + result.DeletedCount = deletedCount + return nil +} + +// findAllDependentsRecursive finds all issues that depend on the given issues, recursively +func (s *SQLiteStorage) findAllDependentsRecursive(ctx context.Context, tx *sql.Tx, ids []string) (map[string]bool, error) { + result := make(map[string]bool) + for _, id := range ids { + result[id] = true + } + + toProcess := make([]string, len(ids)) + copy(toProcess, ids) + + for len(toProcess) > 0 { + current := toProcess[0] + toProcess = toProcess[1:] + + rows, err := tx.QueryContext(ctx, + `SELECT issue_id FROM dependencies WHERE depends_on_id = ?`, current) + if err != nil { + return nil, err + } + defer rows.Close() + + for rows.Next() { + var depID string + if err := rows.Scan(&depID); err != nil { + return nil, err + } + if !result[depID] { + result[depID] = true + toProcess = 
append(toProcess, depID) + } + } + if err := rows.Err(); err != nil { + return nil, err + } + } + + return result, nil +} diff --git a/internal/storage/sqlite/queries_helpers.go b/internal/storage/sqlite/queries_helpers.go new file mode 100644 index 00000000..c1af423f --- /dev/null +++ b/internal/storage/sqlite/queries_helpers.go @@ -0,0 +1,50 @@ +package sqlite + +import ( + "database/sql" + "encoding/json" + "time" +) + +// parseNullableTimeString parses a nullable time string from database TEXT columns. +// The ncruces/go-sqlite3 driver only auto-converts TEXTβ†’time.Time for columns declared +// as DATETIME/DATE/TIME/TIMESTAMP. For TEXT columns (like deleted_at), we must parse manually. +// Supports RFC3339, RFC3339Nano, and SQLite's native format. +func parseNullableTimeString(ns sql.NullString) *time.Time { + if !ns.Valid || ns.String == "" { + return nil + } + // Try RFC3339Nano first (more precise), then RFC3339, then SQLite format + for _, layout := range []string{time.RFC3339Nano, time.RFC3339, "2006-01-02 15:04:05"} { + if t, err := time.Parse(layout, ns.String); err == nil { + return &t + } + } + return nil // Unparseable - shouldn't happen with valid data +} + +// parseJSONStringArray parses a JSON string array from database TEXT column. +// Returns empty slice if the string is empty or invalid JSON. +func parseJSONStringArray(s string) []string { + if s == "" { + return nil + } + var result []string + if err := json.Unmarshal([]byte(s), &result); err != nil { + return nil // Invalid JSON - shouldn't happen with valid data + } + return result +} + +// formatJSONStringArray formats a string slice as JSON for database storage. +// Returns empty string if the slice is nil or empty. 
+func formatJSONStringArray(arr []string) string {
+	if len(arr) == 0 {
+		return ""
+	}
+	data, err := json.Marshal(arr)
+	if err != nil {
+		return ""
+	}
+	return string(data)
+}
diff --git a/internal/storage/sqlite/queries_rename.go b/internal/storage/sqlite/queries_rename.go
new file mode 100644
index 00000000..b68f4631
--- /dev/null
+++ b/internal/storage/sqlite/queries_rename.go
@@ -0,0 +1,149 @@
+package sqlite
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/steveyegge/beads/internal/types"
+)
+
+// UpdateIssueID updates an issue ID and all its text fields in a single transaction
+func (s *SQLiteStorage) UpdateIssueID(ctx context.Context, oldID, newID string, issue *types.Issue, actor string) error {
+	// Get exclusive connection to ensure PRAGMA applies
+	conn, err := s.db.Conn(ctx)
+	if err != nil {
+		return fmt.Errorf("failed to get connection: %w", err)
+	}
+	defer func() { _ = conn.Close() }()
+
+	// Disable foreign keys on this specific connection
+	_, err = conn.ExecContext(ctx, `PRAGMA foreign_keys = OFF`)
+	if err != nil {
+		return fmt.Errorf("failed to disable foreign keys: %w", err)
+	}
+
+	tx, err := conn.BeginTx(ctx, nil)
+	if err != nil {
+		return fmt.Errorf("failed to begin transaction: %w", err)
+	}
+	defer func() { _ = tx.Rollback() }()
+
+	result, err := tx.ExecContext(ctx, `
+		UPDATE issues
+		SET id = ?, title = ?, description = ?, design = ?, acceptance_criteria = ?, notes = ?, updated_at = ?
+		WHERE id = ?
+	`, newID, issue.Title, issue.Description, issue.Design, issue.AcceptanceCriteria, issue.Notes, time.Now(), oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update issue ID: %w", err)
+	}
+
+	rows, err := result.RowsAffected()
+	if err != nil {
+		return fmt.Errorf("failed to get rows affected: %w", err)
+	}
+	if rows == 0 {
+		return fmt.Errorf("issue not found: %s", oldID)
+	}
+
+	_, err = tx.ExecContext(ctx, `UPDATE dependencies SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update issue_id in dependencies: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `UPDATE dependencies SET depends_on_id = ? WHERE depends_on_id = ?`, newID, oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update depends_on_id in dependencies: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `UPDATE events SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update events: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `UPDATE labels SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update labels: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `UPDATE comments SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update comments: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `
+		UPDATE dirty_issues SET issue_id = ? WHERE issue_id = ?
+	`, newID, oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update dirty_issues: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `UPDATE issue_snapshots SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update issue_snapshots: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `UPDATE compaction_snapshots SET issue_id = ? WHERE issue_id = ?`, newID, oldID)
+	if err != nil {
+		return fmt.Errorf("failed to update compaction_snapshots: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `
+		INSERT INTO dirty_issues (issue_id, marked_at)
+		VALUES (?, ?)
+		ON CONFLICT (issue_id) DO UPDATE SET marked_at = excluded.marked_at
+	`, newID, time.Now())
+	if err != nil {
+		return fmt.Errorf("failed to mark issue dirty: %w", err)
+	}
+
+	_, err = tx.ExecContext(ctx, `
+		INSERT INTO events (issue_id, event_type, actor, old_value, new_value)
+		VALUES (?, 'renamed', ?, ?, ?)
+	`, newID, actor, oldID, newID)
+	if err != nil {
+		return fmt.Errorf("failed to record rename event: %w", err)
+	}
+
+	return tx.Commit()
+}
+
+// RenameDependencyPrefix updates the prefix in all dependency records
+// GH#630: This was previously a no-op, causing dependencies to break after rename-prefix
+func (s *SQLiteStorage) RenameDependencyPrefix(ctx context.Context, oldPrefix, newPrefix string) error {
+	// Update issue_id column
+	_, err := s.db.ExecContext(ctx, `
+		UPDATE dependencies
+		SET issue_id = ? || substr(issue_id, length(?) + 1)
+		WHERE issue_id LIKE ? || '%'
+	`, newPrefix, oldPrefix, oldPrefix)
+	if err != nil {
+		return fmt.Errorf("failed to update issue_id in dependencies: %w", err)
+	}
+
+	// Update depends_on_id column
+	_, err = s.db.ExecContext(ctx, `
+		UPDATE dependencies
+		SET depends_on_id = ? || substr(depends_on_id, length(?) + 1)
+		WHERE depends_on_id LIKE ? || '%'
+	`, newPrefix, oldPrefix, oldPrefix)
+	if err != nil {
+		return fmt.Errorf("failed to update depends_on_id in dependencies: %w", err)
+	}
+
+	return nil
+}
+
+// RenameCounterPrefix is a no-op with hash-based IDs (bd-8e05)
+// Kept for backward compatibility with rename-prefix command
+func (s *SQLiteStorage) RenameCounterPrefix(ctx context.Context, oldPrefix, newPrefix string) error {
+	// Hash-based IDs don't use counters, so nothing to update
+	return nil
+}
+
+// ResetCounter is a no-op with hash-based IDs (bd-8e05)
+// Kept for backward compatibility
+func (s *SQLiteStorage) ResetCounter(ctx context.Context, prefix string) error {
+	// Hash-based IDs don't use counters, so nothing to reset
+	return nil
+}
diff --git a/internal/storage/sqlite/queries_search.go b/internal/storage/sqlite/queries_search.go
new file mode 100644
index 00000000..16c3f075
--- /dev/null
+++ b/internal/storage/sqlite/queries_search.go
@@ -0,0 +1,429 @@
+package sqlite
+
+import (
+	"context"
+	"database/sql"
+	"fmt"
+	"strings"
+	"time"
+
+	"github.com/steveyegge/beads/internal/types"
+)
+
+// GetCloseReason retrieves the close reason from the most recent closed event for an issue
+func (s *SQLiteStorage) GetCloseReason(ctx context.Context, issueID string) (string, error) {
+	var comment sql.NullString
+	err := s.db.QueryRowContext(ctx, `
+		SELECT comment FROM events
+		WHERE issue_id = ? AND event_type = ?
+		ORDER BY created_at DESC
+		LIMIT 1
+	`, issueID, types.EventClosed).Scan(&comment)
+
+	if err == sql.ErrNoRows {
+		return "", nil
+	}
+	if err != nil {
+		return "", fmt.Errorf("failed to get close reason: %w", err)
+	}
+	if comment.Valid {
+		return comment.String, nil
+	}
+	return "", nil
+}
+
+// GetCloseReasonsForIssues retrieves close reasons for multiple issues in a single query
+func (s *SQLiteStorage) GetCloseReasonsForIssues(ctx context.Context, issueIDs []string) (map[string]string, error) {
+	result := make(map[string]string)
+	if len(issueIDs) == 0 {
+		return result, nil
+	}
+
+	// Build placeholders for IN clause
+	placeholders := make([]string, len(issueIDs))
+	args := make([]interface{}, len(issueIDs)+1)
+	args[0] = types.EventClosed
+	for i, id := range issueIDs {
+		placeholders[i] = "?"
+		args[i+1] = id
+	}
+
+	// Use a subquery to get the most recent closed event for each issue
+	// #nosec G201 - safe SQL with controlled formatting
+	query := fmt.Sprintf(`
+		SELECT e.issue_id, e.comment
+		FROM events e
+		INNER JOIN (
+			SELECT issue_id, MAX(created_at) as max_created_at
+			FROM events
+			WHERE event_type = ? AND issue_id IN (%s)
+			GROUP BY issue_id
+		) latest ON e.issue_id = latest.issue_id AND e.created_at = latest.max_created_at
+		WHERE e.event_type = ?
+	`, strings.Join(placeholders, ", "))
+
+	// Append event_type again for the outer WHERE clause
+	args = append(args, types.EventClosed)
+
+	rows, err := s.db.QueryContext(ctx, query, args...)
+	if err != nil {
+		return nil, fmt.Errorf("failed to get close reasons: %w", err)
+	}
+	defer func() { _ = rows.Close() }()
+
+	for rows.Next() {
+		var issueID string
+		var comment sql.NullString
+		if err := rows.Scan(&issueID, &comment); err != nil {
+			return nil, fmt.Errorf("failed to scan close reason: %w", err)
+		}
+		if comment.Valid && comment.String != "" {
+			result[issueID] = comment.String
+		}
+	}
+
+	return result, nil
+}
+
+// GetIssueByExternalRef retrieves an issue by external reference
+func (s *SQLiteStorage) GetIssueByExternalRef(ctx context.Context, externalRef string) (*types.Issue, error) {
+	var issue types.Issue
+	var closedAt sql.NullTime
+	var estimatedMinutes sql.NullInt64
+	var assignee sql.NullString
+	var externalRefCol sql.NullString
+	var compactedAt sql.NullTime
+	var originalSize sql.NullInt64
+	var contentHash sql.NullString
+	var compactedAtCommit sql.NullString
+	var sourceRepo sql.NullString
+	var closeReason sql.NullString
+	var deletedAt sql.NullString // TEXT column, not DATETIME - must parse manually
+	var deletedBy sql.NullString
+	var deleteReason sql.NullString
+	var originalType sql.NullString
+	// Messaging fields (bd-kwro)
+	var sender sql.NullString
+	var wisp sql.NullInt64
+	// Pinned field (bd-7h5)
+	var pinned sql.NullInt64
+	// Template field (beads-1ra)
+	var isTemplate sql.NullInt64
+	// Gate fields (bd-udsi)
+	var awaitType sql.NullString
+	var awaitID sql.NullString
+	var timeoutNs sql.NullInt64
+	var waiters sql.NullString
+
+	err := s.db.QueryRowContext(ctx, `
+		SELECT id, content_hash, title, description, design, acceptance_criteria, notes,
+			status, priority, issue_type, assignee, estimated_minutes,
+			created_at, updated_at, closed_at, external_ref,
+			compaction_level, compacted_at, compacted_at_commit, original_size, source_repo, close_reason,
+			deleted_at, deleted_by, delete_reason, original_type,
+			sender, ephemeral, pinned, is_template,
+			await_type, await_id, timeout_ns, waiters
+		FROM issues
+		WHERE external_ref = ?
+	`, externalRef).Scan(
+		&issue.ID, &contentHash, &issue.Title, &issue.Description, &issue.Design,
+		&issue.AcceptanceCriteria, &issue.Notes, &issue.Status,
+		&issue.Priority, &issue.IssueType, &assignee, &estimatedMinutes,
+		&issue.CreatedAt, &issue.UpdatedAt, &closedAt, &externalRefCol,
+		&issue.CompactionLevel, &compactedAt, &compactedAtCommit, &originalSize, &sourceRepo, &closeReason,
+		&deletedAt, &deletedBy, &deleteReason, &originalType,
+		&sender, &wisp, &pinned, &isTemplate,
+		&awaitType, &awaitID, &timeoutNs, &waiters,
+	)
+
+	if err == sql.ErrNoRows {
+		return nil, nil
+	}
+	if err != nil {
+		return nil, fmt.Errorf("failed to get issue by external_ref: %w", err)
+	}
+
+	if contentHash.Valid {
+		issue.ContentHash = contentHash.String
+	}
+	if closedAt.Valid {
+		issue.ClosedAt = &closedAt.Time
+	}
+	if estimatedMinutes.Valid {
+		mins := int(estimatedMinutes.Int64)
+		issue.EstimatedMinutes = &mins
+	}
+	if assignee.Valid {
+		issue.Assignee = assignee.String
+	}
+	if externalRefCol.Valid {
+		issue.ExternalRef = &externalRefCol.String
+	}
+	if compactedAt.Valid {
+		issue.CompactedAt = &compactedAt.Time
+	}
+	if compactedAtCommit.Valid {
+		issue.CompactedAtCommit = &compactedAtCommit.String
+	}
+	if originalSize.Valid {
+		issue.OriginalSize = int(originalSize.Int64)
+	}
+	if sourceRepo.Valid {
+		issue.SourceRepo = sourceRepo.String
+	}
+	if closeReason.Valid {
+		issue.CloseReason = closeReason.String
+	}
+	issue.DeletedAt = parseNullableTimeString(deletedAt)
+	if deletedBy.Valid {
+		issue.DeletedBy = deletedBy.String
+	}
+	if deleteReason.Valid {
+		issue.DeleteReason = deleteReason.String
+	}
+	if originalType.Valid {
+		issue.OriginalType = originalType.String
+	}
+	// Messaging fields (bd-kwro)
+	if sender.Valid {
+		issue.Sender = sender.String
+	}
+	if wisp.Valid && wisp.Int64 != 0 {
+		issue.Wisp = true
+	}
+	// Pinned field (bd-7h5)
+	if pinned.Valid && pinned.Int64 != 0 {
+		issue.Pinned = true
+	}
+	// Template field (beads-1ra)
+	if isTemplate.Valid && isTemplate.Int64 != 0 {
+		issue.IsTemplate = true
+	}
+	// Gate fields (bd-udsi)
+	if awaitType.Valid {
+		issue.AwaitType = awaitType.String
+	}
+	if awaitID.Valid {
+		issue.AwaitID = awaitID.String
+	}
+	if timeoutNs.Valid {
+		issue.Timeout = time.Duration(timeoutNs.Int64)
+	}
+	if waiters.Valid && waiters.String != "" {
+		issue.Waiters = parseJSONStringArray(waiters.String)
+	}
+
+	// Fetch labels for this issue
+	labels, err := s.GetLabels(ctx, issue.ID)
+	if err != nil {
+		return nil, fmt.Errorf("failed to get labels: %w", err)
+	}
+	issue.Labels = labels
+
+	return &issue, nil
+}
+
+// SearchIssues finds issues matching query and filters
+func (s *SQLiteStorage) SearchIssues(ctx context.Context, query string, filter types.IssueFilter) ([]*types.Issue, error) {
+	// Check for external database file modifications (daemon mode)
+	s.checkFreshness()
+
+	// Hold read lock during database operations to prevent reconnect() from
+	// closing the connection mid-query (GH#607 race condition fix)
+	s.reconnectMu.RLock()
+	defer s.reconnectMu.RUnlock()
+
+	whereClauses := []string{}
+	args := []interface{}{}
+
+	if query != "" {
+		whereClauses = append(whereClauses, "(title LIKE ? OR description LIKE ? OR id LIKE ?)")
+		pattern := "%" + query + "%"
+		args = append(args, pattern, pattern, pattern)
+	}
+
+	if filter.TitleSearch != "" {
+		whereClauses = append(whereClauses, "title LIKE ?")
+		pattern := "%" + filter.TitleSearch + "%"
+		args = append(args, pattern)
+	}
+
+	// Pattern matching
+	if filter.TitleContains != "" {
+		whereClauses = append(whereClauses, "title LIKE ?")
+		args = append(args, "%"+filter.TitleContains+"%")
+	}
+	if filter.DescriptionContains != "" {
+		whereClauses = append(whereClauses, "description LIKE ?")
+		args = append(args, "%"+filter.DescriptionContains+"%")
+	}
+	if filter.NotesContains != "" {
+		whereClauses = append(whereClauses, "notes LIKE ?")
+		args = append(args, "%"+filter.NotesContains+"%")
+	}
+
+	if filter.Status != nil {
+		whereClauses = append(whereClauses, "status = ?")
+		args = append(args, *filter.Status)
+	} else if !filter.IncludeTombstones {
+		// Exclude tombstones by default unless explicitly filtering for them (bd-1bu)
+		whereClauses = append(whereClauses, "status != ?")
+		args = append(args, types.StatusTombstone)
+	}
+
+	if filter.Priority != nil {
+		whereClauses = append(whereClauses, "priority = ?")
+		args = append(args, *filter.Priority)
+	}
+
+	// Priority ranges
+	if filter.PriorityMin != nil {
+		whereClauses = append(whereClauses, "priority >= ?")
+		args = append(args, *filter.PriorityMin)
+	}
+	if filter.PriorityMax != nil {
+		whereClauses = append(whereClauses, "priority <= ?")
+		args = append(args, *filter.PriorityMax)
+	}
+
+	if filter.IssueType != nil {
+		whereClauses = append(whereClauses, "issue_type = ?")
+		args = append(args, *filter.IssueType)
+	}
+
+	if filter.Assignee != nil {
+		whereClauses = append(whereClauses, "assignee = ?")
+		args = append(args, *filter.Assignee)
+	}
+
+	// Date ranges
+	if filter.CreatedAfter != nil {
+		whereClauses = append(whereClauses, "created_at > ?")
+		args = append(args, filter.CreatedAfter.Format(time.RFC3339))
+	}
+	if filter.CreatedBefore != nil {
+		whereClauses = append(whereClauses, "created_at < ?")
+		args = append(args, filter.CreatedBefore.Format(time.RFC3339))
+	}
+	if filter.UpdatedAfter != nil {
+		whereClauses = append(whereClauses, "updated_at > ?")
+		args = append(args, filter.UpdatedAfter.Format(time.RFC3339))
+	}
+	if filter.UpdatedBefore != nil {
+		whereClauses = append(whereClauses, "updated_at < ?")
+		args = append(args, filter.UpdatedBefore.Format(time.RFC3339))
+	}
+	if filter.ClosedAfter != nil {
+		whereClauses = append(whereClauses, "closed_at > ?")
+		args = append(args, filter.ClosedAfter.Format(time.RFC3339))
+	}
+	if filter.ClosedBefore != nil {
+		whereClauses = append(whereClauses, "closed_at < ?")
+		args = append(args, filter.ClosedBefore.Format(time.RFC3339))
+	}
+
+	// Empty/null checks
+	if filter.EmptyDescription {
+		whereClauses = append(whereClauses, "(description IS NULL OR description = '')")
+	}
+	if filter.NoAssignee {
+		whereClauses = append(whereClauses, "(assignee IS NULL OR assignee = '')")
+	}
+	if filter.NoLabels {
+		whereClauses = append(whereClauses, "id NOT IN (SELECT DISTINCT issue_id FROM labels)")
+	}
+
+	// Label filtering: issue must have ALL specified labels
+	if len(filter.Labels) > 0 {
+		for _, label := range filter.Labels {
+			whereClauses = append(whereClauses, "id IN (SELECT issue_id FROM labels WHERE label = ?)")
+			args = append(args, label)
+		}
+	}
+
+	// Label filtering (OR): issue must have AT LEAST ONE of these labels
+	if len(filter.LabelsAny) > 0 {
+		placeholders := make([]string, len(filter.LabelsAny))
+		for i, label := range filter.LabelsAny {
+			placeholders[i] = "?"
+			args = append(args, label)
+		}
+		whereClauses = append(whereClauses, fmt.Sprintf("id IN (SELECT issue_id FROM labels WHERE label IN (%s))", strings.Join(placeholders, ", ")))
+	}
+
+	// ID filtering: match specific issue IDs
+	if len(filter.IDs) > 0 {
+		placeholders := make([]string, len(filter.IDs))
+		for i, id := range filter.IDs {
+			placeholders[i] = "?"
+			args = append(args, id)
+		}
+		whereClauses = append(whereClauses, fmt.Sprintf("id IN (%s)", strings.Join(placeholders, ", ")))
+	}
+
+	// Wisp filtering (bd-kwro.9)
+	if filter.Wisp != nil {
+		if *filter.Wisp {
+			whereClauses = append(whereClauses, "ephemeral = 1") // SQL column is still 'ephemeral'
+		} else {
+			whereClauses = append(whereClauses, "(ephemeral = 0 OR ephemeral IS NULL)")
+		}
+	}
+
+	// Pinned filtering (bd-7h5)
+	if filter.Pinned != nil {
+		if *filter.Pinned {
+			whereClauses = append(whereClauses, "pinned = 1")
+		} else {
+			whereClauses = append(whereClauses, "(pinned = 0 OR pinned IS NULL)")
+		}
+	}
+
+	// Template filtering (beads-1ra)
+	if filter.IsTemplate != nil {
+		if *filter.IsTemplate {
+			whereClauses = append(whereClauses, "is_template = 1")
+		} else {
+			whereClauses = append(whereClauses, "(is_template = 0 OR is_template IS NULL)")
+		}
+	}
+
+	// Parent filtering (bd-yqhh): filter children by parent issue
+	if filter.ParentID != nil {
+		whereClauses = append(whereClauses, "id IN (SELECT issue_id FROM dependencies WHERE type = 'parent-child' AND depends_on_id = ?)")
+		args = append(args, *filter.ParentID)
+	}
+
+	whereSQL := ""
+	if len(whereClauses) > 0 {
+		whereSQL = "WHERE " + strings.Join(whereClauses, " AND ")
+	}
+
+	limitSQL := ""
+	if filter.Limit > 0 {
+		limitSQL = " LIMIT ?"
+		args = append(args, filter.Limit)
+	}
+
+	// #nosec G201 - safe SQL with controlled formatting
+	querySQL := fmt.Sprintf(`
+		SELECT id, content_hash, title, description, design, acceptance_criteria, notes,
+			status, priority, issue_type, assignee, estimated_minutes,
+			created_at, updated_at, closed_at, external_ref, source_repo, close_reason,
+			deleted_at, deleted_by, delete_reason, original_type,
+			sender, ephemeral, pinned, is_template,
+			await_type, await_id, timeout_ns, waiters
+		FROM issues
+		%s
+		ORDER BY priority ASC, created_at DESC
+		%s
+	`, whereSQL, limitSQL)
+
+	rows, err := s.db.QueryContext(ctx, querySQL, args...)
+	if err != nil {
+		return nil, fmt.Errorf("failed to search issues: %w", err)
+	}
+	defer func() { _ = rows.Close() }()
+
+	return s.scanIssues(ctx, rows)
+}
diff --git a/internal/storage/sqlite/ready.go b/internal/storage/sqlite/ready.go
index d6d9461b..01db66cc 100644
--- a/internal/storage/sqlite/ready.go
+++ b/internal/storage/sqlite/ready.go
@@ -33,14 +33,6 @@ func (s *SQLiteStorage) GetReadyWork(ctx context.Context, filter types.WorkFilte
 	if filter.Type != "" {
 		whereClauses = append(whereClauses, "i.issue_type = ?")
 		args = append(args, filter.Type)
-	} else {
-		// Exclude workflow types from ready work by default (gt-7xtn)
-		// These are internal workflow items, not work for polecats to claim:
-		// - merge-request: processed by Refinery
-		// - gate: async wait conditions
-		// - molecule: workflow containers
-		// - message: mail/communication items
-		whereClauses = append(whereClauses, "i.issue_type NOT IN ('merge-request', 'gate', 'molecule', 'message')")
 	}
 
 	if filter.Priority != nil {
diff --git a/internal/syncbranch/syncbranch_test.go b/internal/syncbranch/syncbranch_test.go
index 7c69e9dc..07cef909 100644
--- a/internal/syncbranch/syncbranch_test.go
+++ b/internal/syncbranch/syncbranch_test.go
@@ -200,12 +200,12 @@ func TestUnset(t *testing.T) {
 	t.Run("removes config value", func(t *testing.T) {
 		store := newTestStore(t)
 		defer store.Close()
-
+
 		// Set a value first
 		if err := Set(ctx, store, "beads-metadata"); err != nil {
 			t.Fatalf("Set() error = %v", err)
 		}
-
+
 		// Verify it's set
 		value, err := store.GetConfig(ctx, ConfigKey)
 		if err != nil {
@@ -214,12 +214,12 @@ func TestUnset(t *testing.T) {
 		if value != "beads-metadata" {
 			t.Errorf("GetConfig() = %q, want %q", value, "beads-metadata")
 		}
-
+
 		// Unset it
 		if err := Unset(ctx, store); err != nil {
 			t.Fatalf("Unset() error = %v", err)
 		}
-
+
 		// Verify it's gone
 		value, err = store.GetConfig(ctx, ConfigKey)
 		if err != nil {
@@ -230,152 +230,3 @@ func TestUnset(t *testing.T) {
 		}
 	})
 }
-
-func TestGetFromYAML(t *testing.T) {
-	// Save and restore any existing env var
-	origEnv := os.Getenv(EnvVar)
-	defer os.Setenv(EnvVar, origEnv)
-
-	t.Run("returns empty when nothing configured", func(t *testing.T) {
-		os.Unsetenv(EnvVar)
-		branch := GetFromYAML()
-		// GetFromYAML checks env var first, then config.yaml
-		// Without env var set, it should return what's in config.yaml (or empty)
-		// We can't easily mock config.yaml here, so just verify no panic
-		_ = branch
-	})
-
-	t.Run("returns env var value when set", func(t *testing.T) {
-		os.Setenv(EnvVar, "env-sync-branch")
-		defer os.Unsetenv(EnvVar)
-
-		branch := GetFromYAML()
-		if branch != "env-sync-branch" {
-			t.Errorf("GetFromYAML() = %q, want %q", branch, "env-sync-branch")
-		}
-	})
-}
-
-func TestIsConfigured(t *testing.T) {
-	// Save and restore any existing env var
-	origEnv := os.Getenv(EnvVar)
-	defer os.Setenv(EnvVar, origEnv)
-
-	t.Run("returns true when env var is set", func(t *testing.T) {
-		os.Setenv(EnvVar, "test-branch")
-		defer os.Unsetenv(EnvVar)
-
-		if !IsConfigured() {
-			t.Error("IsConfigured() = false when env var is set, want true")
-		}
-	})
-
-	t.Run("behavior with no env var", func(t *testing.T) {
-		os.Unsetenv(EnvVar)
-		// Just verify no panic - actual value depends on config.yaml
-		_ = IsConfigured()
-	})
-}
-
-func TestIsConfiguredWithDB(t *testing.T) {
-	// Save and restore any existing env var
-	origEnv := os.Getenv(EnvVar)
-	defer os.Setenv(EnvVar, origEnv)
-
-	t.Run("returns true when env var is set", func(t *testing.T) {
-		os.Setenv(EnvVar, "test-branch")
-		defer os.Unsetenv(EnvVar)
-
-		if !IsConfiguredWithDB("") {
-			t.Error("IsConfiguredWithDB() = false when env var is set, want true")
-		}
-	})
-
-	t.Run("returns false for nonexistent database", func(t *testing.T) {
-		os.Unsetenv(EnvVar)
-
-		result := IsConfiguredWithDB("/nonexistent/path/beads.db")
-		// Should return false because db doesn't exist
-		if result {
-			t.Error("IsConfiguredWithDB() = true for nonexistent db, want false")
-		}
-	})
-
-	t.Run("returns false for empty path with no db found", func(t *testing.T) {
-		os.Unsetenv(EnvVar)
-		// When empty path is passed and beads.FindDatabasePath() returns empty,
-		// IsConfiguredWithDB should return false
-		// This tests the code path where dbPath is empty
-		tmpDir, _ := os.MkdirTemp("", "test-no-beads-*")
-		defer os.RemoveAll(tmpDir)
-
-		origWd, _ := os.Getwd()
-		os.Chdir(tmpDir)
-		defer os.Chdir(origWd)
-
-		result := IsConfiguredWithDB("")
-		// Should return false because no database exists
-		if result {
-			t.Error("IsConfiguredWithDB('') with no db = true, want false")
-		}
-	})
-}
-
-func TestGetConfigFromDB(t *testing.T) {
-	t.Run("returns empty for nonexistent database", func(t *testing.T) {
-		result := getConfigFromDB("/nonexistent/path/beads.db", ConfigKey)
-		if result != "" {
-			t.Errorf("getConfigFromDB() for nonexistent db = %q, want empty", result)
-		}
-	})
-
-	t.Run("returns empty when key not found", func(t *testing.T) {
-		// Create a temporary database
-		tmpDir, _ := os.MkdirTemp("", "test-beads-db-*")
-		defer os.RemoveAll(tmpDir)
-		dbPath := tmpDir + "/beads.db"
-
-		// Create a valid SQLite database with the config table
-		store, err := sqlite.New(context.Background(), "file:"+dbPath)
-		if err != nil {
-			t.Fatalf("Failed to create test database: %v", err)
-		}
-		store.Close()
-
-		result := getConfigFromDB(dbPath, "nonexistent.key")
-		if result != "" {
-			t.Errorf("getConfigFromDB() for missing key = %q, want empty", result)
-		}
-	})
-
-	t.Run("returns value when key exists", func(t *testing.T) {
-		// Create a temporary database
-		tmpDir, _ := os.MkdirTemp("", "test-beads-db-*")
-		defer os.RemoveAll(tmpDir)
-		dbPath := tmpDir + "/beads.db"
-
-		// Create a valid SQLite database with the config table
-		ctx := context.Background()
-		// Use the same connection string format as getConfigFromDB expects
-		store, err := sqlite.New(ctx, "file:"+dbPath+"?_journal_mode=DELETE")
-		if err != nil {
-			t.Fatalf("Failed to create test database: %v", err)
-		}
-		// Set issue_prefix first (required by storage)
-		if err := store.SetConfig(ctx, "issue_prefix", "bd"); err != nil {
-			store.Close()
-			t.Fatalf("Failed to set issue_prefix: %v", err)
-		}
-		// Set the config value we're testing
-		if err := store.SetConfig(ctx, ConfigKey, "test-sync-branch"); err != nil {
-			store.Close()
-			t.Fatalf("Failed to set config: %v", err)
-		}
-		store.Close()
-
-		result := getConfigFromDB(dbPath, ConfigKey)
-		if result != "test-sync-branch" {
-			t.Errorf("getConfigFromDB() = %q, want %q", result, "test-sync-branch")
-		}
-	})
-}
diff --git a/internal/syncbranch/worktree_helpers_test.go b/internal/syncbranch/worktree_helpers_test.go
deleted file mode 100644
index 44fb8984..00000000
--- a/internal/syncbranch/worktree_helpers_test.go
+++ /dev/null
@@ -1,716 +0,0 @@
-package syncbranch
-
-import (
-	"context"
-	"os"
-	"os/exec"
-	"path/filepath"
-	"strings"
-	"testing"
-)
-
-// TestIsNonFastForwardError tests the non-fast-forward error detection
-func TestIsNonFastForwardError(t *testing.T) {
-	tests := []struct {
-		name   string
-		output string
-		want   bool
-	}{
-		{
-			name:   "non-fast-forward message",
-			output: "error: failed to push some refs to 'origin'\n! [rejected] main -> main (non-fast-forward)",
-			want:   true,
-		},
-		{
-			name:   "fetch first message",
-			output: "error: failed to push some refs to 'origin'\nhint: Updates were rejected because the remote contains work that you do\nhint: not have locally. This is usually caused by another repository pushing\nhint: to the same ref. You may want to first integrate the remote changes\nhint: (e.g., 'git pull ...') before pushing again.\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\nfetch first",
-			want:   true,
-		},
-		{
-			name:   "rejected behind message",
-			output: "To github.com:user/repo.git\n! [rejected] main -> main (non-fast-forward)\nerror: failed to push some refs\nhint: rejected because behind remote",
-			want:   true,
-		},
-		{
-			name:   "normal push success",
-			output: "Everything up-to-date",
-			want:   false,
-		},
-		{
-			name:   "authentication error",
-			output: "fatal: Authentication failed for 'https://github.com/user/repo.git/'",
-			want:   false,
-		},
-		{
-			name:   "permission denied",
-			output: "ERROR: Permission to user/repo.git denied to user.",
-			want:   false,
-		},
-		{
-			name:   "empty output",
-			output: "",
-			want:   false,
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			got := isNonFastForwardError(tt.output)
-			if got != tt.want {
-				t.Errorf("isNonFastForwardError(%q) = %v, want %v", tt.output, got, tt.want)
-			}
-		})
-	}
-}
-
-// TestHasChangesInWorktree tests change detection in worktree
-func TestHasChangesInWorktree(t *testing.T) {
-	if testing.Short() {
-		t.Skip("skipping integration test in short mode")
-	}
-
-	ctx := context.Background()
-
-	t.Run("no changes in clean worktree", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl")
-		writeFile(t, jsonlPath, `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		hasChanges, err := hasChangesInWorktree(ctx, repoDir, jsonlPath)
-		if err != nil {
-			t.Fatalf("hasChangesInWorktree() error = %v", err)
-		}
-		if hasChanges {
-			t.Error("hasChangesInWorktree() = true for clean worktree, want false")
-		}
-	})
-
-	t.Run("detects uncommitted changes", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl")
-		writeFile(t, jsonlPath, `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		// Modify file without committing
-		writeFile(t, jsonlPath, `{"id":"test-1"}`+"\n"+`{"id":"test-2"}`)
-
-		hasChanges, err := hasChangesInWorktree(ctx, repoDir, jsonlPath)
-		if err != nil {
-			t.Fatalf("hasChangesInWorktree() error = %v", err)
-		}
-		if !hasChanges {
-			t.Error("hasChangesInWorktree() = false with uncommitted changes, want true")
-		}
-	})
-
-	t.Run("detects new untracked files", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl")
-		writeFile(t, jsonlPath, `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		// Add new file in .beads
-		writeFile(t, filepath.Join(repoDir, ".beads", "metadata.json"), `{}`)
-
-		hasChanges, err := hasChangesInWorktree(ctx, repoDir, jsonlPath)
-		if err != nil {
-			t.Fatalf("hasChangesInWorktree() error = %v", err)
-		}
-		if !hasChanges {
-			t.Error("hasChangesInWorktree() = false with new file, want true")
-		}
-	})
-
-	t.Run("handles file outside .beads dir", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		jsonlPath := filepath.Join(repoDir, "issues.jsonl") // Not in .beads
-		writeFile(t, jsonlPath, `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		// Modify file
-		writeFile(t, jsonlPath, `{"id":"test-1"}`+"\n"+`{"id":"test-2"}`)
-
-		hasChanges, err := hasChangesInWorktree(ctx, repoDir, jsonlPath)
-		if err != nil {
-			t.Fatalf("hasChangesInWorktree() error = %v", err)
-		}
-		if !hasChanges {
-			t.Error("hasChangesInWorktree() = false with modified file outside .beads, want true")
-		}
-	})
-}
-
-// TestCommitInWorktree tests committing changes in worktree
-func TestCommitInWorktree(t *testing.T) {
-	if testing.Short() {
-		t.Skip("skipping integration test in short mode")
-	}
-
-	ctx := context.Background()
-
-	t.Run("commits staged changes", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl")
-		writeFile(t, jsonlPath, `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		// Modify file
-		writeFile(t, jsonlPath, `{"id":"test-1"}`+"\n"+`{"id":"test-2"}`)
-
-		// Commit using our function
-		err := commitInWorktree(ctx, repoDir, ".beads/issues.jsonl", "test commit message")
-		if err != nil {
-			t.Fatalf("commitInWorktree() error = %v", err)
-		}
-
-		// Verify commit was made
-		output := getGitOutput(t, repoDir, "log", "-1", "--format=%s")
-		if !strings.Contains(output, "test commit message") {
-			t.Errorf("commit message = %q, want to contain 'test commit message'", output)
-		}
-	})
-
-	t.Run("commits entire .beads directory", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl")
-		writeFile(t, jsonlPath, `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		// Add multiple files
-		writeFile(t, filepath.Join(repoDir, ".beads", "metadata.json"), `{"version":"1"}`)
-		writeFile(t, jsonlPath, `{"id":"test-1"}`+"\n"+`{"id":"test-2"}`)
-
-		err := commitInWorktree(ctx, repoDir, ".beads/issues.jsonl", "multi-file commit")
-		if err != nil {
-			t.Fatalf("commitInWorktree() error = %v", err)
-		}
-
-		// Verify both files were committed
-		output := getGitOutput(t, repoDir, "diff", "--name-only", "HEAD~1")
-		if !strings.Contains(output, "issues.jsonl") {
-			t.Error("issues.jsonl not in commit")
-		}
-		if !strings.Contains(output, "metadata.json") {
-			t.Error("metadata.json not in commit")
-		}
-	})
-}
-
-// TestCopyJSONLToMainRepo tests copying JSONL between worktree and main repo
-func TestCopyJSONLToMainRepo(t *testing.T) {
-	t.Run("copies JSONL file successfully", func(t *testing.T) {
-		// Setup worktree directory
-		worktreeDir, _ := os.MkdirTemp("", "test-worktree-*")
-		defer os.RemoveAll(worktreeDir)
-
-		// Setup main repo directory
-		mainRepoDir, _ := os.MkdirTemp("", "test-mainrepo-*")
-		defer os.RemoveAll(mainRepoDir)
-
-		// Create .beads directories
-		os.MkdirAll(filepath.Join(worktreeDir, ".beads"), 0750)
-		os.MkdirAll(filepath.Join(mainRepoDir, ".beads"), 0750)
-
-		// Write content to worktree JSONL
-		worktreeContent := `{"id":"test-1","title":"Test Issue"}`
-		if err := os.WriteFile(filepath.Join(worktreeDir, ".beads", "issues.jsonl"), []byte(worktreeContent), 0600); err != nil {
-			t.Fatalf("Failed to write worktree JSONL: %v", err)
-		}
-
-		mainJSONLPath := filepath.Join(mainRepoDir, ".beads", "issues.jsonl")
-
-		err := copyJSONLToMainRepo(worktreeDir, ".beads/issues.jsonl", mainJSONLPath)
-		if err != nil {
-			t.Fatalf("copyJSONLToMainRepo() error = %v", err)
-		}
-
-		// Verify content was copied
-		copied, err := os.ReadFile(mainJSONLPath)
-		if err != nil {
-			t.Fatalf("Failed to read copied file: %v", err)
-		}
-		if string(copied) != worktreeContent {
-			t.Errorf("copied content = %q, want %q", string(copied), worktreeContent)
-		}
-	})
-
-	t.Run("returns nil when worktree JSONL does not exist", func(t *testing.T) {
-		worktreeDir, _ := os.MkdirTemp("", "test-worktree-*")
-		defer os.RemoveAll(worktreeDir)
-
-		mainRepoDir, _ := os.MkdirTemp("", "test-mainrepo-*")
-		defer os.RemoveAll(mainRepoDir)
-
-		mainJSONLPath := filepath.Join(mainRepoDir, ".beads", "issues.jsonl")
-
-		err := copyJSONLToMainRepo(worktreeDir, ".beads/issues.jsonl", mainJSONLPath)
-		if err != nil {
-			t.Errorf("copyJSONLToMainRepo() for nonexistent file = %v, want nil", err)
-		}
-	})
-
-	t.Run("also copies metadata.json if present", func(t *testing.T) {
-		worktreeDir, _ := os.MkdirTemp("", "test-worktree-*")
-		defer os.RemoveAll(worktreeDir)
-
-		mainRepoDir, _ := os.MkdirTemp("", "test-mainrepo-*")
-		defer os.RemoveAll(mainRepoDir)
-
-		// Create .beads directories
-		os.MkdirAll(filepath.Join(worktreeDir, ".beads"), 0750)
-		os.MkdirAll(filepath.Join(mainRepoDir, ".beads"), 0750)
-
-		// Write JSONL and metadata to worktree
-		if err := os.WriteFile(filepath.Join(worktreeDir, ".beads", "issues.jsonl"), []byte(`{}`), 0600); err != nil {
-			t.Fatalf("Failed to write worktree JSONL: %v", err)
-		}
-		metadataContent := `{"prefix":"bd"}`
-		if err := os.WriteFile(filepath.Join(worktreeDir, ".beads", "metadata.json"), []byte(metadataContent), 0600); err != nil {
-			t.Fatalf("Failed to write metadata: %v", err)
-		}
-
-		mainJSONLPath := filepath.Join(mainRepoDir, ".beads", "issues.jsonl")
-
-		err := copyJSONLToMainRepo(worktreeDir, ".beads/issues.jsonl", mainJSONLPath)
-		if err != nil {
-			t.Fatalf("copyJSONLToMainRepo() error = %v", err)
-		}
-
-		// Verify metadata was also copied
-		metadata, err := os.ReadFile(filepath.Join(mainRepoDir, ".beads", "metadata.json"))
-		if err != nil {
-			t.Fatalf("Failed to read metadata: %v", err)
-		}
-		if string(metadata) != metadataContent {
-			t.Errorf("metadata content = %q, want %q", string(metadata), metadataContent)
-		}
-	})
-}
-
-// TestGetRemoteForBranch tests remote detection for branches
-func TestGetRemoteForBranch(t *testing.T) {
-	if testing.Short() {
-		t.Skip("skipping integration test in short mode")
-	}
-
-	ctx := context.Background()
-
-	t.Run("returns origin as default", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		remote := getRemoteForBranch(ctx, repoDir, "nonexistent-branch")
-		if remote != "origin" {
-			t.Errorf("getRemoteForBranch() = %q, want 'origin'", remote)
-		}
-	})
-
-	t.Run("returns configured remote", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		// Configure a custom remote for a branch
-		runGit(t, repoDir, "config", "branch.test-branch.remote", "upstream")
-
-		remote := getRemoteForBranch(ctx, repoDir, "test-branch")
-		if remote != "upstream" {
-			t.Errorf("getRemoteForBranch() = %q, want 'upstream'", remote)
-		}
-	})
-}
-
-// TestGetRepoRoot tests repository root detection
-func TestGetRepoRoot(t *testing.T) {
-	if testing.Short() {
-		t.Skip("skipping integration test in short mode")
-	}
-
-	ctx := context.Background()
-
-	t.Run("returns repo root for regular repository", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		// Change to repo directory
-		origWd, _ := os.Getwd()
-		os.Chdir(repoDir)
-		defer os.Chdir(origWd)
-
-		root, err := GetRepoRoot(ctx)
-		if err != nil {
-			t.Fatalf("GetRepoRoot() error = %v", err)
-		}
-
-		// Resolve symlinks for comparison
-		expectedRoot, _ := filepath.EvalSymlinks(repoDir)
-		actualRoot, _ := filepath.EvalSymlinks(root)
-
-		if actualRoot != expectedRoot {
-			t.Errorf("GetRepoRoot() = %q, want %q", actualRoot, expectedRoot)
-		}
-	})
-
-	t.Run("returns error for non-git directory", func(t *testing.T) {
-		tmpDir, _ := os.MkdirTemp("", "non-git-*")
-		defer os.RemoveAll(tmpDir)
-
-		origWd, _ := os.Getwd()
-		os.Chdir(tmpDir)
-		defer os.Chdir(origWd)
-
-		_, err := GetRepoRoot(ctx)
-		if err == nil {
-			t.Error("GetRepoRoot() expected error for non-git directory")
-		}
-	})
-
-	t.Run("returns repo root from subdirectory", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
-		// Create initial commit
-		writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`)
-		runGit(t, repoDir, "add", ".")
-		runGit(t, repoDir, "commit", "-m", "initial")
-
-		// Create and change to subdirectory
-		subDir := filepath.Join(repoDir, "subdir", "nested")
-		os.MkdirAll(subDir, 0750)
-
-		origWd, _ := os.Getwd()
-		os.Chdir(subDir)
-		defer os.Chdir(origWd)
-
-		root, err := GetRepoRoot(ctx)
-		if err != nil {
-			t.Fatalf("GetRepoRoot() error = %v", err)
-		}
-
-		// Resolve symlinks for comparison
-		expectedRoot, _ := filepath.EvalSymlinks(repoDir)
-		actualRoot, _ := filepath.EvalSymlinks(root)
-
-		if actualRoot != expectedRoot {
-			t.Errorf("GetRepoRoot() from subdirectory = %q, want %q", actualRoot, expectedRoot)
-		}
-	})
-
-	t.Run("handles worktree correctly", func(t *testing.T) {
-		// Create main repo
-		mainRepoDir := setupTestRepo(t)
-		defer os.RemoveAll(mainRepoDir)
-
-		// Create initial commit
-		writeFile(t, filepath.Join(mainRepoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`)
-		runGit(t, mainRepoDir, "add", ".")
-		runGit(t, mainRepoDir, "commit", "-m", "initial")
-
-		// Create a worktree
-		worktreeDir, _ := os.MkdirTemp("", "test-worktree-*")
-		defer os.RemoveAll(worktreeDir)
-		runGit(t, mainRepoDir, "worktree", "add", worktreeDir, "-b", "feature")
-
-		// Test from worktree - should return main repo root
-		origWd, _ := os.Getwd()
-		os.Chdir(worktreeDir)
-		defer os.Chdir(origWd)
-
-		root, err := GetRepoRoot(ctx)
-		if err != nil {
-			t.Fatalf("GetRepoRoot() from worktree error = %v", err)
-		}
-
-		// Should return the main repo root, not the worktree
-		expectedRoot, _ := filepath.EvalSymlinks(mainRepoDir)
-		actualRoot, _ := filepath.EvalSymlinks(root)
-
-		if actualRoot != expectedRoot {
-			t.Errorf("GetRepoRoot() from worktree = %q, want main repo %q", actualRoot, expectedRoot)
-		}
-	})
-}
-
-// TestHasGitRemote tests remote detection
-func TestHasGitRemote(t *testing.T) {
-	if testing.Short() {
-		t.Skip("skipping integration test in short mode")
-	}
-
-	ctx := context.Background()
-
-	t.Run("returns false for repo without remote", func(t *testing.T) {
-		repoDir := setupTestRepo(t)
-		defer os.RemoveAll(repoDir)
-
- // Create initial commit - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - - origWd, _ := os.Getwd() - os.Chdir(repoDir) - defer os.Chdir(origWd) - - if HasGitRemote(ctx) { - t.Error("HasGitRemote() = true for repo without remote, want false") - } - }) - - t.Run("returns true for repo with remote", func(t *testing.T) { - repoDir := setupTestRepo(t) - defer os.RemoveAll(repoDir) - - // Create initial commit - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - - // Add a remote - runGit(t, repoDir, "remote", "add", "origin", "https://github.com/test/repo.git") - - origWd, _ := os.Getwd() - os.Chdir(repoDir) - defer os.Chdir(origWd) - - if !HasGitRemote(ctx) { - t.Error("HasGitRemote() = false for repo with remote, want true") - } - }) - - t.Run("returns false for non-git directory", func(t *testing.T) { - tmpDir, _ := os.MkdirTemp("", "non-git-*") - defer os.RemoveAll(tmpDir) - - origWd, _ := os.Getwd() - os.Chdir(tmpDir) - defer os.Chdir(origWd) - - if HasGitRemote(ctx) { - t.Error("HasGitRemote() = true for non-git directory, want false") - } - }) -} - -// TestGetCurrentBranch tests current branch detection -func TestGetCurrentBranch(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - ctx := context.Background() - - t.Run("returns current branch name", func(t *testing.T) { - repoDir := setupTestRepo(t) - defer os.RemoveAll(repoDir) - - // Create initial commit - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - - origWd, _ := os.Getwd() - os.Chdir(repoDir) - defer os.Chdir(origWd) - - branch, err := GetCurrentBranch(ctx) - if err != nil { - t.Fatalf("GetCurrentBranch() error = 
%v", err) - } - - // The default branch is usually "master" or "main" depending on git config - if branch != "master" && branch != "main" { - // Could also be a user-defined default, just verify it's not empty - if branch == "" { - t.Error("GetCurrentBranch() returned empty string") - } - } - }) - - t.Run("returns correct branch after checkout", func(t *testing.T) { - repoDir := setupTestRepo(t) - defer os.RemoveAll(repoDir) - - // Create initial commit - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - - // Create and checkout new branch - runGit(t, repoDir, "checkout", "-b", "feature-branch") - - origWd, _ := os.Getwd() - os.Chdir(repoDir) - defer os.Chdir(origWd) - - branch, err := GetCurrentBranch(ctx) - if err != nil { - t.Fatalf("GetCurrentBranch() error = %v", err) - } - - if branch != "feature-branch" { - t.Errorf("GetCurrentBranch() = %q, want 'feature-branch'", branch) - } - }) -} - -// TestFormatVanishedIssues tests the forensic logging formatter -func TestFormatVanishedIssues(t *testing.T) { - t.Run("formats vanished issues correctly", func(t *testing.T) { - localIssues := map[string]issueSummary{ - "bd-1": {ID: "bd-1", Title: "First Issue"}, - "bd-2": {ID: "bd-2", Title: "Second Issue"}, - "bd-3": {ID: "bd-3", Title: "Third Issue"}, - } - mergedIssues := map[string]issueSummary{ - "bd-1": {ID: "bd-1", Title: "First Issue"}, - } - - lines := formatVanishedIssues(localIssues, mergedIssues, 3, 1) - - // Should contain header - found := false - for _, line := range lines { - if strings.Contains(line, "Mass deletion forensic log") { - found = true - break - } - } - if !found { - t.Error("formatVanishedIssues() missing header") - } - - // Should list vanished issues - foundBd2 := false - foundBd3 := false - for _, line := range lines { - if strings.Contains(line, "bd-2") { - foundBd2 = true - } - if strings.Contains(line, "bd-3") { - foundBd3 = 
true - } - } - if !foundBd2 || !foundBd3 { - t.Errorf("formatVanishedIssues() missing vanished issues: bd-2=%v, bd-3=%v", foundBd2, foundBd3) - } - - // Should show totals - foundTotal := false - for _, line := range lines { - if strings.Contains(line, "Total vanished: 2") { - foundTotal = true - break - } - } - if !foundTotal { - t.Error("formatVanishedIssues() missing total count") - } - }) - - t.Run("truncates long titles", func(t *testing.T) { - longTitle := strings.Repeat("A", 100) - localIssues := map[string]issueSummary{ - "bd-1": {ID: "bd-1", Title: longTitle}, - } - mergedIssues := map[string]issueSummary{} - - lines := formatVanishedIssues(localIssues, mergedIssues, 1, 0) - - // Find the line with bd-1 and check title is truncated - for _, line := range lines { - if strings.Contains(line, "bd-1") { - if len(line) > 80 { // Line should be reasonably short - // Verify it ends with "..." - if !strings.Contains(line, "...") { - t.Error("formatVanishedIssues() should truncate long titles with '...'") - } - } - break - } - } - }) -} - -// TestCheckDivergence tests the public CheckDivergence function -func TestCheckDivergence(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - ctx := context.Background() - - t.Run("returns no divergence when remote does not exist", func(t *testing.T) { - repoDir := setupTestRepo(t) - defer os.RemoveAll(repoDir) - - // Create initial commit - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - - // Add remote but don't create the branch on it - runGit(t, repoDir, "remote", "add", "origin", repoDir) // Use self as remote - - info, err := CheckDivergence(ctx, repoDir, "beads-sync") - if err != nil { - // Expected to fail since remote branch doesn't exist - return - } - - // If it succeeds, verify no divergence - if info.IsDiverged { - t.Error("CheckDivergence() should not 
report divergence when remote doesn't exist") - } - }) -} - -// helper to run git with error handling (already exists but needed for this file) -func runGitHelper(t *testing.T, dir string, args ...string) { - t.Helper() - cmd := exec.Command("git", args...) - cmd.Dir = dir - output, err := cmd.CombinedOutput() - if err != nil { - t.Fatalf("git %v failed: %v\n%s", args, err, output) - } -} diff --git a/internal/syncbranch/worktree_sync_test.go b/internal/syncbranch/worktree_sync_test.go deleted file mode 100644 index 038738b9..00000000 --- a/internal/syncbranch/worktree_sync_test.go +++ /dev/null @@ -1,416 +0,0 @@ -package syncbranch - -import ( - "context" - "os" - "os/exec" - "path/filepath" - "strings" - "testing" - "time" -) - -// TestCommitToSyncBranch tests the main commit function -func TestCommitToSyncBranch(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - ctx := context.Background() - - t.Run("commits changes to sync branch", func(t *testing.T) { - // Setup: create a repo with a sync branch - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl") - - // Create sync branch - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, jsonlPath, `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial sync branch commit") - runGit(t, repoDir, "checkout", "master") - - // Write new content to commit - writeFile(t, jsonlPath, `{"id":"test-1"}`+"\n"+`{"id":"test-2"}`) - - result, err := CommitToSyncBranch(ctx, repoDir, syncBranch, jsonlPath, false) - if err != nil { - t.Fatalf("CommitToSyncBranch() error = %v", err) - } - - if !result.Committed { - t.Error("CommitToSyncBranch() Committed = false, want true") - } - if result.Branch != syncBranch { - t.Errorf("CommitToSyncBranch() Branch = %q, want %q", result.Branch, syncBranch) - } - if 
!strings.Contains(result.Message, "bd sync:") { - t.Errorf("CommitToSyncBranch() Message = %q, want to contain 'bd sync:'", result.Message) - } - }) - - t.Run("returns not committed when no changes", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl") - - // Create sync branch with content - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, jsonlPath, `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - runGit(t, repoDir, "checkout", "master") - - // Write the same content that's in the sync branch - writeFile(t, jsonlPath, `{"id":"test-1"}`) - - // Commit with same content (no changes) - result, err := CommitToSyncBranch(ctx, repoDir, syncBranch, jsonlPath, false) - if err != nil { - t.Fatalf("CommitToSyncBranch() error = %v", err) - } - - if result.Committed { - t.Error("CommitToSyncBranch() Committed = true when no changes, want false") - } - }) -} - -// TestPullFromSyncBranch tests pulling changes from sync branch -func TestPullFromSyncBranch(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - ctx := context.Background() - - t.Run("handles sync branch not on remote", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl") - - // Create local sync branch but don't set up remote tracking - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, jsonlPath, `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "local sync") - runGit(t, repoDir, "checkout", "master") - - // Pull should handle the case where remote doesn't have the branch - result, err := PullFromSyncBranch(ctx, repoDir, syncBranch, jsonlPath, false) - // This tests the fetch failure path 
since "origin" points to self without the sync branch - // It should either succeed (not pulled) or fail gracefully - if err != nil { - // Expected - fetch will fail since origin doesn't have sync branch - return - } - if result.Pulled && !result.FastForwarded && !result.Merged { - // Pulled but no change - acceptable - _ = result - } - }) - - t.Run("pulls when already up to date", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl") - - // Create sync branch and simulate it being tracked - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, jsonlPath, `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "sync commit") - // Set up a fake remote ref at the same commit - runGit(t, repoDir, "update-ref", "refs/remotes/origin/"+syncBranch, "HEAD") - runGit(t, repoDir, "checkout", "master") - - // Pull when already at remote HEAD - result, err := PullFromSyncBranch(ctx, repoDir, syncBranch, jsonlPath, false) - if err != nil { - // Might fail on fetch step, that's acceptable - return - } - // Should have pulled successfully (even if no new content) - if result.Pulled { - // Good - it recognized it's up to date - } - }) - - t.Run("copies JSONL to main repo after sync", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl") - - // Create sync branch with content - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, jsonlPath, `{"id":"sync-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "sync commit") - runGit(t, repoDir, "update-ref", "refs/remotes/origin/"+syncBranch, "HEAD") - runGit(t, repoDir, "checkout", "master") - - // Remove local JSONL to verify it gets copied back - os.Remove(jsonlPath) - - result, err := 
PullFromSyncBranch(ctx, repoDir, syncBranch, jsonlPath, false) - if err != nil { - return // Acceptable in test env - } - - if result.Pulled { - // Verify JSONL was copied to main repo - if _, err := os.Stat(jsonlPath); os.IsNotExist(err) { - t.Error("PullFromSyncBranch() did not copy JSONL to main repo") - } - } - }) - - t.Run("handles fast-forward case", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl") - - // Create sync branch with base commit - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, jsonlPath, `{"id":"base"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "base") - baseCommit := strings.TrimSpace(getGitOutput(t, repoDir, "rev-parse", "HEAD")) - - // Add another commit and set as remote - writeFile(t, jsonlPath, `{"id":"base"}`+"\n"+`{"id":"remote"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "remote commit") - runGit(t, repoDir, "update-ref", "refs/remotes/origin/"+syncBranch, "HEAD") - - // Reset back to base (so remote is ahead) - runGit(t, repoDir, "reset", "--hard", baseCommit) - runGit(t, repoDir, "checkout", "master") - - // Pull should fast-forward - result, err := PullFromSyncBranch(ctx, repoDir, syncBranch, jsonlPath, false) - if err != nil { - return // Acceptable with self-remote - } - - // Just verify result is populated correctly - _ = result.FastForwarded - _ = result.Merged - }) -} - -// TestResetToRemote tests resetting sync branch to remote state -// Note: Full remote tests are in cmd/bd tests; this tests the basic flow -func TestResetToRemote(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - ctx := context.Background() - - t.Run("returns error when fetch fails", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - 
jsonlPath := filepath.Join(repoDir, ".beads", "issues.jsonl") - - // Create local sync branch without remote - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, jsonlPath, `{"id":"local-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "local commit") - runGit(t, repoDir, "checkout", "master") - - // ResetToRemote should fail since remote branch doesn't exist - err := ResetToRemote(ctx, repoDir, syncBranch, jsonlPath) - if err == nil { - // If it succeeds without remote, that's also acceptable - // (the remote is set to self, might not have sync branch) - } - }) -} - -// TestPushSyncBranch tests the push function -// Note: Full push tests require actual remote; this tests basic error handling -func TestPushSyncBranch(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - ctx := context.Background() - - t.Run("handles missing worktree gracefully", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - - // Create sync branch - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - runGit(t, repoDir, "checkout", "master") - - // PushSyncBranch should handle the worktree creation - err := PushSyncBranch(ctx, repoDir, syncBranch) - // Will fail because origin doesn't have the branch, but should not panic - if err != nil { - // Expected - push will fail since origin doesn't have the branch set up - if !strings.Contains(err.Error(), "push failed") { - // Some other error - acceptable in test env - } - } - }) -} - -// TestRunCmdWithTimeoutMessage tests the timeout message function -func TestRunCmdWithTimeoutMessage(t *testing.T) { - ctx := context.Background() - - t.Run("runs command and returns output", func(t *testing.T) { - cmd := exec.CommandContext(ctx, 
"echo", "hello") - output, err := runCmdWithTimeoutMessage(ctx, "test message", 5*time.Second, cmd) - if err != nil { - t.Fatalf("runCmdWithTimeoutMessage() error = %v", err) - } - if !strings.Contains(string(output), "hello") { - t.Errorf("runCmdWithTimeoutMessage() output = %q, want to contain 'hello'", output) - } - }) - - t.Run("returns error for failing command", func(t *testing.T) { - cmd := exec.CommandContext(ctx, "false") // Always exits with 1 - _, err := runCmdWithTimeoutMessage(ctx, "test message", 5*time.Second, cmd) - if err == nil { - t.Error("runCmdWithTimeoutMessage() expected error for failing command") - } - }) -} - -// TestPreemptiveFetchAndFastForward tests the pre-emptive fetch function -func TestPreemptiveFetchAndFastForward(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - ctx := context.Background() - - t.Run("returns nil when remote branch does not exist", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - // Create sync branch locally but don't push - runGit(t, repoDir, "checkout", "-b", "beads-sync") - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - - err := preemptiveFetchAndFastForward(ctx, repoDir, "beads-sync", "origin") - if err != nil { - t.Errorf("preemptiveFetchAndFastForward() error = %v, want nil (not an error when remote doesn't exist)", err) - } - }) - - t.Run("no-op when local equals remote", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - - // Create sync branch - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - // Set remote ref at same commit - runGit(t, repoDir, "update-ref", 
"refs/remotes/origin/"+syncBranch, "HEAD") - - err := preemptiveFetchAndFastForward(ctx, repoDir, syncBranch, "origin") - // Should succeed since we're already in sync - if err != nil { - // Might fail on fetch step with self-remote, acceptable - return - } - }) -} - -// TestFetchAndRebaseInWorktree tests the fetch and rebase function -func TestFetchAndRebaseInWorktree(t *testing.T) { - if testing.Short() { - t.Skip("skipping integration test in short mode") - } - - ctx := context.Background() - - t.Run("returns error when fetch fails", func(t *testing.T) { - repoDir := setupTestRepoWithRemote(t) - defer os.RemoveAll(repoDir) - - syncBranch := "beads-sync" - - // Create sync branch locally - runGit(t, repoDir, "checkout", "-b", syncBranch) - writeFile(t, filepath.Join(repoDir, ".beads", "issues.jsonl"), `{"id":"test-1"}`) - runGit(t, repoDir, "add", ".") - runGit(t, repoDir, "commit", "-m", "initial") - - // fetchAndRebaseInWorktree should fail since remote doesn't have the branch - err := fetchAndRebaseInWorktree(ctx, repoDir, syncBranch, "origin") - if err == nil { - // If it succeeds, it means the test setup allowed it (self remote) - return - } - // Expected to fail - if !strings.Contains(err.Error(), "fetch failed") { - // Some other error - still acceptable - } - }) -} - -// Helper: setup a test repo with a (fake) remote -func setupTestRepoWithRemote(t *testing.T) string { - t.Helper() - - tmpDir, err := os.MkdirTemp("", "bd-test-repo-*") - if err != nil { - t.Fatalf("Failed to create temp dir: %v", err) - } - - // Initialize git repo - runGit(t, tmpDir, "init") - runGit(t, tmpDir, "config", "user.email", "test@test.com") - runGit(t, tmpDir, "config", "user.name", "Test User") - - // Create initial commit - writeFile(t, filepath.Join(tmpDir, "README.md"), "# Test Repo") - runGit(t, tmpDir, "add", ".") - runGit(t, tmpDir, "commit", "-m", "initial commit") - - // Create .beads directory - beadsDir := filepath.Join(tmpDir, ".beads") - if err := 
os.MkdirAll(beadsDir, 0750); err != nil { - os.RemoveAll(tmpDir) - t.Fatalf("Failed to create .beads dir: %v", err) - } - - // Add a fake remote (just for configuration purposes) - runGit(t, tmpDir, "remote", "add", "origin", tmpDir) - - return tmpDir -} - diff --git a/internal/types/types.go b/internal/types/types.go index a762134c..cf83d7aa 100644 --- a/internal/types/types.go +++ b/internal/types/types.go @@ -348,7 +348,7 @@ type Dependency struct { DependsOnID string `json:"depends_on_id"` Type DependencyType `json:"type"` CreatedAt time.Time `json:"created_at"` - CreatedBy string `json:"created_by,omitempty"` + CreatedBy string `json:"created_by"` // Metadata contains type-specific edge data (JSON blob) // Examples: similarity scores, approval details, skill proficiency Metadata string `json:"metadata,omitempty"` diff --git a/skills/beads/README.md b/skills/beads/README.md new file mode 100644 index 00000000..25a68484 --- /dev/null +++ b/skills/beads/README.md @@ -0,0 +1,109 @@ +# Beads Skill for Claude Code + +A comprehensive skill for using [beads](https://github.com/steveyegge/beads) (bd) issue tracking with Claude Code. 
+ +## What This Skill Does + +This skill teaches Claude Code how to use bd effectively for: +- **Multi-session work tracking** - Persistent memory across conversation compactions +- **Dependency management** - Graph-based issue relationships +- **Session handoff** - Writing notes that survive context resets +- **Molecules and wisps** (v0.34.0+) - Reusable work templates and ephemeral workflows + +## Installation + +Copy the `beads/` directory to your Claude Code skills location: + +```bash +# Global installation +cp -r beads ~/.claude/skills/ + +# Or project-local +cp -r beads .claude/skills/ +``` + +## When Claude Uses This Skill + +The skill activates when conversations involve: +- "multi-session", "complex dependencies", "resume after weeks" +- "project memory", "persistent context", "side quest tracking" +- Work that spans multiple days or compaction cycles +- Tasks too complex for simple TodoWrite lists + +## File Structure + +``` +beads/ +β”œβ”€β”€ SKILL.md # Main skill file (Claude reads this first) +β”œβ”€β”€ README.md # This file (for humans) +└── references/ # Detailed documentation (loaded on demand) + β”œβ”€β”€ BOUNDARIES.md # When to use bd vs TodoWrite + β”œβ”€β”€ CLI_BOOTSTRAP_ADMIN.md # CLI command reference + β”œβ”€β”€ DEPENDENCIES.md # Dependency semantics (A blocks B vs B blocks A) + β”œβ”€β”€ INTEGRATION_PATTERNS.md # TodoWrite and other tool integration + β”œβ”€β”€ ISSUE_CREATION.md # When and how to create issues + β”œβ”€β”€ MOLECULES.md # Protos, mols, wisps (v0.34.0+) + β”œβ”€β”€ PATTERNS.md # Common usage patterns + β”œβ”€β”€ RESUMABILITY.md # Writing notes for post-compaction recovery + β”œβ”€β”€ STATIC_DATA.md # Using bd for reference databases + β”œβ”€β”€ TROUBLESHOOTING.md # Common issues and fixes + └── WORKFLOWS.md # Step-by-step workflow guides +``` + +## Key Concepts + +### bd vs TodoWrite + +| Use bd when... | Use TodoWrite when... 
| +|----------------|----------------------| +| Work spans multiple sessions | Single-session tasks | +| Complex dependencies exist | Linear step-by-step work | +| Need to resume after weeks | Just need a quick checklist | +| Knowledge work with fuzzy boundaries | Clear, immediate tasks | + +### The Dependency Direction Trap + +`bd dep add A B` means **"A depends on B"** (B must complete before A can start). + +```bash +# Want: "Setup must complete before Implementation" +bd dep add implementation setup # βœ“ CORRECT +# NOT: bd dep add setup implementation # βœ— WRONG +``` + +### Surviving Compaction + +When Claude's context gets compacted, conversation history is lost but bd state survives. Write notes as if explaining to a future Claude with zero context: + +```bash +bd update issue-123 --notes "COMPLETED: JWT auth with RS256 +KEY DECISION: RS256 over HS256 for key rotation +IN PROGRESS: Password reset flow +NEXT: Implement rate limiting" +``` + +## Requirements + +- [bd CLI](https://github.com/steveyegge/beads) installed (`brew install steveyegge/beads/bd`) +- A git repository (bd requires git for sync) +- Initialized database (`bd init` in project root) + +## Version Compatibility + +- **v0.34.0+**: Full support including molecules, wisps, and cross-project dependencies +- **v0.15.0+**: Core functionality (dependencies, notes, status tracking) +- **Earlier versions**: Basic functionality but some features may be missing + +## Contributing + +This skill is maintained at [github.com/steveyegge/beads](https://github.com/steveyegge/beads) in the `skills/beads/` directory. 
+ +Issues and PRs welcome for: +- Documentation improvements +- New workflow patterns +- Bug fixes in examples +- Additional troubleshooting scenarios + +## License + +MIT (same as beads) diff --git a/skills/beads/SKILL.md b/skills/beads/SKILL.md index dd138c10..18a64c18 100644 --- a/skills/beads/SKILL.md +++ b/skills/beads/SKILL.md @@ -1,824 +1,644 @@ --- name: beads -description: > - Tracks complex, multi-session work using the Beads issue tracker and dependency graphs, and provides - persistent memory that survives conversation compaction. Use when work spans multiple sessions, has - complex dependencies, or needs persistent context across compaction cycles. Trigger with phrases like - "create task for", "what's ready to work on", "show task", "track this work", "what's blocking", or - "update status". -allowed-tools: "Read,Bash(bd:*)" -version: "0.34.0" -author: "Steve Yegge " -license: "MIT" +description: Track complex, multi-session work with dependency graphs using beads issue tracker. Use when work spans multiple sessions, has complex dependencies, or requires persistent context across compaction cycles. For simple single-session linear tasks, TodoWrite remains appropriate. --- -# Beads - Persistent Task Memory for AI Agents - -Graph-based issue tracker that survives conversation compaction. Provides persistent memory for multi-session work with complex dependencies. +# Beads ## Overview -**bd (beads)** replaces markdown task lists with a dependency-aware graph stored in git. Unlike TodoWrite (session-scoped), bd persists across compactions and tracks complex dependencies. +bd is a graph-based issue tracker for persistent memory across sessions. Use for multi-session work with complex dependencies; use TodoWrite for simple single-session tasks. 
-**Key Distinction**: -- **bd**: Multi-session work, dependencies, survives compaction, git-backed -- **TodoWrite**: Single-session tasks, linear execution, conversation-scoped +## When to Use bd vs TodoWrite -**Core Capabilities**: -- πŸ“Š **Dependency Graphs**: Track what blocks what (blocks, parent-child, discovered-from, related) -- πŸ’Ύ **Compaction Survival**: Tasks persist when conversation history is compacted -- πŸ™ **Git Integration**: Issues versioned in `.beads/issues.jsonl`, sync with `bd sync` -- πŸ” **Smart Discovery**: Auto-finds ready work (`bd ready`), blocked work (`bd blocked`) -- πŸ“ **Audit Trails**: Complete history of status changes, notes, and decisions -- 🏷️ **Rich Metadata**: Priority (P0-P4), types (bug/feature/task/epic), labels, assignees +### Use bd when: +- **Multi-session work** - Tasks spanning multiple compaction cycles or days +- **Complex dependencies** - Work with blockers, prerequisites, or hierarchical structure +- **Knowledge work** - Strategic documents, research, or tasks with fuzzy boundaries +- **Side quests** - Exploratory work that might pause the main task +- **Project memory** - Need to resume work after weeks away with full context -**When to Use bd vs TodoWrite**: -- ❓ "Will I need this context in 2 weeks?" β†’ **YES** = bd -- ❓ "Could conversation history get compacted?" β†’ **YES** = bd -- ❓ "Does this have blockers/dependencies?" β†’ **YES** = bd -- ❓ "Is this fuzzy/exploratory work?" β†’ **YES** = bd -- ❓ "Will this be done in this session?" β†’ **YES** = TodoWrite -- ❓ "Is this just a task list for me right now?" 
β†’ **YES** = TodoWrite +### Use TodoWrite when: +- **Single-session tasks** - Work that completes within current session +- **Linear execution** - Straightforward step-by-step tasks with no branching +- **Immediate context** - All information already in conversation +- **Simple tracking** - Just need a checklist to show progress -**Decision Rule**: If resuming in 2 weeks would be hard without bd, use bd. +**Key insight**: If resuming work after 2 weeks would be difficult without bd, use bd. If the work can be picked up from a markdown skim, TodoWrite is sufficient. -## Prerequisites +### Test Yourself: bd or TodoWrite? -**Required**: -- **bd CLI**: Version 0.34.0 or later installed and in PATH -- **Git Repository**: Current directory must be a git repo -- **Initialization**: `bd init` must be run once (humans do this, not agents) +Ask these questions to decide: -**Verify Installation**: -```bash -bd --version # Should return 0.34.0 or later +**Choose bd if:** +- ❓ "Will I need this context in 2 weeks?" β†’ Yes = bd +- ❓ "Could conversation history get compacted?" β†’ Yes = bd +- ❓ "Does this have blockers/dependencies?" β†’ Yes = bd +- ❓ "Is this fuzzy/exploratory work?" β†’ Yes = bd + +**Choose TodoWrite if:** +- ❓ "Will this be done in this session?" β†’ Yes = TodoWrite +- ❓ "Is this just a task list for me right now?" β†’ Yes = TodoWrite +- ❓ "Is this linear with no branching?" β†’ Yes = TodoWrite + +**When in doubt**: Use bd. Better to have persistent memory you don't need than to lose context you needed. + +**For detailed decision criteria and examples, read:** [references/BOUNDARIES.md](references/BOUNDARIES.md) + +## Surviving Compaction Events + +**Critical**: Compaction events delete conversation history but preserve beads. After compaction, bd state is your only persistent memory. 
+ +**What survives compaction:** +- All bead data (issues, notes, dependencies, status) +- Complete work history and context + +**What doesn't survive:** +- Conversation history +- TodoWrite lists +- Recent discussion context + +**Writing notes for post-compaction recovery:** + +Write notes as if explaining to a future agent with zero conversation context: + +**Pattern:** +```markdown +notes field format: +- COMPLETED: Specific deliverables ("implemented JWT refresh endpoint + rate limiting") +- IN PROGRESS: Current state + next immediate step ("testing password reset flow, need user input on email template") +- BLOCKERS: What's preventing progress +- KEY DECISIONS: Important context or user guidance ``` -**First-Time Setup** (humans run once): -```bash -cd /path/to/your/repo -bd init # Creates .beads/ directory with database +**After compaction:** `bd show ` reconstructs full context from notes field. + +### Notes Quality Self-Check + +Before checkpointing (especially pre-compaction), verify your notes pass these tests: + +❓ **Future-me test**: "Could I resume this work in 2 weeks with zero conversation history?" +- [ ] What was completed? (Specific deliverables, not "made progress") +- [ ] What's in progress? (Current state + immediate next step) +- [ ] What's blocked? (Specific blockers with context) +- [ ] What decisions were made? (Why, not just what) + +❓ **Stranger test**: "Could another developer understand this without asking me?" 
+- [ ] Technical choices explained (not just stated) +- [ ] Trade-offs documented (why this approach vs alternatives) +- [ ] User input captured (decisions that came from discussion) + +**Good note example:** +``` +COMPLETED: JWT auth with RS256 (1hr access, 7d refresh tokens) +KEY DECISION: RS256 over HS256 per security review - enables key rotation +IN PROGRESS: Password reset flow - email service working, need rate limiting +BLOCKERS: Waiting on user decision: reset token expiry (15min vs 1hr trade-off) +NEXT: Implement rate limiting (5 attempts/15min) once expiry decided ``` -**Optional**: -- **BEADS_DIR** environment variable for alternate database location -- **Daemon** for background sync: `bd daemon --start` +**Bad note example:** +``` +Working on auth. Made some progress. More to do. +``` -## Instructions +**For complete compaction recovery workflow, read:** [references/WORKFLOWS.md](references/WORKFLOWS.md#compaction-survival) -### Session Start Protocol +## Session Start Protocol -**Every session, start here:** +**bd is available when:** +- Project has a `.beads/` directory (project-local database), OR +- `~/.beads/` exists (global fallback database for any directory) -#### Step 1: Check for Ready Work + +**At session start, always check for bd availability and run the ready check.** + +### Session Start Checklist + +Copy this checklist when starting any session where bd is available: + +``` +Session Start: +- [ ] Run bd ready --json to see available work +- [ ] Run bd list --status in_progress --json for active work +- [ ] If in_progress exists: bd show <id> to read notes +- [ ] Report context to user: "X items ready: [summary]" +- [ ] If using global ~/.beads, mention this in report +- [ ] If nothing ready: bd blocked --json to check blockers +``` + +**Pattern**: Always check both `bd ready` AND `bd list --status in_progress`. Read notes field first to understand where previous session left off.
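The checklist above can be scripted. A minimal sketch in Python, assuming `bd ready --json` and `bd list --status in_progress --json` each print a JSON array of issue objects with `id` and `title` keys (the field names are an assumption; verify against your bd version's actual output):

```python
import json

def session_start_report(ready_json: str, in_progress_json: str) -> str:
    """Build the session-start summary from bd's --json output.

    Assumes each payload is a JSON array of issue objects carrying
    "id" and "title" keys (an assumed shape, not verified bd output).
    """
    ready = json.loads(ready_json)
    active = json.loads(in_progress_json)
    # Report ready count and titles, per the checklist's report format.
    lines = [f"{len(ready)} item(s) ready: "
             + (", ".join(i["title"] for i in ready) or "none")]
    # Surface any in_progress work so the previous session's state is visible.
    for issue in active:
        lines.append(f"Issue {issue['id']} is in_progress: {issue['title']}")
    if not ready:
        lines.append("Nothing ready - run `bd blocked --json` to check blockers")
    return "\n".join(lines)

# Sample payloads (hypothetical issues, not real bd output):
ready = '[{"id": "proj-abc", "title": "Fix login bug", "priority": 1}]'
active = '[{"id": "proj-xyz", "title": "Refactor auth", "priority": 0}]'
print(session_start_report(ready, active))
```

The same function covers the empty case: with no ready items it falls through to the `bd blocked --json` suggestion from the checklist.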
+ +**Report format**: +- "I can see X items ready to work on: [summary]" +- "Issue Y is in_progress. Last session: [summary from notes]. Next: [from notes]. Should I continue with that?" + +This establishes immediate shared context about available and active work without requiring user prompting. + +**For detailed collaborative handoff process, read:** [references/WORKFLOWS.md](references/WORKFLOWS.md#session-handoff) + +**Note**: bd auto-discovers the database: +- Uses `.beads/*.db` in current project if exists +- Falls back to `~/.beads/default.db` otherwise +- No configuration needed + +### When No Work is Ready + +If `bd ready` returns empty but issues exist: ```bash -bd ready +bd blocked --json ``` -Shows tasks with no open blockers, sorted by priority (P0 β†’ P4). - -**What this shows**: -- Task ID (e.g., `myproject-abc`) -- Title -- Priority level -- Issue type (bug, feature, task, epic) - -**Example output**: -``` -claude-code-plugins-abc [P1] [task] open - Implement user authentication - -claude-code-plugins-xyz [P0] [epic] in_progress - Refactor database layer -``` - -#### Step 2: Pick Highest Priority Task - -Choose the highest priority (P0 > P1 > P2 > P3 > P4) task that's ready. - -#### Step 3: Get Full Context - -```bash -bd show -``` - -Displays: -- Full task description -- Dependency graph (what blocks this, what this blocks) -- Audit trail (all status changes, notes) -- Metadata (created, updated, assignee, labels) - -#### Step 4: Start Working - -```bash -bd update --status in_progress -``` - -Marks task as actively being worked on. - -#### Step 5: Add Notes as You Work - -```bash -bd update --notes "Completed: X. In progress: Y. Blocked by: Z" -``` - -**Critical for compaction survival**: Write notes as if explaining to a future agent with zero conversation context. 
- -**Note Format** (best practice): -``` -COMPLETED: Specific deliverables (e.g., "implemented JWT refresh endpoint + rate limiting") -IN PROGRESS: Current state + next immediate step -BLOCKERS: What's preventing progress -KEY DECISIONS: Important context or user guidance -``` +Report blockers and suggest next steps. --- -### Task Creation Workflow +## Progress Checkpointing -#### When to Create Tasks +Update bd notes at these checkpoints (don't wait for session end): -Create bd tasks when: -- User mentions tracking work across sessions -- User says "we should fix/build/add X" -- Work has dependencies or blockers -- Exploratory/research work with fuzzy boundaries +**Critical triggers:** +- ⚠️ **Context running low** - User says "running out of context" / "approaching compaction" / "close to token limit" +- πŸ“Š **Token budget > 70%** - Proactively checkpoint when approaching limits +- 🎯 **Major milestone reached** - Completed significant piece of work +- 🚧 **Hit a blocker** - Can't proceed, need to capture what was tried +- πŸ”„ **Task transition** - Switching issues or about to close this one +- ❓ **Before user input** - About to ask decision that might change direction -#### Basic Task Creation +**Proactive monitoring during session:** +- At 70% token usage: "We're at 70% token usage - good time to checkpoint bd notes?" +- At 85% token usage: "Approaching token limit (85%) - checkpointing current state to bd" +- At 90% token usage: Automatically checkpoint without asking -```bash -bd create "Task title" -p 1 --type task +**Current token usage**: Check `Token usage:` messages to monitor proactively. 
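The 70/85/90% thresholds above can be expressed as a small policy function. A sketch: the percentages come from this section, while the returned strings are illustrative prompts, not fixed wording.

```python
def checkpoint_action(token_usage: float) -> str:
    """Map token usage (0.0 to 1.0) to the checkpoint behavior above.

    Thresholds (0.70 / 0.85 / 0.90) mirror the proactive monitoring
    guidance; checked highest-first so each band gets the right action.
    """
    if token_usage >= 0.90:
        return "checkpoint now without asking"
    if token_usage >= 0.85:
        return "announce: approaching token limit, checkpointing current state"
    if token_usage >= 0.70:
        return "ask: good time to checkpoint bd notes?"
    return "no checkpoint needed"

print(checkpoint_action(0.72))
print(checkpoint_action(0.95))
```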
+ +**Checkpoint checklist:** + +``` +Progress Checkpoint: +- [ ] Update notes with COMPLETED/IN_PROGRESS/NEXT format +- [ ] Document KEY DECISIONS or BLOCKERS since last update +- [ ] Mark current status (in_progress/blocked/closed) +- [ ] If discovered new work: create issues with discovered-from +- [ ] Verify notes are self-explanatory for post-compaction resume ``` -**Arguments**: -- **Title**: Brief description (required) -- **Priority**: 0-4 where 0=critical, 1=high, 2=medium, 3=low, 4=backlog (default: 2) -- **Type**: bug, feature, task, epic, chore (default: task) +**Most important**: When user says "running out of context" OR when you see >70% token usage - checkpoint immediately, even if mid-task. -**Example**: -```bash -bd create "Fix authentication bug" -p 0 --type bug -``` - -#### Create with Description - -```bash -bd create "Implement OAuth" -p 1 --description "Add OAuth2 support for Google, GitHub, Microsoft. Use passport.js library." -``` - -#### Epic with Children - -```bash -# Create parent epic -bd create "Epic: OAuth Implementation" -p 0 --type epic -# Returns: myproject-abc - -# Create child tasks -bd create "Research OAuth providers" -p 1 --parent myproject-abc -bd create "Implement auth endpoints" -p 1 --parent myproject-abc -bd create "Add frontend login UI" -p 2 --parent myproject-abc -``` - ---- - -### Update & Progress Workflow - -#### Change Status - -```bash -bd update --status -``` - -**Status Values**: -- `open` - Not started -- `in_progress` - Actively working -- `blocked` - Stuck, waiting on something -- `closed` - Completed - -**Example**: -```bash -bd update myproject-abc --status blocked -``` - -#### Add Progress Notes - -```bash -bd update --notes "Progress update here" -``` - -**Appends** to existing notes field (doesn't replace). 
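The checkpoint checklist's last item ("notes are self-explanatory") can be approximated mechanically. A sketch that looks for the COMPLETED/IN PROGRESS/NEXT structure used throughout this skill (treating that exact heading set as an assumption):

```python
import re

# Headings from the note format above; the exact set is an assumption.
REQUIRED_SECTIONS = ("COMPLETED", "IN PROGRESS", "NEXT")

def missing_note_sections(notes: str) -> list:
    """Return which standard sections a bd notes field is missing.

    Case-sensitive on purpose: the format uses uppercase headings,
    so casual prose like "made some progress" does not count.
    """
    return [s for s in REQUIRED_SECTIONS if not re.search(rf"\b{s}\b", notes)]

good = ("COMPLETED: JWT auth with RS256. "
        "IN PROGRESS: password reset flow. "
        "NEXT: rate limiting once expiry is decided.")
bad = "Working on auth. Made some progress. More to do."
print(missing_note_sections(good))   # passes: nothing missing
print(missing_note_sections(bad))    # fails: all three sections missing
```

This is a structural check only; it cannot verify that the content would actually pass the "future-me test", but it catches the "Made some progress" anti-pattern cheaply.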
- -#### Change Priority - -```bash -bd update <id> -p 0 # Escalate to critical -``` - -#### Add Labels - -```bash -bd label add <id> backend -bd label add <id> security -``` - -Labels provide cross-cutting categorization beyond status/type. - ---- - -### Dependency Management - -#### Add Dependencies - -```bash -bd dep add <child-id> <parent-id> -``` - -**Meaning**: `<parent-id>` blocks `<child-id>` (parent must be completed first). - -**Dependency Types**: -- **blocks**: Parent must close before child becomes ready -- **parent-child**: Hierarchical relationship (epics and subtasks) -- **discovered-from**: Task A led to discovering task B -- **related**: Tasks are related but not blocking - -**Example**: -```bash -# Deployment blocked by tests passing -bd dep add deploy-task test-task # test-task blocks deploy-task -``` - -#### View Dependencies - -```bash -bd dep list <id> -``` - -Shows: -- What this task blocks (dependents) -- What blocks this task (blockers) - -#### Circular Dependency Prevention - -bd automatically prevents circular dependencies. If you try to create a cycle, the command fails. - ---- - -### Completion Workflow - -#### Close a Task - -```bash -bd close <id> --reason "Completion summary" -``` - -**Best Practice**: Always include a reason describing what was accomplished. - -**Example**: -```bash -bd close myproject-abc --reason "Completed: OAuth endpoints implemented with Google, GitHub providers. Tests passing." -``` - -#### Check Newly Unblocked Work - -After closing a task, run: - -```bash -bd ready -``` - -Closing a task may unblock dependent tasks, making them newly ready. - -#### Close Epics When Children Complete - -```bash -bd epic close-eligible -``` - -Automatically closes epics where all child tasks are closed. - ---- - -### Git Sync Workflow - -#### All-in-One Sync - -```bash -bd sync -``` - -**Performs**: -1. Export database to `.beads/issues.jsonl` -2. Commit changes to git -3. Pull from remote (merge if needed) -4. Import updated JSONL back to database -5.
Push local commits to remote - -**Use when**: End of session, before handing off to teammate, after major progress. - -#### Export Only - -```bash -bd export -o backup.jsonl -``` - -Creates JSONL backup without git operations. - -#### Import Only - -```bash -bd import -i backup.jsonl -``` - -Imports JSONL file into database. - -#### Background Daemon - -```bash -bd daemon --start # Auto-sync in background -bd daemon --status # Check daemon health -bd daemon --stop # Stop auto-sync -``` - -Daemon watches for database changes and auto-exports to JSONL. - ---- - -### Find & Search Commands - -#### Find Ready Work - -```bash -bd ready -``` - -Shows tasks with no open blockers. - -#### List All Tasks - -```bash -bd list --status open # Only open tasks -bd list --priority 0 # Only P0 (critical) -bd list --type bug # Only bugs -bd list --label backend # Only labeled "backend" -bd list --assignee alice # Only assigned to alice -``` - -#### Show Task Details - -```bash -bd show -``` - -Full details: description, dependencies, audit trail, metadata. - -#### Search by Text - -```bash -bd search "authentication" # Search titles and descriptions -bd search login --status open # Combine with filters -``` - -#### Find Blocked Work - -```bash -bd blocked -``` - -Shows all tasks that have open blockers preventing them from being worked on. - -#### Project Statistics - -```bash -bd stats -``` - -Shows: -- Total issues by status (open, in_progress, blocked, closed) -- Issues by priority (P0-P4) -- Issues by type (bug, feature, task, epic, chore) -- Completion rate - ---- - -### Complete Command Reference - -| Command | When to Use | Example | -|---------|-------------|---------| -| **FIND COMMANDS** | | | -| `bd ready` | Find unblocked tasks | User asks "what should I work on?" 
| -`bd list` | View all tasks (with filters) | "Show me all open bugs" | -`bd show <id>` | Get task details | "Show me task bd-42" | -`bd search <query>` | Text search across tasks | "Find tasks about auth" | -`bd blocked` | Find stuck work | "What's blocking us?" | -`bd stats` | Project metrics | "How many tasks are open?" | -**CREATE COMMANDS** | | | -`bd create` | Track new work | "Create a task for this bug" | -`bd template create` | Use issue template | "Create task from bug template" | -`bd init` | Initialize beads | "Set up beads in this repo" (humans only) | -**UPDATE COMMANDS** | | | -`bd update <id>` | Change status/priority/notes | "Mark as in progress" | -`bd dep add` | Link dependencies | "This blocks that" | -`bd label add` | Tag with labels | "Label this as backend" | -`bd comments add` | Add comment | "Add comment to task" | -`bd reopen <id>` | Reopen closed task | "Reopen bd-42, found regression" | -`bd rename-prefix` | Rename issue prefix | "Change prefix from bd- to proj-" | -`bd epic status` | Check epic progress | "Show epic completion %" | -**COMPLETE COMMANDS** | | | -`bd close <id>` | Mark task done | "Close this task, it's done" | -`bd epic close-eligible` | Auto-close complete epics | "Close epics where all children done" | -**SYNC COMMANDS** | | | -`bd sync` | Git sync (all-in-one) | "Sync tasks to git" | -`bd export` | Export to JSONL | "Backup all tasks" | -`bd import` | Import from JSONL | "Restore from backup" | -`bd daemon` | Background sync manager | "Start auto-sync daemon" | -**CLEANUP COMMANDS** | | | -`bd delete <id>` | Delete issues | "Delete test task" (requires --force) | -`bd compact` | Archive old closed tasks | "Compress database" | -**REPORTING COMMANDS** | | | -`bd stats` | Project metrics | "Show project health" | -`bd audit record` | Log interactions | "Record this LLM call" | -`bd workflow` | Show workflow guide | "How do I use beads?"
| -**ADVANCED COMMANDS** | | | -`bd prime` | Refresh AI context | "Load bd workflow rules" | -`bd quickstart` | Interactive tutorial | "Teach me beads basics" | -`bd daemons` | Multi-repo daemon mgmt | "Manage all beads daemons" | -`bd version` | Version check | "Check bd version" | -`bd restore <id>` | Restore compacted issue | "Get full history from git" | - --- -## Output -This skill produces: -**Task IDs**: Format `<prefix>-<suffix>` (e.g., `claude-code-plugins-abc`, `myproject-xyz`) -**Status Summaries**: -``` -5 open, 2 in_progress, 1 blocked, 47 closed -``` -**Dependency Graphs** (visual tree): -``` -myproject-abc: Deploy to production [P0] [blocked] - Blocked by: - ↳ myproject-def: Run integration tests [P1] [in_progress] - ↳ myproject-ghi: Fix failing tests [P1] [open] -``` -**Audit Trails** (complete history): -``` -2025-12-22 10:00 - Created by alice (P2, task) -2025-12-22 10:15 - Priority changed: P2 β†’ P0 -2025-12-22 10:30 - Status changed: open β†’ in_progress -2025-12-22 11:00 - Notes added: "Implemented JWT auth..." -2025-12-22 14:00 - Status changed: in_progress β†’ blocked -2025-12-22 14:01 - Notes added: "Blocked: API endpoint returns 503" -``` --- -## Error Handling -### Common Failures -#### 1. `bd: command not found` -**Cause**: bd CLI not installed or not in PATH -**Solution**: Install from https://github.com/steveyegge/beads -```bash -# macOS/Linux -curl -fsSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash -# Or via npm -npm install -g @beads/bd -# Or via Homebrew -brew install steveyegge/beads/bd -``` -#### 2. `No .beads database found` -**Cause**: beads not initialized in this repository -**Solution**: Run `bd init` (humans do this once, not agents) -```bash -bd init # Creates .beads/ directory -``` -#### 3.
`Task not found: <id>` -**Cause**: Invalid task ID or task doesn't exist -**Solution**: Use `bd list` to see all tasks and verify ID format -```bash -bd list # See all tasks -bd search <keyword> # Find task by title -``` -#### 4. `Circular dependency detected` -**Cause**: Attempting to create a dependency cycle (A blocks B, B blocks A) -**Solution**: bd prevents circular dependencies automatically. Restructure dependency graph. -```bash -bd dep list <id> # View current dependencies -``` -#### 5. Git merge conflicts in `.beads/issues.jsonl` -**Cause**: Multiple users modified same issue -**Solution**: bd sync handles JSONL conflicts automatically. If manual intervention needed: -```bash -# View conflict -git status -# bd provides conflict resolution tools -bd sync --merge # Attempt auto-resolution -``` -#### 6. `Database is locked` -**Cause**: Daemon or another process has exclusive lock -**Solution**: Restart daemon or wait for lock to release -```bash -bd daemon --stop -bd daemon --start -``` -#### 7.
Sync failures -**Cause**: Network issues, authentication failures, or git configuration -**Solution**: Check git remote access and credentials -```bash -git fetch # Test connectivity -git status # Verify repo state -``` - ---- - -## Examples - -### Example 1: Multi-Session Feature (Epic with Children) - -**User Request**: "We need to implement OAuth, this will take multiple sessions" - -**Agent Response**: -```bash -# Create epic -bd create "Epic: OAuth Implementation" -p 0 --type epic -# Returns: claude-code-plugins-abc - -# Create child tasks -bd create "Research OAuth providers (Google, GitHub, Microsoft)" -p 1 --parent claude-code-plugins-abc -# Returns: claude-code-plugins-abc.1 - -bd create "Implement backend auth endpoints" -p 1 --parent claude-code-plugins-abc -# Returns: claude-code-plugins-abc.2 - -bd create "Add frontend login UI components" -p 2 --parent claude-code-plugins-abc -# Returns: claude-code-plugins-abc.3 - -# Add dependencies (backend must complete before frontend) -bd dep add claude-code-plugins-abc.3 claude-code-plugins-abc.2 - -# Start with research -bd update claude-code-plugins-abc.1 --status in_progress -``` - -**Result**: Work structured, ready to resume after compaction. - ---- - -### Example 2: Tracking Blocked Work - -**Scenario**: Agent discovers API is down during implementation - -**Agent Actions**: -```bash -# Mark current task as blocked -bd update claude-code-plugins-xyz --status blocked --notes "API endpoint /auth returns 503, reported to backend team" - -# Create blocker task -bd create "Fix /auth endpoint 503 error" -p 0 --type bug -# Returns: claude-code-plugins-blocker - -# Link dependency (blocker blocks original task) -bd dep add claude-code-plugins-xyz claude-code-plugins-blocker - -# Find other ready work -bd ready -# Shows tasks that aren't blocked - agent can switch to those -``` - -**Result**: Blocked work documented, agent productive on other tasks. 
- ---- - -### Example 3: Session Resume After Compaction - -**Session 1**: -```bash -bd create "Implement user authentication" -p 1 -bd update myproject-auth --status in_progress -bd update myproject-auth --notes "COMPLETED: JWT library integrated. IN PROGRESS: Testing token refresh. NEXT: Rate limiting" -# [Conversation compacted - history deleted] -``` - -**Session 2** (weeks later): -```bash -bd ready -# Shows: myproject-auth [P1] [task] in_progress - -bd show myproject-auth -# Full context preserved: -# - Title: Implement user authentication -# - Status: in_progress -# - Notes: "COMPLETED: JWT library integrated. IN PROGRESS: Testing token refresh. NEXT: Rate limiting" -# - No conversation history needed! - -# Agent continues exactly where it left off -bd update myproject-auth --notes "COMPLETED: Token refresh working. IN PROGRESS: Rate limiting implementation" -``` - -**Result**: Zero context loss despite compaction. - ---- - -### Example 4: Complex Dependencies (3-Level Graph) - -**Scenario**: Build feature with prerequisites - -```bash -# Create tasks -bd create "Deploy to production" -p 0 -# Returns: deploy-prod - -bd create "Run integration tests" -p 1 -# Returns: integration-tests - -bd create "Fix failing unit tests" -p 1 -# Returns: fix-tests - -# Create dependency chain -bd dep add deploy-prod integration-tests # Integration blocks deploy -bd dep add integration-tests fix-tests # Fixes block integration - -# Check what's ready -bd ready -# Shows: fix-tests (no blockers) -# Hides: integration-tests (blocked by fix-tests) -# Hides: deploy-prod (blocked by integration-tests) - -# Work on ready task -bd update fix-tests --status in_progress -# ... fix tests ... -bd close fix-tests --reason "All unit tests passing" - -# Check ready again -bd ready -# Shows: integration-tests (now unblocked!) -# Still hides: deploy-prod (still blocked) -``` - -**Result**: Dependency chain enforces correct order automatically. 
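The "ready = not closed and no open blockers" rule that drives this example can be modeled in a few lines. A sketch of the semantics, not bd's actual implementation:

```python
def ready_issues(status, blockers):
    """Return the set of ready issue ids.

    status: issue id -> "open" | "in_progress" | "closed"
    blockers: issue id -> list of ids that block it (bd's `blocks` deps)
    An issue is ready when it is not closed and every blocker is closed.
    """
    return {
        issue for issue, st in status.items()
        if st != "closed"
        and all(status[b] == "closed" for b in blockers.get(issue, []))
    }

# The chain from this example: fix-tests blocks integration-tests blocks deploy-prod
status = {"deploy-prod": "open", "integration-tests": "open", "fix-tests": "open"}
blockers = {"deploy-prod": ["integration-tests"],
            "integration-tests": ["fix-tests"]}
print(ready_issues(status, blockers))   # only fix-tests is ready

status["fix-tests"] = "closed"          # bd close fix-tests
print(ready_issues(status, blockers))   # integration-tests is now unblocked
```

Closing an issue removes it from the ready set and can admit its dependents, which is exactly the `bd close` then `bd ready` rhythm shown above.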
- ---- - -### Example 5: Team Collaboration (Git Sync) - -**Alice's Session**: -```bash -bd create "Refactor database layer" -p 1 -bd update db-refactor --status in_progress -bd update db-refactor --notes "Started: Migrating to Prisma ORM" - -# End of day - sync to git -bd sync -# Commits tasks to git, pushes to remote -``` - -**Bob's Session** (next day): -```bash -# Start of day - sync from git -bd sync -# Pulls latest tasks from remote - -bd ready -# Shows: db-refactor [P1] [in_progress] (assigned to alice) - -# Bob checks status -bd show db-refactor -# Sees Alice's notes: "Started: Migrating to Prisma ORM" - -# Bob works on different task (no conflicts) -bd create "Add API rate limiting" -p 2 -bd update rate-limit --status in_progress - -# End of day -bd sync -# Both Alice's and Bob's tasks synchronized -``` - -**Result**: Distributed team coordination through git. - ---- - -## Resources - -### When to Use bd vs TodoWrite (Decision Tree) - -**Use bd when**: -- βœ… Work spans multiple sessions or days -- βœ… Tasks have dependencies or blockers -- βœ… Need to survive conversation compaction -- βœ… Exploratory/research work with fuzzy boundaries -- βœ… Collaboration with team (git sync) - -**Use TodoWrite when**: -- βœ… Single-session linear tasks -- βœ… Simple checklist for immediate work -- βœ… All context is in current conversation -- βœ… Will complete within current session - -**Decision Rule**: If resuming in 2 weeks would be hard without bd, use bd. 
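The decision tree collapses to a single predicate. A sketch in which the four signals mirror the checklist above and any one of them tips the choice to bd, matching the "when in doubt, use bd" guidance:

```python
def use_bd(spans_sessions: bool, has_blockers: bool,
           needs_compaction_survival: bool, exploratory: bool) -> bool:
    """True -> track in bd; False -> TodoWrite is sufficient.

    Any single persistent-memory signal is enough to choose bd.
    """
    return any([spans_sessions, has_blockers,
                needs_compaction_survival, exploratory])

# Single-session linear checklist:
print("bd" if use_bd(False, False, False, False) else "TodoWrite")
# Multi-week refactor with blockers:
print("bd" if use_bd(True, True, False, False) else "TodoWrite")
```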
- --- -### Essential Commands Quick Reference -Top 10 most-used commands: -| Command | Purpose | -|---------|---------| -| `bd ready` | Show tasks ready to work on | -| `bd create "Title" -p 1` | Create new task | -| `bd show <id>` | View task details | -| `bd update <id> --status in_progress` | Start working | -| `bd update <id> --notes "Progress"` | Add progress notes | -| `bd close <id> --reason "Done"` | Complete task | -| `bd dep add <child-id> <parent-id>` | Add dependency | -| `bd list` | See all tasks | -| `bd search <keyword>` | Find tasks by keyword | -| `bd sync` | Sync with git remote | - --- -### Session Start Protocol (Every Session) -1. **Run** `bd ready` first -2. **Pick** highest priority ready task -3. **Run** `bd show <id>` to get full context -4. **Update** status to `in_progress` -5. **Add notes** as you work (critical for compaction survival) +**Test yourself**: "If compaction happened right now, could future-me resume from these notes?" --- ### Database Selection -bd uses `.beads/` directory by default. +bd automatically selects the appropriate database: +- **Project-local** (`.beads/` in project): Used for project-specific work +- **Global fallback** (`~/.beads/`): Used when no project-local database exists -**Alternate Database**: +**Use case for global database**: Cross-project tracking, personal task management, knowledge work that doesn't belong to a specific project.
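The auto-selection described above amounts to a short lookup. A sketch mirroring the documented order (the paths come from this section; the code is illustrative, not bd's source):

```python
import tempfile
from pathlib import Path
from typing import Optional

def discover_database(cwd: Path) -> Optional[Path]:
    """Mirror the documented lookup: project-local .beads/*.db first,
    then the global ~/.beads/default.db fallback, else None."""
    beads_dir = cwd / ".beads"
    # sorted() makes the choice deterministic if several .db files exist.
    local = sorted(beads_dir.glob("*.db")) if beads_dir.is_dir() else []
    if local:
        return local[0]
    fallback = Path.home() / ".beads" / "default.db"
    return fallback if fallback.exists() else None

# Demo against a throwaway project directory:
with tempfile.TemporaryDirectory() as tmp:
    project = Path(tmp)
    (project / ".beads").mkdir()
    (project / ".beads" / "myproject.db").touch()
    print(discover_database(project))  # ends with .beads/myproject.db
```

Because discovery is cwd-relative, the same command can hit different databases from different directories; that is why the `--db` guidance below this section recommends absolute paths for non-local databases.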
+ +**When to use --db flag explicitly:** +- Accessing a specific database outside current directory +- Working with multiple databases (e.g., project database + reference database) +- Example: `bd --db /path/to/reference/terms.db list` + +**Database discovery rules:** +- bd looks for `.beads/*.db` in current working directory +- If not found, uses `~/.beads/default.db` +- Shell cwd can reset between commands - use absolute paths with --db when operating on non-local databases + +**For complete session start workflows, read:** [references/WORKFLOWS.md](references/WORKFLOWS.md#session-start) + +## Core Operations + +All bd commands support `--json` flag for structured output when needed for programmatic parsing. + +### Essential Operations + +**Check ready work:** ```bash -export BEADS_DIR=/path/to/alternate/beads -bd ready # Uses alternate database +bd ready +bd ready --json # For structured output +bd ready --priority 0 # Filter by priority +bd ready --assignee alice # Filter by assignee ``` -**Multiple Databases**: Use `BEADS_DIR` to switch between projects. +**Create new issue:** + +**IMPORTANT**: Always quote title and description arguments with double quotes, especially when containing spaces or special characters. 
+ +```bash +bd create "Fix login bug" +bd create "Add OAuth" -p 0 -t feature +bd create "Write tests" -d "Unit tests for auth module" --assignee alice +bd create "Research caching" --design "Evaluate Redis vs Memcached" + +# Examples with special characters (requires quoting): +bd create "Fix: auth doesn't handle edge cases" -p 1 +bd create "Refactor auth module" -d "Split auth.go into separate files (handlers, middleware, utils)" +``` + +**Update issue status:** +```bash +bd update issue-123 --status in_progress +bd update issue-123 --priority 0 +bd update issue-123 --assignee bob +bd update issue-123 --design "Decided to use Redis for persistence support" +``` + +**Close completed work:** +```bash +bd close issue-123 +bd close issue-123 --reason "Implemented in PR #42" +bd close issue-1 issue-2 issue-3 --reason "Bulk close related work" +``` + +**Show issue details:** +```bash +bd show issue-123 +bd show issue-123 --json +``` + +**List issues:** +```bash +bd list +bd list --status open +bd list --priority 0 +bd list --type bug +bd list --assignee alice +``` + +**For complete CLI reference with all flags and examples, read:** [references/CLI_REFERENCE.md](references/CLI_REFERENCE.md) + +## Field Usage Reference + +Quick guide for when and how to use each bd field: + +| Field | Purpose | When to Set | Update Frequency | +|-------|---------|-------------|------------------| +| **description** | Immutable problem statement | At creation | Never (fixed forever) | +| **design** | Initial approach, architecture, decisions | During planning | Rarely (only if approach changes) | +| **acceptance-criteria** | Concrete deliverables checklist (`- [ ]` syntax) | When design is clear | Mark `- [x]` as items complete | +| **notes** | Session handoff (COMPLETED/IN_PROGRESS/NEXT) | During work | At session end, major milestones | +| **status** | Workflow state (openβ†’in_progressβ†’closed) | As work progresses | When changing phases | +| **priority** | Urgency level (0=highest, 
3=lowest) | At creation | Adjust if priorities shift | + +**Key pattern**: Notes field is your "read me first" at session start. See [WORKFLOWS.md](references/WORKFLOWS.md#session-handoff) for session handoff details. --- -### Advanced Features +## Issue Lifecycle Workflow -For complex scenarios, see references: +### 1. Discovery Phase (Proactive Issue Creation) -- **Compaction Strategies**: `{baseDir}/references/ADVANCED_WORKFLOWS.md` - - Tier 1/2/ultra compaction for old closed issues - - Semantic summarization to reduce database size +**During exploration or implementation, proactively file issues for:** +- Bugs or problems discovered +- Potential improvements noticed +- Follow-up work identified +- Technical debt encountered +- Questions requiring research -- **Epic Management**: `{baseDir}/references/ADVANCED_WORKFLOWS.md` - - Nested epics (epics containing epics) - - Bulk operations on epic children +**Pattern:** +```bash +# When encountering new work during a task: +bd create "Found: auth doesn't handle profile permissions" +bd dep add current-task-id new-issue-id --type discovered-from -- **Template System**: `{baseDir}/references/ADVANCED_WORKFLOWS.md` - - Custom issue templates - - Template variables and defaults +# Continue with original task - issue persists for later +``` -- **Git Integration**: `{baseDir}/references/GIT_INTEGRATION.md` - - Merge conflict resolution - - Daemon architecture - - Branching strategies +**Key benefit**: Capture context immediately instead of losing it when conversation ends. -- **Team Collaboration**: `{baseDir}/references/TEAM_COLLABORATION.md` - - Multi-user workflows - - Worktree support - - Prefix strategies +### 2. 
Execution Phase (Status Maintenance) ---- +**Mark issues in_progress when starting work:** +```bash +bd update issue-123 --status in_progress +``` -### Full Documentation +**Update throughout work:** +```bash +# Add design notes as implementation progresses +bd update issue-123 --design "Using JWT with RS256 algorithm" -Complete reference: https://github.com/steveyegge/beads +# Update acceptance criteria if requirements clarify +bd update issue-123 --acceptance "- JWT validation works\n- Tests pass\n- Error handling returns 401" +``` -Existing detailed guides: -- `{baseDir}/references/CLI_REFERENCE.md` - Complete command syntax -- `{baseDir}/references/WORKFLOWS.md` - Detailed workflow patterns -- `{baseDir}/references/DEPENDENCIES.md` - Dependency system deep dive -- `{baseDir}/references/RESUMABILITY.md` - Compaction survival guide -- `{baseDir}/references/BOUNDARIES.md` - bd vs TodoWrite detailed comparison -- `{baseDir}/references/STATIC_DATA.md` - Database schema reference +**Close when complete:** +```bash +bd close issue-123 --reason "Implemented JWT validation with tests passing" +``` ---- +**Important**: Closed issues remain in database - they're not deleted, just marked complete for project history. -**Progressive Disclosure**: This skill provides essential instructions for all 30 beads commands. For advanced topics (compaction, templates, team workflows), see the references directory. Slash commands (`/bd-create`, `/bd-ready`, etc.) remain available as explicit fallback for power users. +### 3. 
Planning Phase (Dependency Graphs) + +For complex multi-step work, structure issues with dependencies before starting: + +**Create parent epic:** +```bash +bd create "Implement user authentication" -t epic -d "OAuth integration with JWT tokens" +``` + +**Create subtasks:** +```bash +bd create "Set up OAuth credentials" -t task +bd create "Implement authorization flow" -t task +bd create "Add token refresh" -t task +``` + +**Link with dependencies:** +```bash +# parent-child for epic structure +bd dep add auth-epic auth-setup --type parent-child +bd dep add auth-epic auth-flow --type parent-child + +# blocks for ordering +bd dep add auth-setup auth-flow +``` + +**For detailed dependency patterns and types, read:** [references/DEPENDENCIES.md](references/DEPENDENCIES.md) + +## Dependency Types Reference + +bd supports four dependency types: + +1. **blocks** - Hard blocker (issue A blocks issue B from starting) +2. **related** - Soft link (issues are related but not blocking) +3. **parent-child** - Hierarchical (epic/subtask relationship) +4. **discovered-from** - Provenance (issue B discovered while working on A) + +**For complete guide on when to use each type with examples and patterns, read:** [references/DEPENDENCIES.md](references/DEPENDENCIES.md) + +## Integration with TodoWrite + +**Both tools complement each other at different timescales:** + +### Temporal Layering Pattern + +**TodoWrite** (short-term working memory - this hour): +- Tactical execution: "Review Section 3", "Expand Q&A answers" +- Marked completed as you go +- Present/future tense ("Review", "Expand", "Create") +- Ephemeral: Disappears when session ends + +**Beads** (long-term episodic memory - this week/month): +- Strategic objectives: "Continue work on strategic planning document" +- Key decisions and outcomes in notes field +- Past tense in notes ("COMPLETED", "Discovered", "Blocked by") +- Persistent: Survives compaction and session boundaries + +### The Handoff Pattern + +1. 
**Session start**: Read bead β†’ Create TodoWrite items for immediate actions +2. **During work**: Mark TodoWrite items completed as you go +3. **Reach milestone**: Update bead notes with outcomes + context +4. **Session end**: TodoWrite disappears, bead survives with enriched notes + +**After compaction**: TodoWrite is gone forever, but bead notes reconstruct what happened. + +### Example: TodoWrite tracks execution, Beads capture meaning + +**TodoWrite:** +``` +[completed] Implement login endpoint +[in_progress] Add password hashing with bcrypt +[pending] Create session middleware +``` + +**Corresponding bead notes:** +``` +bd update issue-123 --notes "COMPLETED: Login endpoint with bcrypt password +hashing (12 rounds). KEY DECISION: Using JWT tokens (not sessions) for stateless +auth - simplifies horizontal scaling. IN PROGRESS: Session middleware implementation. +NEXT: Need user input on token expiry time (1hr vs 24hr trade-off)." +``` + +**Don't duplicate**: TodoWrite tracks execution, Beads captures meaning and context. + +**For patterns on transitioning between tools mid-session, read:** [references/BOUNDARIES.md](references/BOUNDARIES.md#integration-patterns) + +## Common Patterns + +### Pattern 1: Knowledge Work Session + +**Scenario**: User asks "Help me write a proposal for expanding the analytics platform" + +**What you see**: +```bash +$ bd ready +# Returns: bd-42 "Research analytics platform expansion proposal" (in_progress) + +$ bd show bd-42 +Notes: "COMPLETED: Reviewed current stack (Mixpanel, Amplitude) +IN PROGRESS: Drafting cost-benefit analysis section +NEXT: Need user input on budget constraints before finalizing recommendations" +``` + +**What you do**: +1. Read notes to understand current state +2. Create TodoWrite for immediate work: + ``` + - [ ] Draft cost-benefit analysis + - [ ] Ask user about budget constraints + - [ ] Finalize recommendations + ``` +3. Work on tasks, mark TodoWrite items completed +4. 
At milestone, update bd notes: + ```bash + bd update bd-42 --notes "COMPLETED: Cost-benefit analysis drafted. + KEY DECISION: User confirmed $50k budget cap - ruled out enterprise options. + IN PROGRESS: Finalizing recommendations (Posthog + custom ETL). + NEXT: Get user review of draft before closing issue." + ``` + +**Outcome**: TodoWrite disappears at session end, but bd notes preserve context for next session. + +### Pattern 2: Side Quest Handling + +During main task, discover a problem: +1. Create issue: `bd create "Found: inventory system needs refactoring"` +2. Link using discovered-from: `bd dep add main-task new-issue --type discovered-from` +3. Assess: blocker or can defer? +4. If blocker: `bd update main-task --status blocked`, work on new issue +5. If deferrable: note in issue, continue main task + +### Pattern 3: Multi-Session Project Resume + +Starting work after time away: +1. Run `bd ready` to see available work +2. Run `bd blocked` to understand what's stuck +3. Run `bd list --status closed --limit 10` to see recent completions +4. Run `bd show issue-id` on issue to work on +5. 
Update status and begin work + +**For complete workflow walkthroughs with checklists, read:** [references/WORKFLOWS.md](references/WORKFLOWS.md) + +## Issue Creation + +**Quick guidelines:** +- Ask user first for knowledge work with fuzzy boundaries +- Create directly for clear bugs, technical debt, or discovered work +- Use clear titles, sufficient context in descriptions +- Design field: HOW to build (can change during implementation) +- Acceptance criteria: WHAT success looks like (should remain stable) + +### Issue Creation Checklist + +Copy when creating new issues: + +``` +Creating Issue: +- [ ] Title: Clear, specific, action-oriented +- [ ] Description: Problem statement (WHY this matters) - immutable +- [ ] Design: HOW to build (can change during work) +- [ ] Acceptance: WHAT success looks like (stays stable) +- [ ] Priority: 0=critical, 1=high, 2=normal, 3=low +- [ ] Type: bug/feature/task/epic/chore +``` + +**Self-check for acceptance criteria:** + +❓ "If I changed the implementation approach, would these criteria still apply?" +- β†’ **Yes** = Good criteria (outcome-focused) +- β†’ **No** = Move to design field (implementation-focused) + +**Example:** +- βœ… Acceptance: "User tokens persist across sessions and refresh automatically" +- ❌ Wrong: "Use JWT tokens with 1-hour expiry" (that's design, not acceptance) + +**For detailed guidance on when to ask vs create, issue quality, resumability patterns, and design vs acceptance criteria, read:** [references/ISSUE_CREATION.md](references/ISSUE_CREATION.md) + +## Alternative Use Cases + +bd is primarily for work tracking, but can also serve as queryable database for static reference data (glossaries, terminology) with adaptations. 
+ +**For guidance on using bd for reference databases and static data, read:** [references/STATIC_DATA.md](references/STATIC_DATA.md) + +## Statistics and Monitoring + +**Check project health:** +```bash +bd stats +bd stats --json +``` + +Returns: total issues, open, in_progress, closed, blocked, ready, avg lead time + +**Find blocked work:** +```bash +bd blocked +bd blocked --json +``` + +Use stats to: +- Report progress to user +- Identify bottlenecks +- Understand project velocity + +## Advanced Features + +### Issue Types + +```bash +bd create "Title" -t task # Standard work item (default) +bd create "Title" -t bug # Defect or problem +bd create "Title" -t feature # New functionality +bd create "Title" -t epic # Large work with subtasks +bd create "Title" -t chore # Maintenance or cleanup +``` + +### Priority Levels + +```bash +bd create "Title" -p 0 # Highest priority (critical) +bd create "Title" -p 1 # High priority +bd create "Title" -p 2 # Normal priority (default) +bd create "Title" -p 3 # Low priority +``` + +### Bulk Operations + +```bash +# Close multiple issues at once +bd close issue-1 issue-2 issue-3 --reason "Completed in sprint 5" + +# Create multiple issues from markdown file +bd create --file issues.md +``` + +### Dependency Visualization + +```bash +# Show full dependency tree for an issue +bd dep tree issue-123 + +# Check for circular dependencies +bd dep cycles +``` + +### Built-in Help + +```bash +# Quick start guide (comprehensive built-in reference) +bd quickstart + +# Command-specific help +bd create --help +bd dep --help +``` + +## JSON Output + +All bd commands support `--json` flag for structured output: + +```bash +bd ready --json +bd show issue-123 --json +bd list --status open --json +bd stats --json +``` + +Use JSON output when you need to parse results programmatically or extract specific fields. 
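Since the `--json` output is plain JSON on stdout, it can be piped straight into `jq` for field extraction. A minimal sketch, assuming `jq` is installed; the field names (`id`, `title`, `status`, `priority`) mirror the examples earlier in this document and are illustrative, so verify against real `--json` output in your project:

```shell
# Illustrative sample of what `bd ready --json` might return
# (hypothetical field names - verify against your real output)
sample='[{"id":"bd-42","title":"Research analytics proposal","status":"in_progress","priority":1},{"id":"bd-77","title":"Fix sync bug","status":"open","priority":0}]'

# Extract just the issue IDs
echo "$sample" | jq -r '.[].id'
# -> bd-42
#    bd-77

# Render "id: title" lines for high-priority work (priority 0-1)
echo "$sample" | jq -r '.[] | select(.priority <= 1) | "\(.id): \(.title)"'

# Against a live project, replace the sample with the command itself:
#   bd ready --json | jq -r '.[].id'
```

The same pattern applies to `bd stats --json` or `bd list --status open --json`; only the filter expression changes.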
+ +## Troubleshooting + +**If bd command not found:** +- Check installation: `bd version` +- Verify PATH includes bd binary location + +**If issues seem lost:** +- Use `bd list` to see all issues +- Filter by status: `bd list --status closed` +- Closed issues remain in database permanently + +**If bd show can't find issue by name:** +- `bd show` requires issue IDs, not issue titles +- Workaround: `bd list | grep -i "search term"` to find ID first +- Then: `bd show issue-id` with the discovered ID +- For glossaries/reference databases where names matter more than IDs, consider using markdown format alongside the database + +**If dependencies seem wrong:** +- Use `bd show issue-id` to see full dependency tree +- Use `bd dep tree issue-id` for visualization +- Dependencies are directional: `bd dep add from-id to-id` means from-id blocks to-id +- See [references/DEPENDENCIES.md](references/DEPENDENCIES.md#common-mistakes) + +**If database seems out of sync:** +- bd auto-syncs JSONL after each operation (5s debounce) +- bd auto-imports JSONL when newer than DB (after git pull) +- Manual operations: `bd export`, `bd import` + +## Reference Files + +Detailed information organized by topic: + +| Reference | Read When | +|-----------|-----------| +| [references/BOUNDARIES.md](references/BOUNDARIES.md) | Need detailed decision criteria for bd vs TodoWrite, or integration patterns | +| [references/CLI_REFERENCE.md](references/CLI_REFERENCE.md) | Need complete command reference, flag details, or examples | +| [references/WORKFLOWS.md](references/WORKFLOWS.md) | Need step-by-step workflows with checklists for common scenarios | +| [references/DEPENDENCIES.md](references/DEPENDENCIES.md) | Need deep understanding of dependency types or relationship patterns | +| [references/ISSUE_CREATION.md](references/ISSUE_CREATION.md) | Need guidance on when to ask vs create issues, issue quality, or design vs acceptance criteria | +| [references/STATIC_DATA.md](references/STATIC_DATA.md) | 
Want to use bd for reference databases, glossaries, or static data instead of work tracking | diff --git a/skills/beads/references/INTEGRATION_PATTERNS.md b/skills/beads/references/INTEGRATION_PATTERNS.md new file mode 100644 index 00000000..366493f0 --- /dev/null +++ b/skills/beads/references/INTEGRATION_PATTERNS.md @@ -0,0 +1,407 @@ +# Integration Patterns with Other Skills + +How bd-issue-tracking integrates with TodoWrite, writing-plans, and other skills for optimal workflow. + +## Contents + +- [TodoWrite Integration](#todowrite-integration) - Temporal layering pattern +- [writing-plans Integration](#writing-plans-integration) - Detailed implementation plans +- [Cross-Skill Workflows](#cross-skill-workflows) - Using multiple skills together +- [Decision Framework](#decision-framework) - When to use which tool + +--- + +## TodoWrite Integration + +**Both tools complement each other at different timescales:** + +### Temporal Layering Pattern + +**TodoWrite** (short-term working memory - this hour): +- Tactical execution: "Review Section 3", "Expand Q&A answers" +- Marked completed as you go +- Present/future tense ("Review", "Expand", "Create") +- Ephemeral: Disappears when session ends + +**Beads** (long-term episodic memory - this week/month): +- Strategic objectives: "Continue work on strategic planning document" +- Key decisions and outcomes in notes field +- Past tense in notes ("COMPLETED", "Discovered", "Blocked by") +- Persistent: Survives compaction and session boundaries + +**Key insight**: TodoWrite = working copy for the current hour. Beads = project journal for the current month. + +### The Handoff Pattern + +1. **Session start**: Read bead β†’ Create TodoWrite items for immediate actions +2. **During work**: Mark TodoWrite items completed as you go +3. **Reach milestone**: Update bead notes with outcomes + context +4. 
**Session end**: TodoWrite disappears, bead survives with enriched notes + +**After compaction**: TodoWrite is gone forever, but bead notes reconstruct what happened. + +### Example: TodoWrite tracks execution, Beads capture meaning + +**TodoWrite (ephemeral execution view):** +``` +[completed] Implement login endpoint +[in_progress] Add password hashing with bcrypt +[pending] Create session middleware +``` + +**Corresponding bead notes (persistent context):** +```bash +bd update issue-123 --notes "COMPLETED: Login endpoint with bcrypt password +hashing (12 rounds). KEY DECISION: Using JWT tokens (not sessions) for stateless +auth - simplifies horizontal scaling. IN PROGRESS: Session middleware implementation. +NEXT: Need user input on token expiry time (1hr vs 24hr trade-off)." +``` + +**What's different**: +- TodoWrite: Task names (what to do) +- Beads: Outcomes and decisions (what was learned, why it matters) + +**Don't duplicate**: TodoWrite tracks execution, Beads captures meaning and context. + +### When to Update Each Tool + +**Update TodoWrite** (frequently): +- Mark task completed as you finish each one +- Add new tasks as you break down work +- Update in_progress when switching tasks + +**Update Beads** (at milestones): +- Completed a significant piece of work +- Made a key decision that needs documentation +- Hit a blocker that pauses progress +- About to ask user for input +- Session token usage > 70% +- End of session + +**Pattern**: TodoWrite changes every few minutes. Beads updates every hour or at natural breakpoints. + +### Full Workflow Example + +**Scenario**: Implement OAuth authentication (multi-session work) + +**Session 1 - Planning**: +```bash +# Create bd issue +bd create "Implement OAuth authentication" -t feature -p 0 --design " +JWT tokens with refresh rotation. +See BOUNDARIES.md for bd vs TodoWrite decision. 
+" + +# Mark in_progress +bd update oauth-1 --status in_progress + +# Create TodoWrite for today's work +TodoWrite: +- [ ] Research OAuth 2.0 refresh token flow +- [ ] Design token schema +- [ ] Set up test environment +``` + +**End of Session 1**: +```bash +# Update bd with outcomes +bd update oauth-1 --notes "COMPLETED: Researched OAuth2 refresh flow. Decided on 7-day refresh tokens. +KEY DECISION: RS256 over HS256 (enables key rotation per security review). +IN PROGRESS: Need to set up test OAuth provider. +NEXT: Configure test provider, then implement token endpoint." + +# TodoWrite disappears when session ends +``` + +**Session 2 - Implementation** (after compaction): +```bash +# Read bd to reconstruct context +bd show oauth-1 +# See: COMPLETED research, NEXT is configure test provider + +# Create fresh TodoWrite from NEXT +TodoWrite: +- [ ] Configure test OAuth provider +- [ ] Implement token endpoint +- [ ] Add basic tests + +# Work proceeds... + +# Update bd at milestone +bd update oauth-1 --notes "COMPLETED: Test provider configured, token endpoint implemented. +TESTS: 5 passing (token generation, validation, expiry). +IN PROGRESS: Adding refresh token rotation. +NEXT: Implement rotation, add rate limiting, security review." +``` + +**For complete decision criteria and boundaries, see:** [BOUNDARIES.md](BOUNDARIES.md) + +--- + +## writing-plans Integration + +**For complex multi-step features**, the design field in bd issues can link to detailed implementation plans that break work into bite-sized RED-GREEN-REFACTOR steps. 
+ +### When to Create Detailed Plans + +**Use detailed plans for:** +- Complex features with multiple components +- Multi-session work requiring systematic breakdown +- Features where TDD discipline adds value (core logic, critical paths) +- Work that benefits from explicit task sequencing + +**Skip detailed plans for:** +- Simple features (single function, straightforward logic) +- Exploratory work (API testing, pattern discovery) +- Infrastructure setup (configuration, wiring) + +**The test:** If you can implement it in one session without a checklist, skip the detailed plan. + +### Using the writing-plans Skill + +When the design field needs a detailed breakdown, reference the **writing-plans** skill: + +**Pattern:** +```bash +# Create issue with high-level design +bd create "Implement OAuth token refresh" --design " +Add JWT refresh token flow with rotation. +See docs/plans/2025-10-23-oauth-refresh-design.md for detailed plan. +" + +# Then use writing-plans skill to create detailed plan +# The skill creates: docs/plans/YYYY-MM-DD-<name>.md +``` + +**Detailed plan structure** (from writing-plans): +- Bite-sized tasks (2-5 minutes each) +- Explicit RED-GREEN-REFACTOR steps per task +- Exact file paths and complete code +- Verification commands with expected output +- Frequent commit points + +**Example task from detailed plan:** +````markdown +### Task 1: Token Refresh Endpoint + +**Files:** +- Create: `src/auth/refresh.py` +- Test: `tests/auth/test_refresh.py` + +**Step 1: Write failing test** +```python +def test_refresh_token_returns_new_access_token(): + refresh_token = create_valid_refresh_token() + response = refresh_endpoint(refresh_token) + assert response.status == 200 + assert response.access_token is not None +``` + +**Step 2: Run test to verify it fails** +Run: `pytest tests/auth/test_refresh.py::test_refresh_token_returns_new_access_token -v` +Expected: FAIL with "refresh_endpoint not defined" + +**Step 3: Implement minimal code** +[... exact implementation ...]
+ +**Step 4: Verify test passes** +[... verification ...] + +**Step 5: Commit** +```bash +git add tests/auth/test_refresh.py src/auth/refresh.py +git commit -m "feat: add token refresh endpoint" +``` +```` + +### Integration with bd Workflow + +**Three-layer structure**: +1. **bd issue**: Strategic objective + high-level design +2. **Detailed plan** (writing-plans): Step-by-step execution guide +3. **TodoWrite**: Current task within the plan + +**During planning phase:** +1. Create bd issue with high-level design +2. If complex: Use writing-plans skill to create detailed plan +3. Link plan in design field: `See docs/plans/YYYY-MM-DD-<name>.md` + +**During execution phase:** +1. Open detailed plan (if exists) +2. Use TodoWrite to track current task within plan +3. Update bd notes at milestones, not per-task +4. Close bd issue when all plan tasks complete + +**Don't duplicate:** Detailed plan = execution steps. bd notes = outcomes and decisions. + +**Example bd notes after using detailed plan:** +```bash +bd update oauth-5 --notes "COMPLETED: Token refresh endpoint (5 tasks from plan: endpoint + rotation + tests) +KEY DECISION: 7-day refresh tokens (vs 30-day) - reduces risk of token theft +TESTS: All 12 tests passing (auth, rotation, expiry, error handling)" +``` + +### When NOT to Use Detailed Plans + +**Red flags:** +- Feature is simple enough to implement in one pass +- Work is exploratory (discovering patterns, testing APIs) +- Infrastructure work (OAuth setup, MCP configuration) +- Would spend more time planning than implementing + +**Rule of thumb:** Use detailed plans when systematic breakdown prevents mistakes, not for ceremony.
+ +**Pattern summary**: +- **Simple feature**: bd issue only +- **Complex feature**: bd issue + TodoWrite +- **Very complex feature**: bd issue + writing-plans + TodoWrite + +--- + +## Cross-Skill Workflows + +### Pattern: Research Document with Strategic Planning + +**Scenario**: User asks "Help me write a strategic planning document for Q4" + +**Tools used**: bd-issue-tracking + developing-strategic-documents skill + +**Workflow**: +1. Create bd issue for tracking: + ```bash + bd create "Q4 strategic planning document" -t task -p 0 + bd update strat-1 --status in_progress + ``` + +2. Use developing-strategic-documents skill for research and writing + +3. Update bd notes at milestones: + ```bash + bd update strat-1 --notes "COMPLETED: Research phase (reviewed 5 competitor docs, 3 internal reports) + KEY DECISION: Focus on market expansion over cost optimization per exec input + IN PROGRESS: Drafting recommendations section + NEXT: Get exec review of draft recommendations before finalizing" + ``` + +4. TodoWrite tracks immediate writing tasks: + ``` + - [ ] Draft recommendation 1: Market expansion + - [ ] Add supporting data from research + - [ ] Create budget estimates + ``` + +**Why this works**: bd preserves context across sessions (document might take days), skill provides writing framework, TodoWrite tracks current work. + +### Pattern: Multi-File Refactoring + +**Scenario**: Refactor authentication system across 8 files + +**Tools used**: bd-issue-tracking + systematic-debugging (if issues found) + +**Workflow**: +1. 
Create epic and subtasks: + ```bash + bd create "Refactor auth system to use JWT" -t epic -p 0 + bd create "Update login endpoint" -t task + bd create "Update token validation" -t task + bd create "Update middleware" -t task + bd create "Update tests" -t task + + # Link hierarchy + bd dep add auth-epic login-1 --type parent-child + bd dep add auth-epic validation-2 --type parent-child + bd dep add auth-epic middleware-3 --type parent-child + bd dep add auth-epic tests-4 --type parent-child + + # Add ordering (from-id blocks to-id) + bd dep add login-1 validation-2 # validation depends on login + bd dep add validation-2 middleware-3 # middleware depends on validation + bd dep add middleware-3 tests-4 # tests depend on middleware + ``` + +2. Work through subtasks in order, using TodoWrite for each: + ``` + Current: login-1 + TodoWrite: + - [ ] Update login route signature + - [ ] Add JWT generation + - [ ] Update tests + - [ ] Verify backward compatibility + ``` + +3. Update bd notes as each completes: + ```bash + bd close login-1 --reason "Updated to JWT. Tests passing. Backward compatible with session auth." + ``` + +4. If issues discovered, use systematic-debugging skill + create blocker issues + +**Why this works**: bd tracks dependencies and progress across files, TodoWrite focuses on current file, skills provide specialized frameworks when needed. + +--- + +## Decision Framework + +### Which Tool for Which Purpose? + +| Need | Tool | Why | +|------|------|-----| +| Track today's execution | TodoWrite | Lightweight, shows current progress | +| Preserve context across sessions | bd | Survives compaction, persistent memory | +| Detailed implementation steps | writing-plans | RED-GREEN-REFACTOR breakdown | +| Research document structure | developing-strategic-documents | Domain-specific framework | +| Debug complex issue | systematic-debugging | Structured debugging protocol | + +### Decision Tree + +``` +Is this work done in this session?
+β”œβ”€ Yes β†’ Use TodoWrite only +└─ No β†’ Use bd + β”œβ”€ Simple feature β†’ bd issue + TodoWrite + └─ Complex feature β†’ bd issue + writing-plans + TodoWrite + +Will conversation history get compacted? +β”œβ”€ Likely β†’ Use bd (context survives) +└─ Unlikely β†’ TodoWrite is sufficient + +Does work have dependencies or blockers? +β”œβ”€ Yes β†’ Use bd (tracks relationships) +└─ No β†’ TodoWrite is sufficient + +Is this specialized domain work? +β”œβ”€ Research/writing β†’ developing-strategic-documents +β”œβ”€ Complex debugging β†’ systematic-debugging +β”œβ”€ Detailed implementation β†’ writing-plans +└─ General tracking β†’ bd + TodoWrite +``` + +### Integration Anti-Patterns + +**Don't**: +- Duplicate TodoWrite tasks into bd notes (different purposes) +- Create bd issues for single-session linear work (use TodoWrite) +- Put detailed implementation steps in bd notes (use writing-plans) +- Update bd after every TodoWrite task (update at milestones) +- Use writing-plans for exploratory work (defeats the purpose) + +**Do**: +- Update bd when changing tools or reaching milestones +- Use TodoWrite as "working copy" of bd's NEXT section +- Link between tools (bd design field β†’ writing-plans file path) +- Choose the right level of formality for the work complexity + +--- + +## Summary + +**Key principle**: Each tool operates at a different timescale and level of detail. + +- **TodoWrite**: Minutes to hours (current execution) +- **bd**: Hours to weeks (persistent context) +- **writing-plans**: Days to weeks (detailed breakdown) +- **Other skills**: As needed (domain frameworks) + +**Integration pattern**: Use the lightest tool sufficient for the task, add heavier tools only when complexity demands it. 
+ +**For complete boundaries and decision criteria, see:** [BOUNDARIES.md](BOUNDARIES.md) diff --git a/skills/beads/references/MOLECULES.md b/skills/beads/references/MOLECULES.md new file mode 100644 index 00000000..9484b832 --- /dev/null +++ b/skills/beads/references/MOLECULES.md @@ -0,0 +1,354 @@ +# Molecules and Wisps Reference + +This reference covers bd's molecular chemistry system for reusable work templates and ephemeral workflows. + +## The Chemistry Metaphor + +bd v0.34.0 introduces a chemistry-inspired workflow system: + +| Phase | Name | Storage | Synced? | Use Case | +|-------|------|---------|---------|----------| +| **Solid** | Proto | `.beads/` | Yes | Reusable template (epic with `template` label) | +| **Liquid** | Mol | `.beads/` | Yes | Persistent instance (real issues from template) | +| **Vapor** | Wisp | `.beads-wisp/` | No | Ephemeral instance (operational work, no audit trail) | + +**Phase transitions:** +- `spawn` / `pour`: Solid (proto) β†’ Liquid (mol) +- `wisp create`: Solid (proto) β†’ Vapor (wisp) +- `squash`: Vapor (wisp) β†’ Digest (permanent summary) +- `burn`: Vapor (wisp) β†’ Nothing (deleted, no trace) +- `distill`: Liquid (ad-hoc epic) β†’ Solid (proto) + +## When to Use Molecules + +### Use Protos/Mols When: +- **Repeatable patterns** - Same workflow structure used multiple times (releases, reviews, onboarding) +- **Team knowledge capture** - Encoding tribal knowledge as executable templates +- **Audit trail matters** - Work that needs to be tracked and reviewed later +- **Cross-session persistence** - Work spanning multiple days/sessions + +### Use Wisps When: +- **Operational loops** - Patrol cycles, health checks, routine monitoring +- **One-shot orchestration** - Temporary coordination that shouldn't clutter history +- **Diagnostic runs** - Debugging workflows with no archival value +- **High-frequency ephemeral work** - Would create noise in permanent database + +**Key insight:** Wisps prevent database bloat from routine 
operations while still providing structure during execution. + +--- + +## Proto Management + +### Creating a Proto + +Protos are epics with the `template` label. Create manually or distill from existing work: + +```bash +# Manual creation +bd create "Release Workflow" --type epic --label template +bd create "Run tests for {{component}}" --type task +bd dep add epic-id task-id --type parent-child + +# Distill from ad-hoc work (extracts template from existing epic) +bd mol distill bd-abc123 --as "Release Workflow" --var version=1.0.0 +``` + +**Proto naming convention:** Use `mol-` prefix for clarity (e.g., `mol-release`, `mol-patrol`). + +### Listing Protos + +```bash +bd mol catalog # List all protos +bd mol catalog --json # Machine-readable +``` + +### Viewing Proto Structure + +```bash +bd mol show mol-release # Show template structure and variables +bd mol show mol-release --json # Machine-readable +``` + +--- + +## Spawning Molecules + +### Basic Spawn (Creates Wisp by Default) + +```bash +bd mol spawn mol-patrol # Creates wisp (ephemeral) +bd mol spawn mol-feature --pour # Creates mol (persistent) +bd mol spawn mol-release --var version=2.0 # With variable substitution +``` + +**Chemistry shortcuts:** +```bash +bd pour mol-feature # Shortcut for spawn --pour +bd wisp create mol-patrol # Explicit wisp creation +``` + +### Spawn with Immediate Execution + +```bash +bd mol run mol-release --var version=2.0 +``` + +`bd mol run` does three things: +1. Spawns the molecule (persistent) +2. Assigns root issue to caller +3. Pins root issue for session recovery + +**Use `mol run` when:** Starting durable work that should survive crashes. The pin ensures `bd ready` shows the work after restart.
+ +### Spawn with Attachments + +Attach additional protos in a single command: + +```bash +bd mol spawn mol-feature --attach mol-testing --var name=auth +# Spawns mol-feature, then spawns mol-testing and bonds them +``` + +**Attach types:** +- `sequential` (default) - Attached runs after primary completes +- `parallel` - Attached runs alongside primary +- `conditional` - Attached runs only if primary fails + +```bash +bd mol spawn mol-deploy --attach mol-rollback --attach-type conditional +``` + +--- + +## Bonding Molecules + +### Bond Types + +```bash +bd mol bond A B # Sequential: B runs after A +bd mol bond A B --type parallel # Parallel: B runs alongside A +bd mol bond A B --type conditional # Conditional: B runs if A fails +``` + +### Operand Combinations + +| A | B | Result | +|---|---|--------| +| proto | proto | Compound proto (reusable template) | +| proto | mol | Spawn proto, attach to molecule | +| mol | proto | Spawn proto, attach to molecule | +| mol | mol | Join into compound molecule | + +### Phase Control in Bonds + +By default, spawned protos inherit target's phase. Override with flags: + +```bash +# Found bug during wisp patrol? Persist it: +bd mol bond mol-critical-bug wisp-patrol --pour + +# Need ephemeral diagnostic on persistent feature? 
+bd mol bond mol-temp-check bd-feature --wisp +``` + +### Custom Compound Names + +```bash +bd mol bond mol-feature mol-deploy --as "Feature with Deploy" +``` + +--- + +## Wisp Lifecycle + +### Creating Wisps + +```bash +bd wisp create mol-patrol # From proto +bd mol spawn mol-patrol # Same (spawn defaults to wisp) +bd mol spawn mol-check --var target=db # With variables +``` + +### Listing Wisps + +```bash +bd wisp list # List all wisps +bd wisp list --json # Machine-readable +``` + +### Ending Wisps + +**Option 1: Squash (compress to digest)** +```bash +bd mol squash wisp-abc123 # Auto-generate summary +bd mol squash wisp-abc123 --summary "Completed patrol" # Agent-provided summary +bd mol squash wisp-abc123 --keep-children # Keep children, just create digest +bd mol squash wisp-abc123 --dry-run # Preview +``` + +Squash creates a permanent digest issue summarizing the wisp's work, then deletes the wisp children. + +**Option 2: Burn (delete without trace)** +```bash +bd mol burn wisp-abc123 # Delete wisp, no digest +``` + +Use burn for routine work with no archival value. + +### Garbage Collection + +```bash +bd wisp gc # Clean up orphaned wisps +``` + +--- + +## Distilling Protos + +Extract a reusable template from ad-hoc work: + +```bash +bd mol distill bd-o5xe --as "Release Workflow" +bd mol distill bd-abc --var feature_name=auth-refactor --var version=1.0.0 +``` + +**What distill does:** +1. Loads existing epic and all children +2. Clones structure as new proto (adds `template` label) +3. 
Replaces concrete values with `{{variable}}` placeholders + +**Variable syntax (both work):** +```bash +--var branch=feature-auth # variable=value (recommended) +--var feature-auth=branch # value=variable (auto-detected) +``` + +**Use cases:** +- Team develops good workflow organically, wants to reuse it +- Capture tribal knowledge as executable templates +- Create starting point for similar future work + +--- + +## Cross-Project Dependencies + +### Concept + +Projects can depend on capabilities shipped by other projects: + +```bash +# Project A ships a capability +bd ship auth-api # Marks capability as available + +# Project B depends on it +bd dep add bd-123 external:project-a:auth-api +``` + +### Shipping Capabilities + +```bash +bd ship <capability> # Ship capability (requires closed issue) +bd ship <capability> --force # Ship even if issue not closed +bd ship <capability> --dry-run # Preview +``` + +**How it works:** +1. Find issue with `export:<capability>` label +2. Validate issue is closed +3. Add `provides:<capability>` label + +### Depending on External Capabilities + +```bash +bd dep add <issue-id> external:<project>:<capability> +``` + +The dependency is satisfied when the external project has a closed issue with `provides:<capability>` label. + +**`bd ready` respects external deps:** Issues blocked by unsatisfied external dependencies won't appear in ready list. + +--- + +## Common Patterns + +### Pattern: Weekly Review Proto + +```bash +# Create proto +bd create "Weekly Review" --type epic --label template +bd create "Review open issues" --type task +bd create "Update priorities" --type task +bd create "Archive stale work" --type task +# Link as children... + +# Use each week +bd mol spawn mol-weekly-review --pour +``` + +### Pattern: Ephemeral Patrol Cycle + +```bash +# Patrol proto exists +bd wisp create mol-patrol + +# Execute patrol work...
+ +# End patrol +bd mol squash wisp-abc123 --summary "Patrol complete: 3 issues found, 2 resolved" +``` + +### Pattern: Feature with Rollback + +```bash +bd mol spawn mol-deploy --attach mol-rollback --attach-type conditional +# If deploy fails, rollback automatically becomes unblocked +``` + +### Pattern: Capture Tribal Knowledge + +```bash +# After completing a good workflow organically +bd mol distill bd-release-epic --as "Release Process" --var version=X.Y.Z +# Now team can: bd mol spawn mol-release-process --var version=2.0.0 +``` + +--- + +## CLI Quick Reference + +| Command | Purpose | +|---------|---------| +| `bd mol catalog` | List available protos | +| `bd mol show <id>` | Show proto/mol structure | +| `bd mol spawn <proto>` | Create wisp from proto (default) | +| `bd mol spawn <proto> --pour` | Create persistent mol from proto | +| `bd mol run <proto>` | Spawn + assign + pin (durable execution) | +| `bd mol bond <a> <b>` | Combine protos or molecules | +| `bd mol distill <epic-id>` | Extract proto from ad-hoc work | +| `bd mol squash <wisp-id>` | Compress wisp children to digest | +| `bd mol burn <wisp-id>` | Delete wisp without trace | +| `bd pour <proto>` | Shortcut for `spawn --pour` | +| `bd wisp create <proto>` | Create ephemeral wisp | +| `bd wisp list` | List all wisps | +| `bd wisp gc` | Garbage collect orphaned wisps | +| `bd ship <capability>` | Publish capability for cross-project deps | + +--- + +## Troubleshooting + +**"Proto not found"** +- Check `bd mol catalog` for available protos +- Protos need `template` label on the epic + +**"Variable not substituted"** +- Use `--var key=value` syntax +- Check proto for `{{key}}` placeholders with `bd mol show` + +**"Wisp commands fail"** +- Wisps stored in `.beads-wisp/` (separate from `.beads/`) +- Check `bd wisp list` for active wisps + +**"External dependency not satisfied"** +- Target project must have closed issue with `provides:<capability>` label +- Use `bd ship <capability>` in target project first diff --git a/skills/beads/references/PATTERNS.md b/skills/beads/references/PATTERNS.md new file mode
100644 index 00000000..fb1e0849 --- /dev/null +++ b/skills/beads/references/PATTERNS.md @@ -0,0 +1,341 @@ +# Common Usage Patterns + +Practical patterns for using bd effectively across different scenarios. + +## Contents + +- [Knowledge Work Session](#knowledge-work-session) - Resume long-running research or writing tasks +- [Side Quest Handling](#side-quest-handling) - Capture discovered work without losing context +- [Multi-Session Project Resume](#multi-session-project-resume) - Pick up work after time away +- [Status Transitions](#status-transitions) - When to change issue status +- [Compaction Recovery](#compaction-recovery) - Resume after conversation history is lost +- [Issue Closure](#issue-closure) - Documenting completion properly + +--- + +## Knowledge Work Session + +**Scenario**: User asks "Help me write a proposal for expanding the analytics platform" + +**What you see**: +```bash +$ bd ready +# Returns: bd-42 "Research analytics platform expansion proposal" (in_progress) + +$ bd show bd-42 +Notes: "COMPLETED: Reviewed current stack (Mixpanel, Amplitude) +IN PROGRESS: Drafting cost-benefit analysis section +NEXT: Need user input on budget constraints before finalizing recommendations" +``` + +**What you do**: +1. Read notes to understand current state +2. Create TodoWrite for immediate work: + ``` + - [ ] Draft cost-benefit analysis + - [ ] Ask user about budget constraints + - [ ] Finalize recommendations + ``` +3. Work on tasks, mark TodoWrite items completed +4. At milestone, update bd notes: + ```bash + bd update bd-42 --notes "COMPLETED: Cost-benefit analysis drafted. + KEY DECISION: User confirmed $50k budget cap - ruled out enterprise options. + IN PROGRESS: Finalizing recommendations (Posthog + custom ETL). + NEXT: Get user review of draft before closing issue." + ``` + +**Outcome**: TodoWrite disappears at session end, but bd notes preserve context for next session. 
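The notes-to-TodoWrite handoff above can be scripted. A minimal sketch, assuming notes follow the COMPLETED / IN PROGRESS / NEXT convention shown in this pattern; the `bd show --json | jq -r '.notes'` path in the comment is an assumption about bd's JSON output, so verify it against your bd version:

```shell
# Seed the next session's todo list from the NEXT line of an issue's notes.
# Sample notes are inlined here; in a live repo they would come from something like:
#   notes=$(bd show bd-42 --json | jq -r '.notes')   # field name assumed
notes='COMPLETED: Cost-benefit analysis drafted.
IN PROGRESS: Finalizing recommendations (Posthog + custom ETL).
NEXT: Get user review of draft before closing issue.'

# Print only the NEXT line, stripped of its prefix.
printf '%s\n' "$notes" | sed -n 's/^NEXT: //p'
```

This only pays off if each milestone update keeps the NEXT line current, which is exactly what the pattern asks for.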
+ +**Key insight**: Notes field captures the "why" and context, TodoWrite tracks the "doing" right now. + +--- + +## Side Quest Handling + +**Scenario**: During main task, discover a problem that needs attention. + +**Pattern**: +1. Create issue immediately: `bd create "Found: inventory system needs refactoring"` +2. Link provenance: `bd dep add main-task new-issue --type discovered-from` +3. Assess urgency: blocker or can defer? +4. **If blocker**: + - `bd update main-task --status blocked` + - `bd update new-issue --status in_progress` + - Work on the blocker +5. **If deferrable**: + - Note in new issue's design field + - Continue main task + - New issue persists for later + +**Why this works**: Captures context immediately (before forgetting), preserves relationship to main work, allows flexible prioritization. + +**Example (with MCP):** + +Working on "Implement checkout flow" (checkout-1), discover payment validation security hole: + +1. Create bug issue: `mcp__plugin_beads_beads__create` with `{title: "Fix: payment validation bypasses card expiry check", type: "bug", priority: 0}` +2. Link discovery: `mcp__plugin_beads_beads__dep` with `{from_issue: "checkout-1", to_issue: "payment-bug-2", type: "discovered-from"}` +3. Block current work: `mcp__plugin_beads_beads__update` with `{issue_id: "checkout-1", status: "blocked", notes: "Blocked by payment-bug-2: security hole in validation"}` +4. Start new work: `mcp__plugin_beads_beads__update` with `{issue_id: "payment-bug-2", status: "in_progress"}` + +(CLI: `bd create "Fix: payment validation..." -t bug -p 0` then `bd dep add` and `bd update` commands) + +--- + +## Multi-Session Project Resume + +**Scenario**: Starting work after days or weeks away from a project. + +**Pattern (with MCP)**: +1. **Check what's ready**: Use `mcp__plugin_beads_beads__ready` to see available work +2. **Check what's stuck**: Use `mcp__plugin_beads_beads__blocked` to understand blockers +3. 
**Check recent progress**: Use `mcp__plugin_beads_beads__list` with `status:"closed"` to see completions +4. **Read detailed context**: Use `mcp__plugin_beads_beads__show` for the issue you'll work on +5. **Update status**: Use `mcp__plugin_beads_beads__update` with `status:"in_progress"` +6. **Begin work**: Create TodoWrite from notes field's NEXT section + +(CLI: `bd ready`, `bd blocked`, `bd list --status closed`, `bd show `, `bd update --status in_progress`) + +**Example**: +```bash +$ bd ready +Ready to work on (3): + auth-5: "Add OAuth refresh token rotation" (priority: 0) + api-12: "Document REST API endpoints" (priority: 1) + test-8: "Add integration tests for payment flow" (priority: 2) + +$ bd show auth-5 +Title: Add OAuth refresh token rotation +Status: open +Priority: 0 (critical) + +Notes: +COMPLETED: Basic JWT auth working +IN PROGRESS: Need to add token refresh +NEXT: Implement rotation per OWASP guidelines (7-day refresh tokens) +BLOCKER: None - ready to proceed + +$ bd update auth-5 --status in_progress +# Now create TodoWrite based on NEXT section +``` + +**For complete session start workflow with checklist, see:** [WORKFLOWS.md](WORKFLOWS.md#session-start) + +--- + +## Status Transitions + +Understanding when to change issue status. 
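The status lifecycle described in this section can also be encoded as a small script-side sanity check, useful before batch-updating issues. A hedged sketch: the allowed-transition list mirrors the lifecycle and transition examples below, and bd itself may accept transitions this table rejects:

```shell
# Script-side sanity check for status transitions.
# Mirrors the documented lifecycle: open -> in_progress -> closed,
# open/in_progress -> blocked, and blocked -> in_progress on unblocking.
# bd itself may be more permissive; this is only a guard for batch scripts.
allowed="open:in_progress open:blocked in_progress:blocked in_progress:closed blocked:in_progress"

can_transition() {
  case " $allowed " in
    *" $1:$2 "*) return 0 ;;
    *) return 1 ;;
  esac
}

can_transition open in_progress && echo "open -> in_progress: ok"
can_transition closed in_progress || echo "closed -> in_progress: rejected"
```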
+ +### Status Lifecycle + +``` +open → in_progress → closed + ↓ ↓ +blocked blocked +``` + +### When to Use Each Status + +**open** (default): +- Issue created but not started +- Waiting for dependencies to clear +- Planned work not yet begun +- **Command**: Issues start as `open` by default + +**in_progress**: +- Actively working on this issue right now +- Has been read and understood +- Making commits or changes related to this +- **Command**: `bd update issue-id --status in_progress` +- **When**: Start of work session on this issue + +**blocked**: +- Cannot proceed due to external blocker +- Waiting for user input/decision +- Dependency not completed +- Technical blocker discovered +- **Command**: `bd update issue-id --status blocked` +- **When**: Hit a blocker, capture what blocks you in notes +- **Note**: Document blocker in notes field: "BLOCKER: Waiting for API key from ops team" + +**closed**: +- Work completed and verified +- Tests passing +- Acceptance criteria met +- **Command**: `bd close issue-id --reason "Implemented with tests passing"` +- **When**: All work done, ready to move on +- **Note**: Issues remain in database, just marked complete + +### Transition Examples + +**Starting work**: +```bash +bd ready # See what's available +bd update auth-5 --status in_progress +# Begin working +``` + +**Hit a blocker**: +```bash +bd update auth-5 --status blocked --notes "BLOCKER: Need OAuth client ID from product team. Emailed Jane on 2025-10-23." +# Switch to different issue or create new work +``` + +**Unblocking**: +```bash +# Once blocker resolved +bd update auth-5 --status in_progress --notes "UNBLOCKED: Received OAuth credentials. Resuming implementation." +``` + +**Completing**: +```bash +bd close auth-5 --reason "Implemented OAuth refresh with 7-day rotation. Tests passing. PR #42 merged." +``` + +--- + +## Compaction Recovery + +**Scenario**: Conversation history has been compacted. You need to resume work with zero conversation context. 
+ +**What survives compaction**: +- All bd issues and notes +- Complete work history +- Dependencies and relationships + +**What's lost**: +- Conversation history +- TodoWrite lists +- Recent discussion + +### Recovery Pattern + +1. **Check in-progress work**: + ```bash + bd list --status in_progress + ``` + +2. **Read notes for context**: + ```bash + bd show issue-id + # Read notes field - should explain current state + ``` + +3. **Reconstruct TodoWrite from notes**: + - COMPLETED section: Done, skip + - IN PROGRESS section: Current state + - NEXT section: **This becomes your TodoWrite list** + +4. **Report to user**: + ``` + "From bd notes: [summary of COMPLETED]. Currently [IN PROGRESS]. + Next steps: [from NEXT]. Should I continue with that?" + ``` + +### Example Recovery + +**bd show returns**: +``` +Issue: bd-42 "OAuth refresh token implementation" +Status: in_progress +Notes: +COMPLETED: Basic JWT validation working (RS256, 1hr access tokens) +KEY DECISION: 7-day refresh tokens per security review +IN PROGRESS: Implementing token rotation endpoint +NEXT: Add rate limiting (5 refresh attempts per 15min), then write tests +BLOCKER: None +``` + +**Recovery actions**: +1. Read notes, understand context +2. Create TodoWrite: + ``` + - [ ] Implement rate limiting on refresh endpoint + - [ ] Write tests for token rotation + - [ ] Verify security guidelines met + ``` +3. Report: "From notes: JWT validation is done with 7-day refresh tokens. Currently implementing rotation endpoint. Next: add rate limiting and tests. Should I continue?" +4. Resume work based on user response + +**For complete compaction survival workflow, see:** [WORKFLOWS.md](WORKFLOWS.md#compaction-survival) + +--- + +## Issue Closure + +**Scenario**: Work is complete. How to close properly? 
+ +### Closure Checklist + +Before closing, verify: +- [ ] **Acceptance criteria met**: All items checked off +- [ ] **Tests passing**: If applicable +- [ ] **Documentation updated**: If needed +- [ ] **Follow-up work filed**: New issues created for discovered work +- [ ] **Key decisions documented**: In notes field + +### Closure Pattern + +**Minimal closure** (simple tasks): +```bash +bd close task-123 --reason "Implemented feature X" +``` + +**Detailed closure** (complex work): +```bash +# Update notes with final state +bd update task-123 --notes "COMPLETED: OAuth refresh with 7-day rotation +KEY DECISION: RS256 over HS256 per security review +TESTS: 12 tests passing (auth, rotation, expiry, errors) +FOLLOW-UP: Filed perf-99 for token cleanup job" + +# Close with summary +bd close task-123 --reason "Implemented OAuth refresh token rotation with rate limiting. All security guidelines met. Tests passing." +``` + +### Documenting Resolution (Outcome vs Design) + +For issues where the outcome differed from initial design, use `--notes` to document what actually happened: + +```bash +# Initial design was hypothesis - document actual outcome in notes +bd update bug-456 --notes "RESOLUTION: Not a bug - behavior is correct per OAuth spec. Documentation was unclear. Filed docs-789 to clarify auth flow in user guide." + +bd close bug-456 --reason "Resolved: documentation issue, not bug" +``` + +**Pattern**: Design field = initial approach. Notes field = what actually happened (prefix with RESOLUTION: for clarity). + +### Discovering Follow-up Work + +When closing reveals new work: + +```bash +# While closing auth feature, realize performance needs work +bd create "Optimize token lookup query" -t task -p 2 + +# Link the provenance +bd dep add auth-5 perf-99 --type discovered-from + +# Now close original +bd close auth-5 --reason "OAuth refresh implemented. Discovered perf optimization needed (filed perf-99)." 
+``` + +**Why link with discovered-from**: Preserves the context of how you found the new work. Future you will appreciate knowing it came from the auth implementation. + +--- + +## Pattern Summary + +| Pattern | When to Use | Key Command | Preserves | +|---------|-------------|-------------|-----------| +| **Knowledge Work** | Long-running research, writing | `bd update --notes` | Context across sessions | +| **Side Quest** | Discovered during other work | `bd dep add --type discovered-from` | Relationship to original | +| **Multi-Session Resume** | Returning after time away | `bd ready`, `bd show` | Full project state | +| **Status Transitions** | Tracking work state | `bd update --status` | Current state | +| **Compaction Recovery** | History lost | Read notes field | All context in notes | +| **Issue Closure** | Completing work | `bd close --reason` | Decisions and outcomes | + +**For detailed workflows with step-by-step checklists, see:** [WORKFLOWS.md](WORKFLOWS.md) diff --git a/skills/beads/references/TROUBLESHOOTING.md b/skills/beads/references/TROUBLESHOOTING.md new file mode 100644 index 00000000..2043c9c7 --- /dev/null +++ b/skills/beads/references/TROUBLESHOOTING.md @@ -0,0 +1,489 @@ +# Troubleshooting Guide + +Common issues encountered when using bd and how to resolve them. + +## Interface-Specific Troubleshooting + +**MCP tools (local environment):** +- MCP tools require bd daemon running +- Check daemon status: `bd daemon --status` (CLI) +- If MCP tools fail, verify daemon is running and restart if needed +- MCP tools automatically use daemon mode (no --no-daemon option) + +**CLI (web environment or local):** +- CLI can use daemon mode (default) or direct mode (--no-daemon) +- Direct mode has 3-5 second sync delay +- Web environment: Install via `npm install -g @beads/cli` +- Web environment: Initialize via `bd init ` before first use + +**Most issues below apply to both interfaces** - the underlying database and daemon behavior is the same. 
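The daemon check mentioned above can be scripted into a session-start hook. A minimal sketch using `pgrep` as a fallback; `bd daemon --status` (quoted from this guide) is the first choice when bd is on your PATH:

```shell
# Fallback daemon check for scripts that can't assume bd is installed.
# The [d] in the pattern keeps pgrep from matching this script's own command line.
if pgrep -f "bd [d]aemon" > /dev/null 2>&1; then
  echo "bd daemon appears to be running"
else
  echo "no bd daemon found - start one with: bd daemon"
fi
```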
+ +## Contents + +- [Dependencies Not Persisting](#dependencies-not-persisting) +- [Status Updates Not Visible](#status-updates-not-visible) +- [Daemon Won't Start](#daemon-wont-start) +- [Database Errors on Cloud Storage](#database-errors-on-cloud-storage) +- [JSONL File Not Created](#jsonl-file-not-created) +- [Version Requirements](#version-requirements) + +--- + +## Dependencies Not Persisting + +### Symptom +```bash +bd dep add issue-2 issue-1 --type blocks +# Reports: ✓ Added dependency +bd show issue-2 +# Shows: No dependencies listed +``` + +### Root Cause (Fixed in v0.15.0+) +This was a **bug in bd** (GitHub issue #101) where the daemon ignored dependencies during issue creation. **Fixed in bd v0.15.0** (Oct 21, 2025). + +### Resolution + +**1. Check your bd version:** +```bash +bd version +``` + +**2. If version < 0.15.0, update bd:** +```bash +# Via Homebrew (macOS/Linux) +brew upgrade bd + +# Via go install +go install github.com/steveyegge/beads/cmd/bd@latest + +# Via package manager +# See https://github.com/steveyegge/beads#installing +``` + +**3. Restart daemon after upgrade:** +```bash +pkill -f "bd daemon" # Kill old daemon +bd daemon # Start new daemon with fix +``` + +**4. Test dependency creation:** +```bash +bd create "Test A" -t task +bd create "Test B" -t task +bd dep add --type blocks +bd show +# Should show: "Depends on (1): → " +``` + +### Still Not Working? + +If dependencies still don't persist after updating: + +1. **Check daemon is running:** + ```bash + ps aux | grep "bd daemon" + ``` + +2. **Try without --no-daemon flag:** + ```bash + # Instead of: bd --no-daemon dep add ... + # Use: bd dep add ... (let daemon handle it) + ``` + +3. **Check JSONL file:** + ```bash + cat .beads/issues.jsonl | jq '.dependencies' + # Should show dependency array + ``` + +4. 
**Report to beads GitHub** with: + - `bd version` output + - Operating system + - Reproducible test case + +--- + +## Status Updates Not Visible + +### Symptom +```bash +bd --no-daemon update issue-1 --status in_progress +# Reports: ✓ Updated issue: issue-1 +bd show issue-1 +# Shows: Status: open (not in_progress!) +``` + +### Root Cause +This is **expected behavior**, not a bug. Understanding it requires knowing bd's architecture: + +**BD Architecture:** +- **JSONL files** (`.beads/issues.jsonl`): Human-readable export format +- **SQLite database** (`.beads/*.db`): Source of truth for queries +- **Daemon**: Syncs JSONL ↔ SQLite periodically (default interval: 5m; see `--interval`) + +**What `--no-daemon` actually does:** +- **Writes**: Go directly to the JSONL file +- **Reads**: Still come from the SQLite database +- **Sync delay**: The daemon imports JSONL → SQLite periodically + +### Resolution + +**Option 1: Use daemon mode (recommended)** +```bash +# Don't use --no-daemon for CRUD operations +bd update issue-1 --status in_progress +bd show issue-1 +# ✓ Status reflects immediately +``` + +**Option 2: Wait for sync (if using --no-daemon)** +```bash +bd --no-daemon update issue-1 --status in_progress +# Wait 3-5 seconds for daemon to sync +sleep 5 +bd show issue-1 +# ✓ Status should reflect now +``` + +**Option 3: Manual sync trigger** +```bash +bd --no-daemon update issue-1 --status in_progress +# Trigger sync by exporting/importing +bd export > /dev/null 2>&1 # Forces sync +bd show issue-1 +``` + +### When to Use `--no-daemon` + +**Use --no-daemon for:** +- Batch import scripts (performance) +- CI/CD environments (no persistent daemon) +- Testing/debugging + +**Don't use --no-daemon for:** +- Interactive development +- Real-time status checks +- When you need immediate query results + +--- + +## Daemon Won't Start + +### Symptom +```bash +bd daemon +# Error: not in a git repository +# Hint: run 'git init' to initialize a repository +``` + +### Root Cause +bd daemon requires a **git repository** because it 
uses git for: +- Syncing issues to a git remote (optional) +- Version control of `.beads/*.jsonl` files +- Commit history of issue changes + +### Resolution + +**Initialize git repository:** +```bash +# In your project directory +git init +bd daemon +# ✓ Daemon should start now +``` + +**Prevent git remote operations:** +```bash +# If you don't want the daemon to pull from the remote +bd daemon --global=false +``` + +**Flags:** +- `--global=false`: Don't sync with git remote +- `--interval=10m`: Custom sync interval (default: 5m) +- `--auto-commit=true`: Auto-commit JSONL changes + +--- + +## Database Errors on Cloud Storage + +### Symptom +```bash +# In directory: /Users/name/Google Drive/... +bd init myproject +# Error: disk I/O error (522) +# OR: Error: database is locked +``` + +### Root Cause +**SQLite incompatibility with cloud sync filesystems.** + +Cloud services (Google Drive, Dropbox, OneDrive, iCloud) don't support: +- POSIX file locking (required by SQLite) +- Consistent file handles across sync operations +- Atomic write operations + +This is a **known SQLite limitation**, not a bd bug. + +### Resolution + +**Move bd database to local filesystem:** + +```bash +# Wrong location (cloud sync) +~/Google Drive/My Work/project/.beads/ # ✗ Will fail + +# Correct location (local disk) +~/Repos/project/.beads/ # ✓ Works reliably +~/Projects/project/.beads/ # ✓ Works reliably +``` + +**Migration steps:** + +1. **Move project to local disk:** + ```bash + mv ~/Google\ Drive/project ~/Repos/project + cd ~/Repos/project + ``` + +2. **Re-initialize bd (if needed):** + ```bash + bd init myproject + ``` + +3. 
**Import existing issues (if you had JSONL export):** + ```bash + bd import < issues-backup.jsonl + ``` + +**Alternative: Use global `~/.beads/` database** + +If you must keep work on cloud storage: +```bash +# Don't initialize bd in cloud-synced directory +# Use global database instead +cd ~/Google\ Drive/project +bd create "My task" +# Uses ~/.beads/default.db (on local disk) +``` + +**Workaround limitations:** +- No per-project database isolation +- All projects share same issue prefix +- Manual tracking of which issues belong to which project +%0A+**Recommendation:** Keep code/projects on local disk, sync final deliverables to cloud. + +--- + +## JSONL File Not Created + +### Symptom +```bash +bd init myproject +bd --no-daemon create "Test" -t task +ls .beads/ +# Only shows: .gitignore, myproject.db +# Missing: issues.jsonl +``` + +### Root Cause +**JSONL initialization coupling.** The `issues.jsonl` file is created by the daemon on first startup, not by `bd init`. + +### Resolution + +**Start daemon once to initialize JSONL:** +```bash +bd daemon --global=false & +# Wait for initialization +sleep 2 + +# Now JSONL file exists +ls .beads/issues.jsonl +# ✓ File created + +# Subsequent --no-daemon operations work +bd --no-daemon create "Task 1" -t task +cat .beads/issues.jsonl +# ✓ Shows task data +``` + +**Why this matters:** +- Daemon owns the JSONL export format +- First daemon run creates empty JSONL skeleton +- `--no-daemon` operations assume JSONL exists + +**Pattern for batch scripts:** +```bash +#!/bin/bash +# Batch import script + +bd init myproject +bd daemon --global=false & # Start daemon +sleep 3 # Wait for initialization + +# Now safe to use --no-daemon for performance +for item in "${items[@]}"; do + bd --no-daemon create "$item" -t feature +done + +# Daemon syncs JSONL → SQLite in background +sleep 5 # Wait for final sync + +# Query results +bd stats +``` + +--- + +## Version Requirements + +### Minimum Version for Dependency Persistence + 
+**Issue:** Dependencies created but don't appear in `bd show` or dependency tree. + +**Fix:** Upgrade to **bd v0.15.0+** (released Oct 2025) + +**Check version:** +```bash +bd version +# Should show: bd version 0.15.0 or higher +``` + +**If using MCP plugin:** +```bash +# Update Claude Code beads plugin +claude plugin update beads +``` + +### Breaking Changes + +**v0.15.0:** +- MCP parameter names changed from `from_id/to_id` to `issue_id/depends_on_id` +- Dependency creation now persists correctly in daemon mode + +**v0.14.0:** +- Daemon architecture changes +- Auto-sync JSONL behavior introduced + +--- + +## MCP-Specific Issues + +### Dependencies Created Backwards + +**Symptom:** +Using MCP tools, dependencies end up reversed from intended. + +**Example:** +```python +# Want: "task-2 depends on task-1" (task-1 blocks task-2) +beads_add_dependency(issue_id="task-1", depends_on_id="task-2") +# Wrong! This makes task-1 depend on task-2 +``` + +**Root Cause:** +Parameter confusion between old (`from_id/to_id`) and new (`issue_id/depends_on_id`) names. + +**Resolution:** + +**Correct MCP usage (bd v0.15.0+):** +```python +# Correct: task-2 depends on task-1 +beads_add_dependency( + issue_id="task-2", # Issue that has dependency + depends_on_id="task-1", # Issue that must complete first + dep_type="blocks" +) +``` + +**Mnemonic:** +- `issue_id`: The issue that **waits** +- `depends_on_id`: The issue that **must finish first** + +**Equivalent CLI:** +```bash +bd dep add task-2 task-1 --type blocks +# Meaning: task-2 depends on task-1 +``` + +**Verify dependency direction:** +```bash +bd show task-2 +# Should show: "Depends on: task-1" +# Not the other way around +``` + +--- + +## Getting Help + +### Debug Checklist + +Before reporting issues, collect this information: + +```bash +# 1. Version +bd version + +# 2. Daemon status +ps aux | grep "bd daemon" + +# 3. Database location +echo $PWD/.beads/*.db +ls -la .beads/ + +# 4. 
Git status +git status +git log --oneline -1 + +# 5. JSONL contents (for dependency issues) +cat .beads/issues.jsonl | jq '.' | head -50 +``` + +### Report to beads GitHub + +If problems persist: + +1. **Check existing issues:** https://github.com/steveyegge/beads/issues +2. **Create new issue** with: + - bd version (`bd version`) + - Operating system + - Debug checklist output (above) + - Minimal reproducible example + - Expected vs actual behavior + +### Claude Code Skill Issues + +If the **bd-issue-tracking skill** provides incorrect guidance: + +1. **Check skill version:** + ```bash + ls -la ~/.claude/skills/bd-issue-tracking/ + head -20 ~/.claude/skills/bd-issue-tracking/SKILL.md + ``` + +2. **Report via Claude Code feedback** or user's GitHub + +--- + +## Quick Reference: Common Fixes + +| Problem | Quick Fix | +|---------|-----------| +| Dependencies not saving | Upgrade to bd v0.15.0+ | +| Status updates lag | Use daemon mode (not `--no-daemon`) | +| Daemon won't start | Run `git init` first | +| Database errors on Google Drive | Move to local filesystem | +| JSONL file missing | Start daemon once: `bd daemon &` | +| Dependencies backwards (MCP) | Update to v0.15.0+, use `issue_id/depends_on_id` correctly | + +--- + +## Related Documentation + +- [CLI Reference](CLI_REFERENCE.md) - Complete command documentation +- [Dependencies Guide](DEPENDENCIES.md) - Understanding dependency types +- [Workflows](WORKFLOWS.md) - Step-by-step workflow guides +- [beads GitHub](https://github.com/steveyegge/beads) - Official documentation