diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl index a84bd100..95b48d7b 100644 --- a/.beads/issues.jsonl +++ b/.beads/issues.jsonl @@ -13,6 +13,7 @@ {"id":"bd-0zp7","title":"Add missing hook calls in mail reply and ack","description":"The mail commands are missing hook calls:\n\n1. runMailReply (mail.go:525-672) creates a message but doesn't call hookRunner.Run(hooks.EventMessage, ...) after creating the reply in direct mode (around line 640)\n\n2. runMailAck (mail.go:432-523) closes messages but doesn't call hookRunner.Run(hooks.EventClose, ...) after closing each message (around line 487 for daemon mode, 493 for direct mode)\n\nThis means GGT hooks won't fire for replies or message acknowledgments.","status":"tombstone","priority":1,"issue_type":"bug","created_at":"2025-12-16T20:52:53.069412-08:00","updated_at":"2025-12-25T01:21:01.952723-08:00","deleted_at":"2025-12-25T01:21:01.952723-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} {"id":"bd-14ie","title":"Work on beads-2vn: Add simple built-in beads viewer (GH#6...","description":"Work on beads-2vn: Add simple built-in beads viewer (GH#654). Add bd list --pretty with --watch flag, tree view with priority/status symbols. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:56:47.305831-08:00","updated_at":"2025-12-19T23:28:32.429492-08:00","closed_at":"2025-12-19T23:23:13.928323-08:00","close_reason":"Implemented --pretty flag with tree view and symbols. Tests pass."} {"id":"bd-14v0","title":"Add Windows code signing for bd.exe releases","description":"## Context\n\nGo binaries (including bd.exe) are commonly flagged by antivirus software as false positives due to heuristic detection. 
See docs/ANTIVIRUS.md for full details.\n\n## Problem\n\nKaspersky and other AV software flag bd.exe as PDM:Trojan.Win32.Generic, causing it to be quarantined or deleted.\n\n## Solution\n\nImplement code signing for Windows releases using:\n1. An EV (Extended Validation) code certificate\n2. Integration with GoReleaser to sign Windows binaries during release\n\n## Benefits\n\n- Reduces false positive rates over time as the certificate builds reputation\n- Provides tamper verification for users\n- Improves SmartScreen trust rating on Windows\n- Professional appearance for enterprise users\n\n## Implementation Steps\n\n1. Acquire EV code signing certificate (annual cost ~$300-500)\n2. Set up signtool or osslsigncode in release pipeline\n3. Update .goreleaser.yml to sign Windows binaries\n4. Update checksums to include signed binary hashes\n5. Document signing verification in ANTIVIRUS.md\n\n## References\n\n- docs/ANTIVIRUS.md - Current documentation\n- bd-t4u1 - Original Kaspersky false positive report\n- https://github.com/golang/go/issues/16292 - Go project discussion","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-23T23:46:48.459177-08:00","updated_at":"2025-12-23T23:54:41.912141-08:00","closed_at":"2025-12-23T23:54:41.912141-08:00","close_reason":"Implemented Windows code signing infrastructure. Added signing script, GoReleaser hook, updated release workflow and documentation. Signing is gracefully degraded when certificate secrets are not configured - releases continue as unsigned. 
Certificate acquisition (EV cert) is still required to actually enable signing.","dependencies":[{"issue_id":"bd-14v0","depends_on_id":"bd-t4u1","type":"discovered-from","created_at":"2025-12-23T23:47:02.024159-08:00","created_by":"daemon"}]} +{"id":"bd-1ban","title":"Test actor direct","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-26T20:46:02.423367-08:00","updated_at":"2025-12-26T20:46:02.423367-08:00"} {"id":"bd-1dez","title":"Mol Mall: Formula marketplace using GitHub as backend","description":"Create a marketplace for sharing molecule formulas using GitHub repos as the hosting backend.\n\n## Architecture Update (Dec 2025)\n\n**Formulas are the sharing layer.** With ephemeral protos (bd-rciw), the architecture is:\n\n```\nFormulas ──cook──→ [ephemeral proto] ──pour/wisp──→ Mol/Wisp\n ↑ │\n └────────────────── distill ─────────────────────────┘\n```\n\n- **Formulas**: JSON source files (.formula.json) - the thing you share\n- **Protos**: Transient compilation artifacts - auto-deleted after use\n- **Mols/Wisps**: Execution instances - not shared directly\n\n**Key operations:**\n- `bd distill \u003cmol-id\u003e` → Extract formula from completed work\n- `bd mol publish \u003cformula\u003e` → Share to GitHub\n- `bd mol install \u003curl\u003e` → Fetch from GitHub\n- `bd pour \u003cformula\u003e` → Cook and spawn (proto is ephemeral)\n\n## Why GitHub?\n\nGitHub solves multiple problems at once:\n- **Hosting**: Raw file URLs for formula.json\n- **Versioning**: Git tags (v1.0.0, v1.2.0)\n- **Auth**: GitHub tokens for private formulas\n- **Discovery**: GitHub search, topics, stars\n- **Collaboration**: PRs for contributions, issues for bugs\n- **Organizations**: Natural scoping (@anthropic/, @gastown/)\n\n## URL Scheme\n\n```bash\n# Direct GitHub URL\nbd mol install github.com/anthropics/mol-code-review\n\n# With version tag\nbd mol install github.com/anthropics/mol-code-review@v1.2.0\n\n# Shorthand (via registry lookup)\nbd mol install 
@anthropic/mol-code-review\n```\n\n## Architecture\n\nEach formula lives in its own repo (like Go modules):\n```\ngithub.com/anthropics/mol-code-review/\n├── formula.json # The formula\n├── README.md # Documentation\n└── CHANGELOG.md # Version history\n```\n\n## ID Namespace\n\n| Entity | ID Format | Example |\n|--------|-----------|---------|\n| Formula (GitHub) | `github.com/org/repo` | `github.com/anthropics/mol-code-review` |\n| Installed formula | `mol-name` | `mol-code-review` |\n| Poured instance | `\u003cdb\u003e-mol-xxx` | `bd-mol-b8c` |","notes":"Deferred - focusing on Christmas launch first","status":"closed","priority":2,"issue_type":"epic","created_at":"2025-12-25T12:05:17.666574-08:00","updated_at":"2025-12-25T21:53:13.415431-08:00","closed_at":"2025-12-25T21:53:13.415431-08:00","close_reason":"Migrated to gastown rig as gt-uzf2l (Mol Mall is Gas Town infrastructure)"} {"id":"bd-1dez.1","title":"bd distill: Extract formula from mol/epic","description":"Extract a formula from completed work (mol, wisp, or epic).\n\n**Key change**: Distill works on execution artifacts (mols/wisps/epics), not protos.\nProtos are ephemeral - they don't persist. Distillation extracts patterns from\nactual executed work.\n\n## Usage\n```bash\nbd distill bd-mol-xyz -o my-workflow.formula.json\nbd distill bd-epic-abc -o feature-workflow.formula.json\n```\n\n## Use Cases\n- **Emergent patterns**: Structured work manually, want to templatize it\n- **Modified execution**: Poured a formula, added custom steps, want to capture\n- **Learning from success**: Extract what made a complex mol succeed\n\n## Implementation\n1. Load mol/wisp/epic subgraph (root + all children)\n2. Convert to formula JSON structure\n3. Extract variables from patterns (titles, descriptions)\n4. Generate step IDs from issue titles (slugify)\n5. Write .formula.json file\n\n## Output Format\n```json\n{\n \"formula\": \"my-workflow\",\n \"description\": \"...\",\n \"version\": 1,\n \"vars\": { ... 
},\n \"steps\": [ ... ]\n}\n```\n\n## Architecture Note\nThis closes the formula lifecycle loop:\n Formulas ──cook──→ Mols ──distill──→ Formulas\n\nAll sharing happens via formulas. Mols contain execution context and aren't shared.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-25T12:05:47.045105-08:00","updated_at":"2025-12-25T18:54:39.967765-08:00","closed_at":"2025-12-25T18:54:39.967765-08:00","close_reason":"Command already implemented; updated help text, added daemon support, and -o shorthand","dependencies":[{"issue_id":"bd-1dez.1","depends_on_id":"bd-1dez","type":"parent-child","created_at":"2025-12-25T12:05:47.045596-08:00","created_by":"daemon"}]} {"id":"bd-1dez.2","title":"bd formula add: Import formula to local catalog","description":"Import a formula file to the local catalog (search path).\n\n**Replaces**: \"bd mol promote\" (proto-to-proto concept is obsolete with ephemeral protos)\n\n## Usage\n```bash\n# Add a formula file to project catalog\nbd formula add my-workflow.formula.json\n\n# Add to user-level catalog\nbd formula add my-workflow.formula.json --scope user\n\n# Add from URL\nbd formula add https://example.com/workflow.formula.json\n```\n\n## Implementation\n1. Parse the formula file (validate JSON structure)\n2. Determine target directory based on scope:\n - project: .beads/formulas/\n - user: ~/.beads/formulas/\n - town: ~/gt/.beads/formulas/\n3. Copy/download formula to target\n4. Verify it is loadable: bd formula show \u003cname\u003e\n\n## Flags\n- `--scope \u003clevel\u003e` - Where to add (project|user|town, default: project)\n- `--name \u003cname\u003e` - Override formula name (default: from file)\n\n## Note\nThis is for manually adding formulas. 
For GitHub-hosted formulas, use:\n bd mol install github.com/org/formula-name","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-25T12:05:48.588283-08:00","updated_at":"2025-12-25T19:54:35.242576-08:00","closed_at":"2025-12-25T19:54:35.242576-08:00","close_reason":"Implemented bd formula add command with scope and URL support","dependencies":[{"issue_id":"bd-1dez.2","depends_on_id":"bd-1dez","type":"parent-child","created_at":"2025-12-25T12:05:48.590203-08:00","created_by":"daemon"},{"issue_id":"bd-1dez.2","depends_on_id":"bd-1dez.1","type":"blocks","created_at":"2025-12-25T12:07:06.745686-08:00","created_by":"daemon"}]} @@ -35,7 +36,9 @@ {"id":"bd-28sq.5","title":"Fix JSON errors in epic.go","description":"Replace direct stderr writes with FatalErrorRespectJSON() in epic.go.\n\nCommands affected:\n- epicStatusCmd\n- epicCloseEligibleCmd","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-25T13:32:11.12219-08:00","updated_at":"2025-12-25T13:55:39.738404-08:00","closed_at":"2025-12-25T13:55:39.738404-08:00","close_reason":"Fixed all FatalErrorRespectJSON patterns in epic.go","dependencies":[{"issue_id":"bd-28sq.5","depends_on_id":"bd-28sq","type":"parent-child","created_at":"2025-12-25T13:32:11.124147-08:00","created_by":"daemon"}]} {"id":"bd-29fb","title":"Implement bd close --continue flag","description":"Auto-advance to next step in molecule when closing an issue. Referenced by gt-um6q, gt-lz13. 
Needed for molecule navigation workflow.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-23T00:17:55.032875-08:00","updated_at":"2025-12-23T01:26:47.255313-08:00","closed_at":"2025-12-23T01:26:47.255313-08:00","close_reason":"Already implemented: --continue flag auto-advances to next step in molecule, --no-auto prevents auto-claiming"} {"id":"bd-2ep8","title":"Update CHANGELOG.md with release notes","description":"Add meaningful release notes to CHANGELOG.md describing what changed in 0.30.7","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:48.649053-08:00","updated_at":"2025-12-19T22:57:31.69559-08:00","closed_at":"2025-12-19T22:57:31.69559-08:00","dependencies":[{"issue_id":"bd-2ep8","depends_on_id":"bd-8pyn","type":"parent-child","created_at":"2025-12-19T22:56:48.650816-08:00","created_by":"stevey"},{"issue_id":"bd-2ep8","depends_on_id":"bd-rupw","type":"blocks","created_at":"2025-12-19T22:56:48.651136-08:00","created_by":"stevey"}]} +{"id":"bd-2fs7","title":"Move pour/ephemeral under bd mol subcommand","description":"For consistency, bd pour and bd ephemeral should become bd mol pour and bd mol ephemeral:\n\nCurrent:\n bd mol list # Available protos\n bd mol show \u003cid\u003e # Proto details\n bd pour \u003cproto\u003e # Create mol ← sticks out\n bd ephemeral \u003cproto\u003e # Create ephemeral ← sticks out \n bd mol bond \u003cproto\u003e \u003cparent\u003e # Attach to existing mol\n bd mol squash \u003cid\u003e # Condense to digest\n bd mol burn \u003cid\u003e # Discard\n\nProposed:\n bd mol list\n bd mol show \u003cid\u003e\n bd mol pour \u003cproto\u003e # Moved under mol\n bd mol ephemeral \u003cproto\u003e # Moved under mol\n bd mol bond \u003cproto\u003e \u003cparent\u003e\n bd mol squash \u003cid\u003e\n bd mol burn \u003cid\u003e\n\nAll molecule operations should be under bd mol for discoverability and 
consistency.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-26T23:36:23.945902-08:00","created_by":"stevey","updated_at":"2025-12-26T23:41:01.096333-08:00","closed_at":"2025-12-26T23:41:01.096333-08:00","close_reason":"Moved pour and ephemeral under bd mol subcommand for consistency","pinned":true} {"id":"bd-2l03","title":"Implement await type handlers (gh:run, gh:pr, timer, human, mail)","description":"Implement condition checking for each await type.\n\n## Handlers Needed\n- gh:run:\u003cid\u003e - Check GitHub Actions run status via gh CLI\n- gh:pr:\u003cid\u003e - Check PR merged/closed status via gh CLI \n- timer:\u003cduration\u003e - Simple elapsed time check\n- human:\u003cprompt\u003e - Check for human approval (via mail?)\n- mail:\u003cpattern\u003e - Check for mail matching pattern\n\n## Implementation Location\nThis is Deacon logic, so likely in Gas Town (gt) not beads.\n\n## Interface\n```go\ntype AwaitHandler interface {\n Check(awaitID string) (completed bool, result string, err error)\n}\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-23T11:44:38.492837-08:00","updated_at":"2025-12-23T12:19:44.283318-08:00","closed_at":"2025-12-23T12:19:44.283318-08:00","close_reason":"Moved to gastown: gt-ng6g","dependencies":[{"issue_id":"bd-2l03","depends_on_id":"bd-udsi","type":"parent-child","created_at":"2025-12-23T11:44:52.990746-08:00","created_by":"daemon"},{"issue_id":"bd-2l03","depends_on_id":"bd-is6m","type":"blocks","created_at":"2025-12-23T11:44:56.510792-08:00","created_by":"daemon"}]} +{"id":"bd-2nl","title":"Refinery Patrol","description":"Merge queue processor patrol loop with verification gates.","status":"open","priority":2,"issue_type":"molecule","created_at":"2025-12-26T21:20:47.681814-08:00","created_by":"deacon","updated_at":"2025-12-26T21:20:47.681814-08:00"} {"id":"bd-2oo","title":"Edge Schema Consolidation: Unify all edges in dependencies table","description":"Consolidate all edge 
types into the dependency table per decision 004.\n\n## Changes\n- Add metadata column to dependencies table\n- Add thread_id column for conversation grouping\n- Remove redundant Issue fields: replies_to, relates_to, duplicate_of, superseded_by\n- Update all code to use dependencies API\n- Migration script for existing data\n- JSONL format change (breaking)\n\nReference: ~/gt/hop/decisions/004-edge-schema-consolidation.md","status":"closed","priority":0,"issue_type":"epic","created_at":"2025-12-18T02:01:48.785558-08:00","updated_at":"2025-12-18T02:49:10.61237-08:00","closed_at":"2025-12-18T02:49:10.61237-08:00","close_reason":"Phase 4 complete: all edge fields removed, dependencies API used exclusively"} {"id":"bd-2oo.1","title":"Add metadata and thread_id columns to dependencies table","description":"Schema changes:\n- ALTER TABLE dependencies ADD COLUMN metadata TEXT DEFAULT '{}'\n- ALTER TABLE dependencies ADD COLUMN thread_id TEXT DEFAULT ''\n- CREATE INDEX idx_dependencies_thread ON dependencies(thread_id) WHERE thread_id != ''","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:00.468223-08:00","updated_at":"2025-12-18T02:49:10.575133-08:00","closed_at":"2025-12-18T02:49:10.575133-08:00","close_reason":"Phase 4 complete: all edge fields removed, dependencies API used exclusively","dependencies":[{"issue_id":"bd-2oo.1","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:00.470012-08:00","created_by":"daemon","metadata":"{}"}]} {"id":"bd-2oo.2","title":"Remove redundant edge fields from Issue struct","description":"Remove from Issue struct:\n- RepliesTo -\u003e dependency with type replies-to\n- RelatesTo -\u003e dependencies with type relates-to \n- DuplicateOf -\u003e dependency with type duplicates\n- SupersededBy -\u003e dependency with type supersedes\n\nKeep: Sender, Ephemeral (these are attributes, not 
relationships)","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-18T02:02:00.891206-08:00","updated_at":"2025-12-18T02:49:10.584381-08:00","closed_at":"2025-12-18T02:49:10.584381-08:00","close_reason":"Phase 4 complete: all edge fields removed, dependencies API used exclusively","dependencies":[{"issue_id":"bd-2oo.2","depends_on_id":"bd-2oo","type":"parent-child","created_at":"2025-12-18T02:02:00.891655-08:00","created_by":"daemon","metadata":"{}"}]} @@ -100,6 +103,7 @@ {"id":"bd-68bf","title":"Code review: bd mol bond implementation","description":"Review the mol bond command implementation before shipping.\n\nFocus areas:\n1. runMolBond() - polymorphic dispatch logic correctness\n2. bondProtoProto() - compound proto creation, dependency wiring\n3. bondProtoMol() / bondMolProto() - spawn and attach logic\n4. bondMolMol() - joining molecules, lineage tracking\n5. BondRef usage - is lineage tracked correctly?\n6. Error handling - are all failure modes covered?\n7. Edge cases - what could go wrong?\n\nFile: cmd/bd/mol.go (lines 485-859)\nCommit: 386b513e","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T10:13:09.425229-08:00","updated_at":"2025-12-21T11:18:14.206869-08:00","closed_at":"2025-12-21T11:18:14.206869-08:00","close_reason":"Reviewed and fixed label persistence bug","dependencies":[{"issue_id":"bd-68bf","depends_on_id":"bd-o91r","type":"discovered-from","created_at":"2025-12-21T10:13:09.426471-08:00","created_by":"daemon"}]} {"id":"bd-68e4","title":"doctor --fix should export when DB has more issues than JSONL","description":"When 'bd doctor' detects a count mismatch (DB has more issues than JSONL), it currently recommends 'bd sync --import-only', which imports JSONL into DB. 
But JSONL is the source of truth, not the DB.\n\n**Current behavior:**\n- Doctor detects: DB has 355 issues, JSONL has 292\n- Recommends: 'bd sync --import-only' \n- User runs it: Returns '0 created, 0 updated' (no-op, because JSONL hasn't changed)\n- User is stuck\n\n**Root cause:**\nThe doctor fix is one-directional (JSONL→DB) when it should be bidirectional. If DB has MORE issues, they haven't been exported yet - the fix should be 'bd export' (DB→JSONL), not import.\n\n**Desired fix:**\nIn fix.DBJSONLSync(), detect which has more data:\n- If DB \u003e JSONL: Run 'bd export' to sync JSONL (since DB is the working copy)\n- If JSONL \u003e DB: Run 'bd sync --import-only' to import (JSONL is source of truth)\n- If equal but timestamps differ: Detect based on file mtime\n\nThis makes 'bd doctor --fix' actually fix the problem instead of being a no-op.","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-21T11:17:20.994319182-07:00","updated_at":"2025-12-21T11:23:24.38523731-07:00","closed_at":"2025-12-21T11:23:24.38523731-07:00"} {"id":"bd-6a5z","title":"Add stale molecule check to bd doctor","description":"Extend bd doctor to detect stale molecules.\n\n**New check:**\n- Name: 'Stale Molecules'\n- Category: Workflow\n- Severity: Warning (don't fail overall check)\n\n**Detection:**\nReuse logic from bd mol stale command:\n- Find mols where Completed \u003e= Total but root is open\n- Filter to orphaned (not assigned, not pinned)\n- Extra weight if blocking other work\n\n**Output:**\n```\n⚠ Stale Molecules\n Found 2 complete-but-unclosed molecules:\n - bd-xyz: Version bump v0.36.0 (blocking 1 issue)\n - bd-uvw: Old patrol (not blocking)\n Fix: bd close \u003cid\u003e or bd mol squash \u003cid\u003e\n```\n\n**--fix behavior:**\n- Auto-close stale mols (with reason 'Auto-closed by bd doctor')\n- Or prompt interactively with -i flag\n\nDepends on: bd mol stale 
command","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-24T18:23:24.549941-08:00","updated_at":"2025-12-25T12:42:50.288442-08:00","closed_at":"2025-12-25T12:42:50.288442-08:00","close_reason":"Implemented stale molecules check in bd doctor","dependencies":[{"issue_id":"bd-6a5z","depends_on_id":"bd-anv2","type":"blocks","created_at":"2025-12-24T18:23:48.682552-08:00","created_by":"daemon"}]} +{"id":"bd-6df0","title":"Investigate Claude Code crash logging improvements","description":"## Problem\n\nClaude Code doesn't leave useful crash logs when it terminates unexpectedly. Investigation of a crash on 2025-12-26 showed:\n\n- Debug logs in ~/.claude/debug/ just stop mid-stream with no error/exit message\n- No signal handlers appear to log SIGTERM/SIGKILL/SIGINT\n- No dedicated crash log file exists\n- When Node.js crashes hard or gets killed, there's no record of why\n\n## What we found\n\n- Session debug log (02080b1a-...) stopped at 22:58:40 UTC mid-operation\n- No 'exit', 'error', 'crash', 'signal' entries at end of file\n- macOS DiagnosticReports showed Chrome crashes but no Node crashes\n- System logs showed no relevant kill/OOM events\n\n## Desired improvements\n\n1. Exit handlers that log graceful shutdown\n2. Signal handlers that log SIGTERM/SIGINT before exiting\n3. A dedicated crash log or at least a 'last known state' file\n4. 
Possibly CLI flags to enable verbose crash debugging\n\n## Investigation paths\n\n- Check if Claude Code has --debug or similar flags\n- Look at Node.js crash handling best practices\n- Consider if we can wrap claude invocations to capture crashes\n\n## Related\n\nThis came up while investigating why a crew worker session crashed at end of a code review task.","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-26T15:20:03.578463-08:00","updated_at":"2025-12-26T15:20:03.578463-08:00","comments":[{"id":1,"issue_id":"bd-6df0","author":"stevey","text":"Found existing flags:\n- `--debug [filter]` - Enable debug mode with optional category filtering (e.g., 'api,hooks' or '!statsig,!file')\n- `--verbose` - Override verbose mode setting from config\n\nThese might help with diagnosing issues, but still won't capture hard crashes. The debug output goes to ~/.claude/debug/\u003csession-id\u003e.txt which is what we were already looking at.\n\nNext step: Could wrap claude invocation to capture exit codes and stderr, or look into Node.js --report-on-signal flags.","created_at":"2025-12-26T23:20:20Z"}]} {"id":"bd-6fe4622f","title":"Remove unreachable utility functions","description":"Several small utility functions are unreachable:\n\nFiles to clean:\n1. `internal/storage/sqlite/hash.go` - `computeIssueContentHash` (line 17)\n - Check if entire file can be deleted if only contains this function\n\n2. `internal/config/config.go` - `FileUsed` (line 151)\n - Delete unused config helper\n\n3. `cmd/bd/git_sync_test.go` - `verifyIssueOpen` (line 300)\n - Delete dead test helper\n\n4. 
`internal/compact/haiku.go` - `HaikuClient.SummarizeTier2` (line 81)\n - Tier 2 summarization not implemented\n - Options: implement feature OR delete method\n\nImpact: Removes 50-100 LOC depending on decisions","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-10-28T16:20:02.434573-07:00","updated_at":"2025-12-25T01:21:01.952723-08:00","close_reason":"Closed","deleted_at":"2025-12-25T01:21:01.952723-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} {"id":"bd-6gd","title":"Remove legacy MCP Agent Mail integration","description":"## Summary\n\nRemove the legacy MCP Agent Mail system that requires an external HTTP server. Keep the native `bd mail` system which stores messages as git-synced issues.\n\n## Background\n\nTwo mail systems exist in the codebase:\n1. **Legacy Agent Mail** (`bd message`) - External server dependency, complex setup\n2. **Native bd mail** (`bd mail`) - Built-in, git-synced, no dependencies\n\nThe legacy system causes confusion and is no longer needed. 
Gas Town's Town Mail will use the native `bd mail` system.\n\n## Files to Delete\n\n### CLI Command\n- [ ] `cmd/bd/message.go` - The `bd message` command implementation\n\n### MCP Integration\n- [ ] `integrations/beads-mcp/src/beads_mcp/mail.py` - HTTP wrapper for Agent Mail server\n- [ ] `integrations/beads-mcp/src/beads_mcp/mail_tools.py` - MCP tool definitions\n- [ ] `integrations/beads-mcp/tests/test_mail.py` - Tests for legacy mail\n\n### Documentation\n- [ ] `docs/AGENT_MAIL.md`\n- [ ] `docs/AGENT_MAIL_QUICKSTART.md`\n- [ ] `docs/AGENT_MAIL_DEPLOYMENT.md`\n- [ ] `docs/AGENT_MAIL_MULTI_WORKSPACE_SETUP.md`\n- [ ] `docs/adr/002-agent-mail-integration.md`\n\n## Code to Update\n\n- [ ] Remove `message` command registration from `cmd/bd/main.go`\n- [ ] Remove mail tool imports/registration from MCP server `__init__.py` or `server.py`\n- [ ] Check for any other references to Agent Mail in the codebase\n\n## Verification\n\n- [ ] `bd message` command no longer exists\n- [ ] `bd mail` command still works\n- [ ] MCP server starts without errors\n- [ ] Tests pass\n","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-17T23:04:04.099935-08:00","updated_at":"2025-12-25T01:21:01.952723-08:00","close_reason":"Removed legacy MCP Agent Mail integration. Kept native bd mail system.","deleted_at":"2025-12-25T01:21:01.952723-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} {"id":"bd-6ns7","title":"test hook pin","status":"tombstone","priority":2,"issue_type":"task","assignee":"stevey","created_at":"2025-12-23T04:39:16.619755-08:00","updated_at":"2025-12-23T04:51:29.436788-08:00","deleted_at":"2025-12-23T04:51:29.436788-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} @@ -109,7 +113,7 @@ {"id":"bd-6sm6","title":"Improve test coverage for internal/export (37.1% → 60%)","description":"The export package has only 37.1% test coverage. 
Export functionality needs good coverage to ensure data integrity.\n\nCurrent coverage: 37.1%\nTarget coverage: 60%","status":"closed","priority":2,"issue_type":"task","assignee":"beads/alpha","created_at":"2025-12-13T20:43:06.802277-08:00","updated_at":"2025-12-23T22:32:29.16846-08:00","closed_at":"2025-12-23T22:32:29.16846-08:00","close_reason":"Coverage already at 71.8% (target was 60%). Recent commits ba8beb53 and e3e0a044 added tests."} {"id":"bd-6ss","title":"Improve test coverage","description":"The test suite reports less than 45% code coverage. Identify the specific uncovered areas of the codebase, including modules, functions, or features. Rank them by potential impact on system reliability and business value, from most to least, and provide actionable recommendations for improving coverage in each area.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-18T06:54:23.036822442-07:00","updated_at":"2025-12-18T07:17:49.245940799-07:00","closed_at":"2025-12-18T07:17:49.245940799-07:00"} {"id":"bd-70an","title":"test pin","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T11:19:16.760214-08:00","updated_at":"2025-12-21T11:19:46.500688-08:00","closed_at":"2025-12-21T11:19:46.500688-08:00","close_reason":"test issue for pin fix"} -{"id":"bd-70c4","title":"Gate await fields cleared by --no-daemon CLI access (not multi-repo)","status":"open","priority":1,"issue_type":"bug","created_at":"2025-12-25T23:30:38.648182-08:00","updated_at":"2025-12-25T23:30:38.648182-08:00","comments":[{"id":1,"issue_id":"bd-70c4","author":"mayor","text":"## Summary\nGate await fields (await_type, await_id, timeout_ns, waiters) are cleared when a CLI command accesses the database directly (--no-daemon) while the daemon is running. This is separate from the multi-repo issue fixed in bd-gr4q.\n\n## Reproduction\n1. Start daemon: bd daemon --start\n2. Create gate: bd gate create --await timer:5s (fields stored correctly)\n3. 
Verify: sqlite3 .beads/beads.db shows timer|5s\n4. Run CLI with --no-daemon: bd show \u003cid\u003e --no-daemon --no-auto-import --no-auto-flush\n5. Check again: fields are now empty\n\n## Investigation Notes\n- NOT caused by autoImportIfNewer (verified with --no-auto-import flag)\n- NOT caused by HydrateFromMultiRepo (no multi-repo config, returns early)\n- NOT caused by molecule loader (only creates new issues)\n- NOT caused by migrations (gate_columns only adds columns)\n- No database triggers found\n\nThe clearing happens somewhere in sqlite.NewWithTimeout() initialization or command execution path.\n\n## Related\n- bd-gr4q fixed the multi-repo path but this is a different code path\n- The fix pattern (COALESCE/NULLIF) may need to be applied elsewhere","created_at":"2025-12-26T07:30:49Z"}]} +{"id":"bd-70c4","title":"Gate await fields cleared by --no-daemon CLI access (not multi-repo)","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-25T23:30:38.648182-08:00","updated_at":"2025-12-26T23:38:47.972075-08:00","closed_at":"2025-12-26T23:38:47.972075-08:00","close_reason":"Cannot reproduce: tested gate create + --no-daemon show, fields preserved. May have been fixed by recent daemon/routing fixes.","comments":[{"id":2,"issue_id":"bd-70c4","author":"mayor","text":"## Summary\nGate await fields (await_type, await_id, timeout_ns, waiters) are cleared when a CLI command accesses the database directly (--no-daemon) while the daemon is running. This is separate from the multi-repo issue fixed in bd-gr4q.\n\n## Reproduction\n1. Start daemon: bd daemon --start\n2. Create gate: bd gate create --await timer:5s (fields stored correctly)\n3. Verify: sqlite3 .beads/beads.db shows timer|5s\n4. Run CLI with --no-daemon: bd show \u003cid\u003e --no-daemon --no-auto-import --no-auto-flush\n5. 
Check again: fields are now empty\n\n## Investigation Notes\n- NOT caused by autoImportIfNewer (verified with --no-auto-import flag)\n- NOT caused by HydrateFromMultiRepo (no multi-repo config, returns early)\n- NOT caused by molecule loader (only creates new issues)\n- NOT caused by migrations (gate_columns only adds columns)\n- No database triggers found\n\nThe clearing happens somewhere in sqlite.NewWithTimeout() initialization or command execution path.\n\n## Related\n- bd-gr4q fixed the multi-repo path but this is a different code path\n- The fix pattern (COALESCE/NULLIF) may need to be applied elsewhere","created_at":"2025-12-26T07:30:49Z"}]} {"id":"bd-746","title":"Fix resolvePartialID stub in workflow.go","description":"The resolvePartialID function at workflow.go:921-925 is a stub that just returns the ID unchanged. Should use utils.ResolvePartialID for proper partial ID resolution in direct mode (non-daemon).","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-17T22:22:57.586917-08:00","updated_at":"2025-12-25T01:21:01.952723-08:00","deleted_at":"2025-12-25T01:21:01.952723-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} {"id":"bd-74w1","title":"Consolidate duplicate path-finding utilities (findJSONLPath, findBeadsDir, findGitRoot)","description":"Code health review found these functions defined in multiple places:\n\n- findJSONLPath() in autoflush.go:45-73 and doctor/fix/migrate.go\n- findBeadsDir() in autoimport.go:197-239 (with git worktree handling)\n- findGitRoot() in autoimport.go:242-269 (Windows path conversion)\n\nThe beads package has public FindBeadsDir() and FindJSONLPath() APIs that should be used consistently.\n\nImpact: Bug fixes need to be applied in multiple places. Git worktree handling may not be replicated everywhere.\n\nFix: Consolidate all implementations to use the beads package APIs. 
Remove duplicates.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-16T18:17:16.694293-08:00","updated_at":"2025-12-22T21:13:46.83103-08:00","closed_at":"2025-12-22T21:13:46.83103-08:00","close_reason":"Consolidated duplicate path-finding utilities: findGitRoot() now delegates to git.GetRepoRoot(), findBeadsDir() replaced with beads.FindBeadsDir() across 8 files"} {"id":"bd-754r","title":"Merge: bd-thgk","description":"branch: polecat/Compactor\ntarget: main\nsource_issue: bd-thgk\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:41:43.965771-08:00","updated_at":"2025-12-23T19:12:08.345449-08:00","closed_at":"2025-12-23T19:12:08.345449-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} @@ -183,7 +187,7 @@ {"id":"bd-au0.10","title":"Add global verbosity flags (--verbose, --quiet)","description":"Add consistent verbosity controls across all commands.\n\n**Current state:**\n- bd init has --quiet flag\n- No other commands have verbosity controls\n- Debug output controlled by BD_VERBOSE env var\n\n**Proposal:**\nAdd persistent flags:\n- --verbose / -v: Enable debug output\n- --quiet / -q: Suppress non-essential output\n\n**Implementation:**\n- Add to rootCmd.PersistentFlags()\n- Replace BD_VERBOSE checks with flag checks\n- Standardize output levels:\n * Quiet: Errors only\n * Normal: Errors + success messages\n * Verbose: Errors + success + debug info\n\n**Files to modify:**\n- cmd/bd/main.go (add flags)\n- internal/debug/debug.go (respect flags)\n- Update all commands to respect quiet mode\n\n**Testing:**\n- Verify --verbose shows debug output\n- Verify --quiet suppresses normal output\n- Ensure errors always show regardless of 
mode","status":"closed","priority":3,"issue_type":"task","created_at":"2025-11-21T21:08:21.600209-05:00","updated_at":"2025-12-25T22:34:40.197801-08:00","closed_at":"2025-12-25T22:34:40.197801-08:00","close_reason":"Already implemented: --verbose/-v and --quiet/-q persistent flags added, debug package has SetVerbose/SetQuiet/IsQuiet/PrintNormal functions, flags applied in PersistentPreRun","dependencies":[{"issue_id":"bd-au0.10","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:08:21.602557-05:00","created_by":"daemon","metadata":"{}"}]} {"id":"bd-au0.5","title":"Add date and priority filters to bd search","description":"Add date and priority filters to bd search for parity with bd list.\n\n## Current State\nbd search supports: --status, --type, --assignee, --label, --limit\nbd list supports: all of the above PLUS date ranges and priority filters\n\n## Filters to Add\n\n### Priority Filters\n```bash\nbd search \"query\" --priority 1 # Exact priority\nbd search \"query\" --priority-min 0 # P0 and above (higher priority)\nbd search \"query\" --priority-max 2 # P2 and below (lower priority)\n```\n\n### Date Filters\n```bash\nbd search \"query\" --created-after 2025-01-01\nbd search \"query\" --created-before 2025-12-31\nbd search \"query\" --updated-after 2025-01-01\nbd search \"query\" --closed-after 2025-01-01\n```\n\n### Content Filters\n```bash\nbd search \"query\" --desc-contains \"bug\"\nbd search \"query\" --notes-contains \"todo\"\nbd search \"query\" --empty-description # Issues with no description\nbd search \"query\" --no-assignee # Unassigned issues\nbd search \"query\" --no-labels # Issues without labels\n```\n\n## Files to Modify\n\n### 1. 
cmd/bd/search.go\nAdd flag definitions in init():\n```go\nsearchCmd.Flags().IntP(\"priority\", \"p\", -1, \"Filter by exact priority (0-4)\")\nsearchCmd.Flags().Int(\"priority-min\", -1, \"Filter by minimum priority\")\nsearchCmd.Flags().Int(\"priority-max\", -1, \"Filter by maximum priority\")\nsearchCmd.Flags().String(\"created-after\", \"\", \"Filter by creation date (YYYY-MM-DD)\")\nsearchCmd.Flags().String(\"created-before\", \"\", \"Filter by creation date\")\nsearchCmd.Flags().String(\"updated-after\", \"\", \"Filter by update date\")\nsearchCmd.Flags().String(\"updated-before\", \"\", \"Filter by update date\")\nsearchCmd.Flags().String(\"closed-after\", \"\", \"Filter by close date\")\nsearchCmd.Flags().String(\"closed-before\", \"\", \"Filter by close date\")\nsearchCmd.Flags().String(\"desc-contains\", \"\", \"Filter by description content\")\nsearchCmd.Flags().String(\"notes-contains\", \"\", \"Filter by notes content\")\nsearchCmd.Flags().Bool(\"empty-description\", false, \"Filter issues with empty description\")\nsearchCmd.Flags().Bool(\"no-assignee\", false, \"Filter unassigned issues\")\nsearchCmd.Flags().Bool(\"no-labels\", false, \"Filter issues without labels\")\n```\n\n### 2. internal/rpc/protocol.go\nUpdate SearchArgs struct:\n```go\ntype SearchArgs struct {\n Query string\n Filter types.IssueFilter\n // Already has most fields via IssueFilter\n}\n```\n\nNote: types.IssueFilter already has these fields - just need to wire them up!\n\n### 3. cmd/bd/search.go Run function\nParse flags and populate filter:\n```go\nif priority, _ := cmd.Flags().GetInt(\"priority\"); priority \u003e= 0 {\n filter.Priority = \u0026priority\n}\nif createdAfter, _ := cmd.Flags().GetString(\"created-after\"); createdAfter != \"\" {\n t, err := time.Parse(\"2006-01-02\", createdAfter)\n if err != nil {\n FatalError(\"invalid date format for --created-after: %v\", err)\n }\n filter.CreatedAfter = \u0026t\n}\n// ... 
similar for other flags\n```\n\n## Implementation Steps\n\n1. **Check types.IssueFilter** - verify all needed fields exist\n2. **Add flags to search.go** init()\n3. **Parse flags** in Run function\n4. **Pass to SearchIssues** via filter\n5. **Test all combinations**\n\n## Testing\n```bash\n# Create test issues\nbd create \"Test P1\" -p 1\nbd create \"Test P2\" -p 2 --description \"Has description\"\n\n# Test filters\nbd search \"\" --priority 1\nbd search \"\" --priority-min 0 --priority-max 1\nbd search \"\" --empty-description\nbd search \"\" --desc-contains \"description\"\n```\n\n## Success Criteria\n- All filters work in both direct and daemon mode\n- Date parsing handles YYYY-MM-DD format\n- --json output includes filtered results\n- Help text documents all new flags","status":"closed","priority":1,"issue_type":"task","assignee":"beads/Searcher","created_at":"2025-11-21T21:07:05.496726-05:00","updated_at":"2025-12-23T13:38:28.475606-08:00","closed_at":"2025-12-23T13:38:28.475606-08:00","close_reason":"Implemented all date, priority, and content filters for bd search","dependencies":[{"issue_id":"bd-au0.5","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:05.497762-05:00","created_by":"daemon","metadata":"{}"},{"issue_id":"bd-au0.5","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.657303-08:00","created_by":"daemon"}]} {"id":"bd-au0.6","title":"Add comprehensive filters to bd export","description":"Enhance bd export with filtering options for selective exports.\n\n**Currently only has:**\n- --status\n\n**Add filters:**\n- --label, --label-any\n- --assignee\n- --type\n- --priority, --priority-min, --priority-max\n- --created-after, --created-before\n- --updated-after, --updated-before\n\n**Use case:**\n- Export only open issues: bd export --status open\n- Export high-priority bugs: bd export --type bug --priority-max 1\n- Export recent issues: bd export --created-after 2025-01-01\n\n**Files to 
modify:**\n- cmd/bd/export.go\n- Reuse filter logic from list.go","status":"closed","priority":1,"issue_type":"task","assignee":"beads/dementus","created_at":"2025-11-21T21:07:19.431307-05:00","updated_at":"2025-12-23T23:44:45.602324-08:00","closed_at":"2025-12-23T23:44:45.602324-08:00","close_reason":"All filter flags already implemented and tested","dependencies":[{"issue_id":"bd-au0.6","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:19.432983-05:00","created_by":"daemon","metadata":"{}"}]} -{"id":"bd-au0.7","title":"Audit and standardize JSON output across all commands","description":"Ensure consistent JSON format and error handling when --json flag is used.\n\n**Scope:**\n1. Verify all commands respect --json flag\n2. Standardize success response format\n3. Standardize error response format\n4. Document JSON schemas\n\n**Commands to audit:**\n- Core CRUD: create, update, delete, show, list, search ✓\n- Queries: ready, blocked, stale, count, stats, status\n- Deps: dep add/remove/tree/cycles\n- Labels: label commands\n- Comments: comments add/list/delete\n- Epics: epic status/close-eligible\n- Export/import: already support --json ✓\n\n**Testing:**\n- Success cases return valid JSON\n- Error cases return valid JSON (not plain text)\n- Consistent field naming (snake_case vs camelCase)\n- Array vs object wrapping consistency","notes":"## Audit Complete (2025-12-25)\n\n### Findings\n\n**✓ All commands support --json flag**\n- Query commands: ready, blocked, stale, count, stats, status\n- Dep commands: add, remove, tree, cycles \n- Label commands: add, remove, list, list-all\n- Comments: list, add\n- Epic: status, close-eligible\n\n**✓ Field naming is consistent**\n- All fields use snake_case: created_at, issue_type, dependency_count, etc.\n\n**✗ Error output is INCONSISTENT**\n- Only bd show uses FatalErrorRespectJSON (returns JSON errors)\n- All other commands use fmt.Fprintf(os.Stderr, ...) 
(returns plain text)\n\n### Files needing fixes\n\n| File | stderr writes | Commands |\n|------|---------------|----------|\n| show.go | 51 | update, close, edit |\n| dep.go | 41 | dep add/remove/tree/cycles |\n| label.go | 19 | label add/remove/list |\n| comments.go | ~10 | comments add/list |\n| epic.go | ~5 | epic status/close-eligible |\n\n### Follow-up\n\nCreated epic bd-28sq to track fixing all error handlers.","status":"closed","priority":1,"issue_type":"task","assignee":"beads/rictus","created_at":"2025-11-21T21:07:35.304424-05:00","updated_at":"2025-12-25T13:32:32.460786-08:00","closed_at":"2025-12-25T13:32:32.460786-08:00","close_reason":"Audit complete. Created bd-28sq epic to fix error output inconsistencies.","dependencies":[{"issue_id":"bd-au0.7","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:35.305663-05:00","created_by":"daemon","metadata":"{}"}],"comments":[{"id":2,"issue_id":"bd-au0.7","author":"stevey","text":"Progress on JSON standardization:\n\n## Completed\n1. **Fixed `bd comments list` null output** - Now returns `[]` instead of `null` for empty comments\n2. **Added `FatalErrorRespectJSON` helper** in errors.go - Pattern for JSON-aware error output\n3. **Fixed flag shadowing** - Removed local `--json` flags from show/update/close that shadowed the global persistent flag\n4. **Updated show command** - Error handlers now use `FatalErrorRespectJSON` as reference implementation\n\n## Audit Results\n- Query commands (ready, blocked, stale, count, stats, status): ✓ All support --json correctly\n- Dep commands (tree, cycles): ✓ All support --json correctly \n- Label commands: ✓ Returns [] for empty\n- Comments: ✓ Fixed null→[]\n- Epic commands (status, close-eligible): ✓ All support --json correctly\n\n## Remaining Work\n- Other commands (list, create, etc.) 
still use `fmt.Fprintf(os.Stderr, ...)` for errors - could be updated to use `FatalErrorRespectJSON` for JSON error output\n- JSON schema documentation not yet created","created_at":"2025-12-24T07:53:38Z"}]} +{"id":"bd-au0.7","title":"Audit and standardize JSON output across all commands","description":"Ensure consistent JSON format and error handling when --json flag is used.\n\n**Scope:**\n1. Verify all commands respect --json flag\n2. Standardize success response format\n3. Standardize error response format\n4. Document JSON schemas\n\n**Commands to audit:**\n- Core CRUD: create, update, delete, show, list, search ✓\n- Queries: ready, blocked, stale, count, stats, status\n- Deps: dep add/remove/tree/cycles\n- Labels: label commands\n- Comments: comments add/list/delete\n- Epics: epic status/close-eligible\n- Export/import: already support --json ✓\n\n**Testing:**\n- Success cases return valid JSON\n- Error cases return valid JSON (not plain text)\n- Consistent field naming (snake_case vs camelCase)\n- Array vs object wrapping consistency","notes":"## Audit Complete (2025-12-25)\n\n### Findings\n\n**✓ All commands support --json flag**\n- Query commands: ready, blocked, stale, count, stats, status\n- Dep commands: add, remove, tree, cycles \n- Label commands: add, remove, list, list-all\n- Comments: list, add\n- Epic: status, close-eligible\n\n**✓ Field naming is consistent**\n- All fields use snake_case: created_at, issue_type, dependency_count, etc.\n\n**✗ Error output is INCONSISTENT**\n- Only bd show uses FatalErrorRespectJSON (returns JSON errors)\n- All other commands use fmt.Fprintf(os.Stderr, ...) 
(returns plain text)\n\n### Files needing fixes\n\n| File | stderr writes | Commands |\n|------|---------------|----------|\n| show.go | 51 | update, close, edit |\n| dep.go | 41 | dep add/remove/tree/cycles |\n| label.go | 19 | label add/remove/list |\n| comments.go | ~10 | comments add/list |\n| epic.go | ~5 | epic status/close-eligible |\n\n### Follow-up\n\nCreated epic bd-28sq to track fixing all error handlers.","status":"closed","priority":1,"issue_type":"task","assignee":"beads/rictus","created_at":"2025-11-21T21:07:35.304424-05:00","updated_at":"2025-12-25T13:32:32.460786-08:00","closed_at":"2025-12-25T13:32:32.460786-08:00","close_reason":"Audit complete. Created bd-28sq epic to fix error output inconsistencies.","dependencies":[{"issue_id":"bd-au0.7","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:35.305663-05:00","created_by":"daemon","metadata":"{}"}],"comments":[{"id":3,"issue_id":"bd-au0.7","author":"stevey","text":"Progress on JSON standardization:\n\n## Completed\n1. **Fixed `bd comments list` null output** - Now returns `[]` instead of `null` for empty comments\n2. **Added `FatalErrorRespectJSON` helper** in errors.go - Pattern for JSON-aware error output\n3. **Fixed flag shadowing** - Removed local `--json` flags from show/update/close that shadowed the global persistent flag\n4. **Updated show command** - Error handlers now use `FatalErrorRespectJSON` as reference implementation\n\n## Audit Results\n- Query commands (ready, blocked, stale, count, stats, status): ✓ All support --json correctly\n- Dep commands (tree, cycles): ✓ All support --json correctly \n- Label commands: ✓ Returns [] for empty\n- Comments: ✓ Fixed null→[]\n- Epic commands (status, close-eligible): ✓ All support --json correctly\n\n## Remaining Work\n- Other commands (list, create, etc.) 
still use `fmt.Fprintf(os.Stderr, ...)` for errors - could be updated to use `FatalErrorRespectJSON` for JSON error output\n- JSON schema documentation not yet created","created_at":"2025-12-24T07:53:38Z"}]} {"id":"bd-au0.8","title":"Improve clean vs cleanup command naming/documentation","description":"Clarify the difference between bd clean and bd cleanup to reduce user confusion.\n\n**Current state:**\n- bd clean: Remove temporary artifacts (.beads/bd.sock, logs, etc.)\n- bd cleanup: Delete old closed issues from database\n\n**Options:**\n1. Rename for clarity:\n - bd clean → bd clean-temp\n - bd cleanup → bd cleanup-issues\n \n2. Keep names but improve help text and documentation\n\n3. Add prominent warnings in help output\n\n**Preferred approach:** Option 2 (improve documentation)\n- Update short/long descriptions in commands\n- Add examples to help text\n- Update README.md\n- Add cross-references in help output\n\n**Files to modify:**\n- cmd/bd/clean.go\n- cmd/bd/cleanup.go\n- README.md or ADVANCED.md","status":"closed","priority":2,"issue_type":"task","assignee":"beads/dementus","created_at":"2025-11-21T21:07:49.960534-05:00","updated_at":"2025-12-23T23:48:00.594734-08:00","closed_at":"2025-12-23T23:48:00.594734-08:00","close_reason":"Documentation already comprehensive: both commands have clear short descriptions, long explanations, cross-references, examples, and explicit disclaimers about what each does NOT do","dependencies":[{"issue_id":"bd-au0.8","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:07:49.962743-05:00","created_by":"daemon","metadata":"{}"}]} {"id":"bd-au0.9","title":"Review and document rarely-used commands","description":"Document use cases or consider deprecation for infrequently-used commands.\n\n**Commands to review:**\n1. bd rename-prefix - How often is this used? Document use cases\n2. bd detect-pollution - Consider integrating into bd validate\n3. 
bd migrate-hash-ids - One-time migration, keep but document as legacy\n\n**For each command:**\n- Document typical use cases\n- Add examples to help text\n- Consider if it should be a subcommand instead\n- Add deprecation warning if appropriate\n\n**Not changing:**\n- duplicates ✓ (useful for data quality)\n- repair-deps ✓ (useful for fixing broken refs)\n- restore ✓ (critical for compacted issues)\n- compact ✓ (performance feature)\n\n**Deliverable:**\n- Updated help text\n- Documentation in ADVANCED.md\n- Deprecation plan if needed","status":"closed","priority":3,"issue_type":"task","assignee":"beads/dementus","created_at":"2025-11-21T21:08:05.588275-05:00","updated_at":"2025-12-23T23:50:04.180989-08:00","closed_at":"2025-12-23T23:50:04.180989-08:00","close_reason":"All three commands already have comprehensive docs: USE CASES, EXAMPLES, and appropriate warnings (LEGACY/rare operation notes)","dependencies":[{"issue_id":"bd-au0.9","depends_on_id":"bd-au0","type":"parent-child","created_at":"2025-11-21T21:08:05.59003-05:00","created_by":"daemon","metadata":"{}"}]} {"id":"bd-awmf","title":"Merge: bd-dtl8","description":"branch: polecat/dag\ntarget: main\nsource_issue: bd-dtl8\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T20:47:15.147476-08:00","updated_at":"2025-12-23T21:21:57.690692-08:00","closed_at":"2025-12-23T21:21:57.690692-08:00","close_reason":"stale - no code pushed"} @@ -206,6 +210,7 @@ {"id":"bd-bijf","title":"Merge: bd-l13p","description":"branch: polecat/nux\ntarget: main\nsource_issue: bd-l13p\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T16:41:32.467246-08:00","updated_at":"2025-12-23T19:12:08.348252-08:00","closed_at":"2025-12-23T19:12:08.348252-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} {"id":"bd-bivq","title":"Merge: bd-9usz","description":"branch: polecat/slit\ntarget: 
main\nsource_issue: bd-9usz\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T20:42:19.995419-08:00","updated_at":"2025-12-23T21:21:57.700579-08:00","closed_at":"2025-12-23T21:21:57.700579-08:00","close_reason":"stale - no code pushed"} {"id":"bd-bkul","title":"Simplify wisp architecture: single DB with ephemeral flag","description":"\n## Problem\n\nThe current wisp architecture uses a separate .beads-wisp/ directory with its own database. This creates unnecessary complexity:\n\n- Separate directory structure and database initialization\n- Parallel routing logic in both beads and gastown\n- Merge queries from two stores for inbox/list operations\n- All this for what is essentially a boolean flag\n\n## Current State\n\nGastown mail code has to:\n- resolveWispDir() vs resolveBeadsDir()\n- Query both, merge results\n- Route deletes to correct store\n\n## Proposed Solution\n\nSingle database with an ephemeral boolean field on Issue. Behavior:\n- bd create --ephemeral sets the flag\n- JSONL export SKIPS ephemeral issues (they never sync)\n- bd list shows all by default, --ephemeral / --persistent filters\n- bd mol squash clears the flag (promotes to permanent, now exports)\n- bd wisp gc deletes old ephemeral issues from the db\n- No separate directory, no separate routing\n\n## Migration\n\n1. Update beads to support ephemeral flag in main db\n2. Update bd wisp commands to work with flag instead of separate dir\n3. Update gastown mail to use simple --ephemeral flag (remove all dual-routing)\n4. 
Deprecate .beads-wisp/ directory pattern\n\n## Acceptance Criteria\n\n- Single database for all issues (ephemeral and persistent)\n- ephemeral field on Issue type\n- JSONL export skips ephemeral issues\n- bd create --ephemeral works\n- bd mol squash promotes ephemeral to persistent\n- Gastown mail uses simple flag, no dual-routing\n","status":"closed","priority":0,"issue_type":"feature","created_at":"2025-12-24T20:06:27.980055-08:00","updated_at":"2025-12-24T20:43:07.065124-08:00","closed_at":"2025-12-24T20:43:07.065124-08:00","close_reason":"Implemented: single DB with Wisp flag, deprecated wisp storage functions"} +{"id":"bd-blh0","title":"gt nudge doesn't work with crew addresses","description":"## Bug\n\n`gt nudge beads/crew/dave \"message\"` fails because it uses the polecat session manager which produces wrong session names.\n\n## Expected\nSession name: `gt-beads-crew-dave` (hyphen)\n\n## Actual\nSession name: `gt-beads-crew/dave` (slash, from polecat manager)\n\n## Root Cause\n\nIn nudge.go line 46-57:\n```go\nrigName, polecatName, err := parseAddress(target) // beads, crew/dave\nsessionName := mgr.SessionName(polecatName) // gt-beads-crew/dave (WRONG)\n```\n\nShould detect `crew/` prefix in polecatName and use `crewSessionName(rigName, crewName)` instead.\n\n## Fix\n\n```go\nif strings.HasPrefix(polecatName, \"crew/\") {\n crewName := strings.TrimPrefix(polecatName, \"crew/\")\n sessionName = crewSessionName(rigName, crewName)\n} else {\n sessionName = mgr.SessionName(polecatName)\n}\n```","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-26T15:43:32.222784-08:00","updated_at":"2025-12-26T23:36:38.520091-08:00","closed_at":"2025-12-26T23:36:38.520091-08:00","close_reason":"Already fixed in gastown commit 54f0932"} {"id":"bd-bqcc","title":"Consolidate maintenance commands into bd doctor --fix","description":"Per rsnodgrass in GH#692:\n\u003e \"The biggest improvement to beads from an ergonomics perspective would be to prune down 
commands. We have a lot of 'maintenance' commands that probably should just be folded into 'bd doctor --fix' automatically.\"\n\nCurrent maintenance commands that could be consolidated:\n- clean - Clean up temporary git merge artifacts\n- cleanup - Delete closed issues and prune expired tombstones\n- compact - Compact old closed issues\n- detect-pollution - Detect and clean test issues\n- migrate-* (5 commands) - Various migration utilities\n- repair-deps - Fix orphaned dependency references\n- validate - Database health checks\n\nProposal:\n1. Make `bd doctor` the single entry point for health checks\n2. Add `bd doctor --fix` to auto-fix common issues\n3. Deprecate (but keep working) individual commands\n4. Add `bd doctor --all` for comprehensive maintenance\n\nThis would reduce cognitive load for users - they just need to remember 'bd doctor'.\n\nNote: This is higher impact but also higher risk - needs careful design to avoid breaking existing workflows.","status":"closed","priority":2,"issue_type":"feature","assignee":"beads/capable","created_at":"2025-12-22T14:27:31.466556-08:00","updated_at":"2025-12-23T01:33:25.732363-08:00","closed_at":"2025-12-23T01:33:25.732363-08:00","close_reason":"Merged to main"} {"id":"bd-bw6","title":"Fix G104 errors unhandled in internal/storage/sqlite/queries.go:1181","description":"Linting issue: G104: Errors unhandled (gosec) at internal/storage/sqlite/queries.go:1181:4. 
Error: rows.Close()","status":"tombstone","priority":0,"issue_type":"bug","created_at":"2025-12-07T15:35:09.008444133-07:00","updated_at":"2025-12-25T01:21:01.952723-08:00","deleted_at":"2025-12-25T01:21:01.952723-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} {"id":"bd-bwk2","title":"Centralize error handling patterns in storage layer","description":"80+ instances of inconsistent error handling across sqlite.go with mix of %w, %v, and no wrapping.\n\nLocation: internal/storage/sqlite/sqlite.go (throughout)\n\nProblem:\n- Some use fmt.Errorf(\"op failed: %w\", err) - correct wrapping\n- Some use fmt.Errorf(\"op failed: %v\", err) - loses error chain\n- Some return err directly - no context\n- Hard to debug production issues\n- Can't distinguish error types\n\nSolution: Create internal/storage/sqlite/errors.go:\n- Define sentinel errors (ErrNotFound, ErrInvalidID, etc.)\n- Create wrapDBError(op string, err error) helper\n- Convert sql.ErrNoRows to ErrNotFound\n- Always wrap with operation context\n\nImpact: Lost error context; inconsistent messages; hard to debug\n\nEffort: 5-7 hours","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-16T14:51:54.974909-08:00","updated_at":"2025-12-21T21:44:37.237175-08:00","closed_at":"2025-12-21T21:44:37.237175-08:00","close_reason":"Already implemented: errors.go exists with sentinel errors (ErrNotFound, ErrInvalidID, ErrConflict, ErrCycle), wrapDBError/wrapDBErrorf helpers that convert sql.ErrNoRows to ErrNotFound, and IsNotFound/IsConflict/IsCycle checkers. 41 uses of wrapDBError, 347 uses of proper %w wrapping, 0 uses of %v. 
Added one minor fix to CheckpointWAL."} @@ -307,6 +312,8 @@ {"id":"bd-hy9p","title":"Add --body-file flag to bd create for reading descriptions from files","description":"## Problem\n\nCreating issues with long/complex descriptions via CLI requires shell escaping gymnastics:\n\n```bash\n# Current workaround - awkward heredoc quoting\nbd create --title=\"...\" --description=\"$(cat \u003c\u003c'EOF'\n...markdown...\nEOF\n)\"\n\n# Often fails with quote escaping errors in eval context\n# Agents resort to writing temp files then reading them\n```\n\n## Proposed Solution\n\nAdd `--body-file` and `--description-file` flags to read description from a file, matching `gh` CLI pattern.\n\n```bash\n# Natural pattern that aligns with training data\ncat \u003e /tmp/desc.md \u003c\u003c 'EOF'\n...markdown content...\nEOF\n\nbd create --title=\"...\" --body-file=/tmp/desc.md\n```\n\n## Implementation\n\n### 1. Add new flags to `bd create`\n\n```go\ncreateCmd.Flags().String(\"body-file\", \"\", \"Read description from file (use - for stdin)\")\ncreateCmd.Flags().String(\"description-file\", \"\", \"Alias for --body-file\")\n```\n\n### 2. Flag precedence\n\n- If `--body-file` or `--description-file` is provided, read from file\n- If value is `-`, read from stdin\n- Otherwise fall back to `--body` or `--description` flag\n- If neither provided, description is empty (current behavior)\n\n### 3. 
Error handling\n\n- File doesn't exist → clear error message\n- File not readable → clear error message\n- stdin specified but not available → clear error message\n\n## Benefits\n\n✅ **Matches training data**: `gh issue create --body-file file.txt` is a common pattern\n✅ **No shell escaping issues**: File content is read directly\n✅ **Works with any content**: Markdown, special characters, quotes, etc.\n✅ **Agent-friendly**: Agents already write complex content to temp files\n✅ **User-friendly**: Easier for humans too when pasting long descriptions\n\n## Related Commands\n\nConsider adding similar support to:\n- `bd update --body-file` (for updating descriptions)\n- `bd comment --body-file` (if/when we add comments)\n\n## Examples\n\n```bash\n# From file\nbd create --title=\"Add new feature\" --body-file=feature.md\n\n# From stdin\necho \"Quick description\" | bd create --title=\"Bug fix\" --body-file=-\n\n# With other flags\nbd create \\\n --title=\"Security issue\" \\\n --type=bug \\\n --priority=0 \\\n --body-file=security-report.md \\\n --label=security\n```\n\n## Testing\n\n- Test with normal files\n- Test with stdin (`-`)\n- Test with non-existent files (error handling)\n- Test with binary files (should handle gracefully)\n- Test with empty files (valid - empty description)\n- Test that `--description-file` and `--body-file` are equivalent aliases","status":"tombstone","priority":1,"issue_type":"feature","created_at":"2025-11-22T00:02:08.762684-08:00","updated_at":"2025-12-25T01:21:01.952723-08:00","deleted_at":"2025-12-25T01:21:01.952723-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"feature"} {"id":"bd-hzvz","title":"Update info.go versionChanges","description":"Add entry to versionChanges in cmd/bd/info.go with agent-actionable changes for 
0.30.7","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T22:56:48.649359-08:00","updated_at":"2025-12-19T22:57:31.604229-08:00","closed_at":"2025-12-19T22:57:31.604229-08:00","dependencies":[{"issue_id":"bd-hzvz","depends_on_id":"bd-8pyn","type":"parent-child","created_at":"2025-12-19T22:56:48.652068-08:00","created_by":"stevey"},{"issue_id":"bd-hzvz","depends_on_id":"bd-2ep8","type":"blocks","created_at":"2025-12-19T22:56:48.652376-08:00","created_by":"stevey"}]} {"id":"bd-i0rx","title":"Merge: bd-ao0s","description":"branch: polecat/rictus\ntarget: main\nsource_issue: bd-ao0s\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-20T01:13:42.716658-08:00","updated_at":"2025-12-20T23:17:26.993744-08:00","closed_at":"2025-12-20T23:17:26.993744-08:00","close_reason":"Branches nuked, MRs obsolete"} +{"id":"bd-i5l","title":"Witness Patrol","description":"Per-rig worker monitor patrol loop with progressive nudging.","status":"open","priority":2,"issue_type":"molecule","created_at":"2025-12-26T21:20:47.650732-08:00","created_by":"deacon","updated_at":"2025-12-26T21:20:47.650732-08:00"} +{"id":"bd-i7a6","title":"Test actor flag","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-26T20:47:28.470006-08:00","updated_at":"2025-12-26T20:47:28.470006-08:00"} {"id":"bd-ia3g","title":"BondRef.ProtoID field name is misleading for mol+mol bonds","description":"In bondMolMol, the BondRef.ProtoID field is used to store molecule IDs:\n\n```go\nBondedFrom: append(molA.BondedFrom, types.BondRef{\n ProtoID: molB.ID, // This is a molecule, not a proto\n ...\n})\n```\n\nThis is semantically confusing since ProtoID suggests it should only hold proto references.\n\n**Options:**\n1. Rename ProtoID to SourceID (breaking change, needs migration)\n2. Add documentation clarifying ProtoID can hold molecule IDs in bond context\n3. 
Leave as-is, accept the naming is imprecise\n\nLow priority since it's just naming, not functionality.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-21T10:23:00.755067-08:00","updated_at":"2025-12-25T14:30:47.455867-08:00"} {"id":"bd-ibl9","title":"Merge: bd-4qfb","description":"branch: polecat/Polish\ntarget: main\nsource_issue: bd-4qfb\nrig: beads","status":"closed","priority":2,"issue_type":"merge-request","created_at":"2025-12-23T13:37:57.255125-08:00","updated_at":"2025-12-23T19:12:08.352249-08:00","closed_at":"2025-12-23T19:12:08.352249-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"} {"id":"bd-icfe","title":"gt spawn/crew setup should create .beads/redirect for worktrees","description":"Crew clones and polecats need a .beads/redirect file pointing to the shared beads database (../../mayor/rig/.beads). Currently:\n\n- redirect files can get deleted by git clean\n- not auto-created during gt spawn or worktree setup\n- missing redirects cause 'no beads database found' errors\n\nFound missing in: gastown/joe, beads/zoey (after git clean)\n\nFix options:\n1. gt spawn creates redirect during worktree setup\n2. gt prime regenerates missing redirects\n3. bd commands auto-detect worktree and find shared beads\n\nThis should be standard Gas Town rig configuration.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-21T01:30:26.115872-08:00","updated_at":"2025-12-21T17:51:25.740811-08:00","closed_at":"2025-12-21T17:51:25.740811-08:00","close_reason":"Moved to gastown: gt-b6qm"} @@ -355,6 +362,7 @@ {"id":"bd-kwro.7","title":"Identity Configuration","description":"Implement identity system for sender field.\n\nConfiguration sources (in priority order):\n1. --identity flag on commands\n2. BEADS_IDENTITY environment variable\n3. .beads/config.json: {\"identity\": \"worker-name\"}\n4. 
Default: git user.name or hostname\n\nNew config file support:\n- .beads/config.json for per-repo settings\n- identity field for messaging\n\nHelper function:\n- GetIdentity() string - resolves identity from sources\n\nUpdate bd mail send to use GetIdentity() for sender field.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:02:17.603608-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} {"id":"bd-kwro.8","title":"Hooks System","description":"Implement hook system for extensibility.\n\nHook directory: .beads/hooks/\nHook files (executable scripts):\n- on_create - runs after bd create\n- on_update - runs after bd update \n- on_close - runs after bd close\n- on_message - runs after bd mail send\n\nHook invocation:\n- Pass issue ID as first argument\n- Pass event type as second argument\n- Pass JSON issue data on stdin\n- Run asynchronously (dont block command)\n\nExample hook (GGT notification):\n #!/bin/bash\n gt notify --event=$2 --issue=$1\n\nThis allows GGT to register notification handlers without Beads knowing about GGT.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-16T03:02:23.086393-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} {"id":"bd-kwro.9","title":"Cleanup: --ephemeral flag","description":"Update bd cleanup to handle ephemeral issues.\n\nNew flag:\n- bd cleanup --ephemeral - deletes all CLOSED issues with ephemeral=true\n\nBehavior:\n- Only deletes if status=closed AND ephemeral=true\n- Respects --dry-run flag\n- Reports count of deleted ephemeral issues\n\nThis allows swarm cleanup to remove transient messages without affecting permanent 
issues.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T03:02:28.563871-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-kx1j","title":"Review jordanhubbard chaos testing PR #752","description":"Review jordanhubbard chaos testing PR #752\n\nFINDINGS:\n- Implementation quality: HIGH\n- Recommendation: MERGE WITH MODIFICATIONS\n- Mods: No hard coverage threshold, chaos tests on releases only\n\nDECISION: User agrees. Next steps:\n1. Add chaos tests to release-bump formula\n2. Merge PR #752\n\nReview doc: docs/pr-752-chaos-testing-review.md","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-26T17:22:18.219501-08:00","updated_at":"2025-12-26T23:14:35.902878-08:00","closed_at":"2025-12-26T17:38:14.904621-08:00"} {"id":"bd-kyll","title":"Add daemon-side delete operation tests","description":"Follow-up epic for PR #626: Add comprehensive test coverage for delete operations at the daemon/RPC layer. PR #626 successfully added storage layer tests but identified gaps in daemon-side delete operations and RPC integration testing.\n\n## Scope\nTests needed for:\n1. deleteViaDaemon (cmd/bd/delete.go:21) - RPC client-side deletion command\n2. Daemon RPC delete handler - Server-side deletion via daemon\n3. createTombstone wrapper (cmd/bd/delete.go:335) - Tombstone creation wrapper\n4. 
deleteIssue wrapper (cmd/bd/delete.go:349) - Direct deletion wrapper\n\n## Coverage targets\n- Delete via RPC daemon (both success and error paths)\n- Cascade deletion through daemon\n- Force deletion through daemon\n- Dry-run mode validation\n- Tombstone creation and verification\n- Error handling and edge cases","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-18T13:08:26.039663309-07:00","updated_at":"2025-12-25T01:44:03.584007-08:00","closed_at":"2025-12-25T01:44:03.584007-08:00","close_reason":"All child tasks completed"} {"id":"bd-kyo","title":"Run tests and linting","description":"Run the full test suite and linter:\n\n```bash\nTMPDIR=/tmp go test -short ./...\ngolangci-lint run ./...\n```\n\nFix any failures. Linting warnings acceptable (see LINTING.md).","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:59.290588-08:00","updated_at":"2025-12-24T16:25:30.300951-08:00","dependencies":[{"issue_id":"bd-kyo","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.370234-08:00","created_by":"daemon"},{"issue_id":"bd-kyo","depends_on_id":"bd-8hy","type":"blocks","created_at":"2025-12-18T22:43:20.570742-08:00","created_by":"daemon"}],"deleted_at":"2025-12-24T16:25:30.300951-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} {"id":"bd-kzda","title":"Implement conditional bond type for mol bond","description":"The mol bond command accepts 'conditional' as a bond type but doesn't implement any conditional-specific behavior. 
It currently behaves identically to 'parallel'.\n\n**Expected behavior:**\nConditional bonds should mean 'B runs only if A fails' per the help text (mol.go:318).\n\n**Implementation needed:**\n- Add failure-condition dependency handling\n- Possibly new dependency type or status-based blocking\n- Update bondProtoProto, bondProtoMol, bondMolMol to handle conditional\n\n**Alternative:**\nRemove 'conditional' from valid bond types until implemented.\n\nThis is new functionality, not a regression.","status":"closed","priority":3,"issue_type":"feature","assignee":"beads/toast","created_at":"2025-12-21T10:23:01.966367-08:00","updated_at":"2025-12-23T01:33:25.734264-08:00","closed_at":"2025-12-23T01:33:25.734264-08:00","close_reason":"Merged to main"} @@ -362,6 +370,7 @@ {"id":"bd-l7y3","title":"bd mol bond --pour should set Wisp=false","description":"In mol_bond.go bondProtoMol(), opts.Wisp is hardcoded to true (line 392). This ignores the --pour flag. When user specifies --pour to make an issue persistent, the Wisp field should be false so the issue is not marked for bulk deletion.\n\nCurrent behavior:\n- --pour flag correctly selects regular storage (not wisp storage)\n- But opts.Wisp=true means spawned issues are still marked for cleanup when closed\n\nExpected behavior:\n- --pour should set Wisp=false so persistent issues are not auto-cleaned\n\nComparison with mol_spawn.go (line 204):\n wisp := !pour // Correctly respects --pour flag\n result, err := spawnMolecule(ctx, store, subgraph, vars, assignee, actor, wisp)\n\nFix: Pass pour flag to bondProtoMol and set opts.Wisp = !pour","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-23T15:15:00.562346-08:00","updated_at":"2025-12-23T15:25:22.53144-08:00","closed_at":"2025-12-23T15:25:22.53144-08:00","close_reason":"Fixed - pour parameter now passed through bondProtoMol chain"} {"id":"bd-ldb0","title":"Rename ephemeral → wisp throughout codebase","description":"## The Change\n\nRename 'ephemeral' to 
'wisp' throughout the beads codebase.\n\n## Why\n\n**Ephemeral** is:\n- 4 syllables (too long)\n- Greek/academic (doesn't match bond/burn/squash)\n- Overused in tech (K8s, networking, storage)\n- Passive/descriptive\n\n**Wisp** is:\n- 1 syllable (matches bond/burn/squash)\n- Evocative - you can SEE a wisp\n- Steam engine metaphor - Gas Town is engines, steam wisps rise and dissipate\n- Will-o'-the-wisp - transient spirits that guide then vanish\n- Unique - nobody else uses it\n\n## The Steam Engine Metaphor\n\n```\nEngine does work → generates steam\nSteam wisps rise → execution trace\nSteam condenses → digest (distillate)\nSteam dissipates → cleaned up (burned)\n```\n\n## Full Vocabulary\n\n| Term | Meaning |\n|------|---------|\n| bond | Attach proto to work (creates wisps) |\n| wisp | Temporary execution step |\n| squash | Condense wisps into digest |\n| burn | Destroy wisps without record |\n| digest | Permanent condensed record |\n\n## Changes Required\n\n### Code\n- `Ephemeral bool` → `Wisp bool` in types/issue.go\n- `--ephemeral` flag → remove (wisp is default)\n- `--persistent` flag → keep as opt-out\n- `bd cleanup --ephemeral` → `bd cleanup --wisps`\n- Update all references in mol_*.go files\n\n### Docs\n- Update all documentation\n- Update CLAUDE.md examples\n- Update CLI help text\n\n### Database Migration\n- Add migration to rename field (or keep internal name, just change API)\n\n## Example Usage After\n\n```bash\nbd mol bond mol-polecat-work # Creates wisps (default)\nbd mol bond mol-xxx --persistent # Creates permanent issues\nbd mol squash bd-xxx # Condenses wisps → digest\nbd cleanup --wisps # Clean old wisps\nbd list --wisps # Show wisp issues\n```","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-21T14:44:41.576068-08:00","updated_at":"2025-12-22T00:32:31.153738-08:00","closed_at":"2025-12-22T00:32:31.153738-08:00","close_reason":"Renamed ephemeral → wisp throughout codebase"} {"id":"bd-lfak","title":"bd preflight: PR 
readiness checks for contributors","description":"## Vision\n\nEncode project-specific institutional knowledge into executable checks. CONTRIBUTING.md is documentation that's read once and forgotten; `bd preflight` is documentation that runs at exactly the right moment.\n\n## Problem Statement\n\nContributors face a \"last mile\" problem - they do the work but stumble on project-specific gotchas at PR time:\n- Nix vendorHash gets stale when go.sum changes\n- Beads artifacts leak into PRs (see bd-umbf for namespace solution)\n- Version mismatches between version.go and default.nix\n- Tests/lint not run locally before pushing\n- Other project-specific checks that only surface when CI fails\n\nThese are too obscure to remember, exist in docs nobody reads end-to-end, and waste CI round-trips.\n\n## Why beads?\n\nBeads already has a foothold in the contributor workflow. It knows:\n- Git state (staged files, branch, dirty status)\n- Project structure\n- The specific issue being worked on\n- Project-specific configuration\n\n## Proposed Interface\n\n### Tier 1: Checklist Mode (v1)\n\n $ bd preflight\n PR Readiness Checklist:\n\n [ ] Tests pass: go test -short ./...\n [ ] Lint passes: golangci-lint run ./...\n [ ] No beads pollution: check .beads/issues.jsonl diff\n [ ] Nix hash current: go.sum unchanged or vendorHash updated\n [ ] Version sync: version.go matches default.nix\n\n Run 'bd preflight --check' to validate automatically.\n\n### Tier 2: Check Mode (v2)\n\n $ bd preflight --check\n ✓ Tests pass\n ✓ Lint passes\n ⚠ Beads pollution: 3 issues in diff - are these project issues or personal?\n ✗ Nix hash stale: go.sum changed, vendorHash needs update\n Fix: sha256-KRR6dXzsSw8OmEHGBEVDBOoIgfoZ2p0541T9ayjGHlI=\n ✓ Version sync\n\n 1 error, 1 warning. 
Run 'bd preflight --fix' to auto-fix where possible.\n\n### Tier 3: Fix Mode (v3)\n\n $ bd preflight --fix\n ✓ Updated vendorHash in default.nix\n ⚠ Cannot auto-fix beads pollution - manual review needed\n\n## Checks to Implement\n\n| Check | Description | Auto-fixable |\n|-------|-------------|--------------|\n| tests | Run go test -short ./... | No |\n| lint | Run golangci-lint | Partial (gofmt) |\n| beads-pollution | Detect personal issues in diff | No (see bd-umbf) |\n| nix-hash | Detect stale vendorHash | Yes (if nix available) |\n| version-sync | version.go matches default.nix | Yes |\n| no-debug | No TODO/FIXME/console.log | Warn only |\n| clean-stage | No unintended files staged | Warn only |\n\n## Future: Configuration\n\nMake checks configurable per-project via .beads/preflight.yaml:\n\n preflight:\n checks:\n - name: tests\n run: go test -short ./...\n required: true\n - name: no-secrets\n pattern: \"**/*.env\"\n staged: deny\n - name: custom-check\n run: ./scripts/validate.sh\n\nThis lets any project using beads define their own preflight checks.\n\n## Implementation Phases\n\n### Phase 1: Static Checklist\n- Implement bd preflight with hardcoded checklist for beads\n- No execution, just prints what to check\n- Update CONTRIBUTING.md to reference it\n\n### Phase 2: Automated Checks\n- Implement bd preflight --check\n- Run tests, lint, detect stale hashes\n- Clear pass/fail/warn output\n\n### Phase 3: Auto-fix\n- Implement bd preflight --fix\n- Fix vendorHash, version sync\n- Integrate with bd-umbf solution for pollution\n\n### Phase 4: Configuration\n- .beads/preflight.yaml support\n- Make it useful for other projects using beads\n- Plugin/hook system for custom checks\n\n## Dependencies\n\n- bd-umbf: Namespace isolation for beads pollution (blocking for full solution)\n\n## Success Metrics\n\n- Fewer CI failures on first PR push\n- Reduced \"fix nix hash\" commits\n- Contributors report preflight caught issues before 
CI","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-13T18:01:39.587078-08:00","updated_at":"2025-12-13T18:01:39.587078-08:00","dependencies":[{"issue_id":"bd-lfak","depends_on_id":"bd-umbf","type":"blocks","created_at":"2025-12-13T18:01:46.059901-08:00","created_by":"daemon","metadata":"{}"}]} +{"id":"bd-lfiu","title":"bd dep add: Auto-resolve cross-rig IDs using routes.jsonl","description":"Currently, adding a dependency to an issue in another rig requires verbose external reference syntax:\n\n```bash\n# This fails - can't resolve bd-* from gastown context\nbd dep add gt-xyz bd-abc\n\n# This works but is verbose\nbd dep add gt-xyz external:beads:bd-abc\n```\n\nThe town-level routing (~/gt/.beads/routes.jsonl) already knows how to map prefixes to rigs:\n```json\n{\"prefix\": \"gt-\", \"path\": \"gastown/mayor/rig\"}\n{\"prefix\": \"bd-\", \"path\": \"beads/mayor/rig\"}\n```\n\nEnhancement: When `bd dep add` encounters an ID with a foreign prefix, it should:\n1. Check routes.jsonl for the prefix mapping\n2. Auto-resolve to external:\u003cproject\u003e:\u003cid\u003e internally\n3. Allow the simpler `bd dep add gt-xyz bd-abc` syntax\n\nThis would make cross-rig dependencies much more ergonomic.","status":"in_progress","priority":3,"issue_type":"feature","assignee":"beads/dave","created_at":"2025-12-26T20:20:40.814713-08:00","updated_at":"2025-12-26T23:39:47.263248-08:00"} {"id":"bd-likt","title":"Add daemon RPC support for gate commands","description":"Add daemon RPC support for gate commands.\n\n## Current State\nGate commands require --no-daemon flag because they use direct SQLite access:\n- Gate create needs to write await_type, await_id, timeout_ns, waiters fields\n- Gate wait needs to update waiters JSON array\n- Daemon RPC doesn't have methods for these operations\n\n## Implementation\n\n### 1. 
Add RPC methods to internal/rpc/protocol.go\n\n```go\n// Gate operations\ntype GateCreateArgs struct {\n Title string \\`json:\"title\"\\`\n AwaitType string \\`json:\"await_type\"\\`\n AwaitID string \\`json:\"await_id\"\\`\n Timeout time.Duration \\`json:\"timeout\"\\`\n Waiters []string \\`json:\"waiters\"\\`\n}\n\ntype GateCreateResult struct {\n Issue *types.Issue \\`json:\"issue\"\\`\n}\n\ntype GateListArgs struct {\n All bool \\`json:\"all\"\\` // Include closed gates\n}\n\ntype GateListResult struct {\n Gates []*types.Issue \\`json:\"gates\"\\`\n}\n\ntype GateWaitArgs struct {\n GateID string \\`json:\"gate_id\"\\`\n Waiters []string \\`json:\"waiters\"\\` // Additional waiters to add\n}\n\ntype GateWaitResult struct {\n Gate *types.Issue \\`json:\"gate\"\\`\n AddedCount int \\`json:\"added_count\"\\`\n}\n```\n\n### 2. Add handler methods to internal/daemon/rpc_handler.go\n\n```go\nfunc (h *RPCHandler) GateCreate(ctx context.Context, args *rpc.GateCreateArgs) (*rpc.GateCreateResult, error) {\n now := time.Now()\n gate := \u0026types.Issue{\n Title: args.Title,\n IssueType: types.TypeGate,\n Status: types.StatusOpen,\n Priority: 1,\n Assignee: \"deacon/\",\n Wisp: true,\n AwaitType: args.AwaitType,\n AwaitID: args.AwaitID,\n Timeout: args.Timeout,\n Waiters: args.Waiters,\n CreatedAt: now,\n UpdatedAt: now,\n }\n gate.ContentHash = gate.ComputeContentHash()\n \n if err := h.store.CreateIssue(ctx, gate, h.actor); err != nil {\n return nil, err\n }\n \n return \u0026rpc.GateCreateResult{Issue: gate}, nil\n}\n\nfunc (h *RPCHandler) GateList(ctx context.Context, args *rpc.GateListArgs) (*rpc.GateListResult, error) {\n gateType := types.TypeGate\n filter := types.IssueFilter{IssueType: \u0026gateType}\n if !args.All {\n openStatus := types.StatusOpen\n filter.Status = \u0026openStatus\n }\n \n gates, err := h.store.SearchIssues(ctx, \"\", filter)\n if err != nil {\n return nil, err\n }\n \n return \u0026rpc.GateListResult{Gates: gates}, nil\n}\n\nfunc (h 
*RPCHandler) GateWait(ctx context.Context, args *rpc.GateWaitArgs) (*rpc.GateWaitResult, error) {\n gate, err := h.store.GetIssue(ctx, args.GateID)\n if err != nil {\n return nil, err\n }\n if gate.IssueType != types.TypeGate {\n return nil, fmt.Errorf(\"%s is not a gate\", args.GateID)\n }\n \n // Merge waiters (dedupe)\n waiterSet := make(map[string]bool)\n for _, w := range gate.Waiters {\n waiterSet[w] = true\n }\n added := 0\n for _, w := range args.Waiters {\n if !waiterSet[w] {\n gate.Waiters = append(gate.Waiters, w)\n waiterSet[w] = true\n added++\n }\n }\n \n if added \u003e 0 {\n // Update via store\n updates := map[string]interface{}{\n \"waiters\": gate.Waiters,\n }\n if err := h.store.UpdateIssue(ctx, args.GateID, updates, h.actor); err != nil {\n return nil, err\n }\n }\n \n return \u0026rpc.GateWaitResult{Gate: gate, AddedCount: added}, nil\n}\n```\n\n### 3. Register methods in daemon\n\nIn internal/daemon/server.go, register the new methods:\n```go\nrpc.RegisterMethod(\"gate.create\", h.GateCreate)\nrpc.RegisterMethod(\"gate.list\", h.GateList)\nrpc.RegisterMethod(\"gate.wait\", h.GateWait)\n```\n\n### 4. Add client methods to internal/rpc/client.go\n\n```go\nfunc (c *Client) GateCreate(ctx context.Context, args *GateCreateArgs) (*GateCreateResult, error) {\n var result GateCreateResult\n err := c.Call(ctx, \"gate.create\", args, \u0026result)\n return \u0026result, err\n}\n\nfunc (c *Client) GateList(ctx context.Context, args *GateListArgs) (*GateListResult, error) {\n var result GateListResult\n err := c.Call(ctx, \"gate.list\", args, \u0026result)\n return \u0026result, err\n}\n\nfunc (c *Client) GateWait(ctx context.Context, args *GateWaitArgs) (*GateWaitResult, error) {\n var result GateWaitResult\n err := c.Call(ctx, \"gate.wait\", args, \u0026result)\n return \u0026result, err\n}\n```\n\n### 5. 
Update cmd/bd/gate.go to use daemon\n\n```go\n// In gateCreateCmd Run:\nif daemonClient != nil {\n result, err := daemonClient.GateCreate(ctx, \u0026rpc.GateCreateArgs{\n Title: title,\n AwaitType: awaitType,\n AwaitID: awaitID,\n Timeout: timeout,\n Waiters: notifyAddrs,\n })\n if err != nil {\n FatalError(\"gate create: %v\", err)\n }\n gate = result.Issue\n} else {\n // Existing direct store code\n}\n```\n\n## Files to Modify\n\n1. **internal/rpc/protocol.go** - Add Gate*Args/Result types\n2. **internal/daemon/rpc_handler.go** - Add handler methods\n3. **internal/daemon/server.go** - Register methods\n4. **internal/rpc/client.go** - Add client methods\n5. **cmd/bd/gate.go** - Use daemon client when available\n\n## Testing\n\n```bash\n# Start daemon\nbd daemon start\n\n# Test via daemon (should work without --no-daemon)\nbd gate create --await timer:5m --notify beads/dave\nbd gate list\nbd gate wait \u003cid\u003e --notify beads/alice\n\n# Verify daemon handled it\nbd daemons logs . | grep gate\n```\n\n## Success Criteria\n- All gate commands work without --no-daemon\n- Same behavior in daemon vs direct mode\n- Waiters array updates correctly via RPC\n- Tests pass for RPC gate operations","status":"closed","priority":3,"issue_type":"task","assignee":"beads/Gater","created_at":"2025-12-23T12:13:25.778412-08:00","updated_at":"2025-12-23T13:45:58.398604-08:00","closed_at":"2025-12-23T13:45:58.398604-08:00","close_reason":"Implemented daemon RPC support for all gate commands","dependencies":[{"issue_id":"bd-likt","depends_on_id":"bd-udsi","type":"discovered-from","created_at":"2025-12-23T12:13:36.174822-08:00","created_by":"daemon"},{"issue_id":"bd-likt","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.891992-08:00","created_by":"daemon"}]} {"id":"bd-lk39","title":"Add composite index (issue_id, event_type) on events table","description":"GetCloseReason and GetCloseReasonsForIssues filter by both issue_id and event_type.\n\n**Query 
(queries.go:355-358):**\n```sql\nSELECT comment FROM events\nWHERE issue_id = ? AND event_type = ?\nORDER BY created_at DESC LIMIT 1\n```\n\n**Problem:** Currently uses idx_events_issue but must filter event_type in memory.\n\n**Solution:** Add migration:\n```sql\nCREATE INDEX IF NOT EXISTS idx_events_issue_type ON events(issue_id, event_type);\n```\n\n**Priority:** Low - events table is typically small relative to issues.","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-22T22:58:54.070587-08:00","updated_at":"2025-12-22T23:15:13.841988-08:00","closed_at":"2025-12-22T23:15:13.841988-08:00","close_reason":"Implemented in migration 026_additional_indexes.go","dependencies":[{"issue_id":"bd-lk39","depends_on_id":"bd-h0we","type":"discovered-from","created_at":"2025-12-22T22:58:54.071286-08:00","created_by":"daemon"}]} {"id":"bd-llfl","title":"Improve test coverage for cmd/bd CLI (26.2% → 50%)","description":"The main CLI package (cmd/bd) has only 26.2% test coverage. 
CLI commands should have at least 50% coverage to ensure reliability.\n\nKey areas with low/no coverage:\n- daemon_autostart.go (multiple 0% functions)\n- compact.go (several 0% functions)\n- Various command handlers\n\nCurrent coverage: 26.2%\nTarget coverage: 50%","notes":"## Progress Update (2025-12-23)\n\n### Tests Added\nAdded 683 lines of new tests across 3 files:\n- cmd/bd/daemon_config_test.go (144 lines)\n- cmd/bd/utils_test.go (484 lines) \n- cmd/bd/autostart_test.go (55 additional lines)\n\n### Functions Now Tested\n- daemon_config.go: ensureBeadsDir, getPIDFilePath, getLogFilePath, getSocketPathForPID\n- daemon_autostart.go: determineSocketPath, isDaemonRunningQuiet\n- activity.go: printEvent\n- cleanup.go: showCleanupDeprecationHint\n- upgrade.go: pluralize\n- wisp.go: formatTimeAgo\n- list.go: pinIndicator, sortIssues\n- hooks.go: FormatHookWarnings, isRebaseInProgress, hasBeadsJSONL\n- template.go: extractIDSuffix\n- thanks.go: getContributorsSorted\n\n### Coverage Results\n- Before: 22.5%\n- After: 23.1%\n- Delta: +0.6%\n\n### Remaining Work\nMost remaining untested code (77%) involves:\n1. Daemon/RPC operations (runDaemonLoop, tryAutoStartDaemon, etc.)\n2. Command handlers that require database/daemon setup\n3. 
Git operations (runPreCommitHook, runPostMergeHook, etc.)\n\nTo reach 50%, would need to:\n- Add integration tests with mocked daemon\n- Add scripttest tests for command handlers\n- Add more database-dependent tests\n\nCommit: 4f949c19","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/charlie","created_at":"2025-12-13T20:43:03.123341-08:00","updated_at":"2025-12-23T22:45:57.860498-08:00"} @@ -406,6 +415,7 @@ {"id":"bd-nqyp","title":"mol-beads-release","description":"Release checklist for beads version {{version}}.\n\nThis molecule ensures all release steps are completed properly.\nVariable: {{version}} - target version (e.g., 0.35.0)\n\n## Step: update-release-notes\nUpdate cmd/bd/info.go with release notes for {{version}}.\n\nAdd a new VersionChange entry at the top of versionChanges slice:\n```go\n{\n Version: \"{{version}}\",\n Date: \"YYYY-MM-DD\",\n Changes: []string{\n \"NEW: Feature description\",\n \"FIX: Bug fix description\",\n \"IMPROVED: Enhancement description\",\n },\n},\n```\n\nRun `git log --oneline v\u003cprevious\u003e..HEAD` to see what changed.\n\n## Step: update-changelog\nUpdate CHANGELOG.md with detailed release notes.\n\nAdd a new section after [Unreleased]:\n```markdown\n## [{{version}}] - YYYY-MM-DD\n\n### Added\n- **Feature name** (issue-id) - Description\n\n### Changed\n- **Change description** (issue-id)\n\n### Fixed\n- **Bug fix** (issue-id) - Description\n```\n\nSort by importance, not chronologically.\nNeeds: update-release-notes\n\n## Step: bump-version\nRun the version bump script.\n\n```bash\n./scripts/bump-version.sh {{version}}\n```\n\nThis updates version in all files:\n- cmd/bd/version.go\n- .claude-plugin/*.json\n- integrations/beads-mcp/pyproject.toml\n- npm-package/package.json\n- Hook templates\n\nNeeds: update-changelog\n\n## Step: run-tests\nRun tests and verify lint passes.\n\n```bash\ngo test -short ./...\n```\n\nCI will run full lint, but fix any obvious issues first.\nNeeds: 
bump-version\n\n## Step: commit-release\nCommit the release changes.\n\n```bash\ngit add -A\ngit commit -m \"chore: bump version to v{{version}}\"\n```\n\nNeeds: run-tests\n\n## Step: push-and-tag\nPush commit and create release tag.\n\n```bash\ngit push origin main\ngit tag v{{version}}\ngit push origin v{{version}}\n```\n\nThis triggers GitHub Actions release workflow.\nNeeds: commit-release\n\n## Step: wait-for-ci\nWait for GitHub Actions to complete.\n\nMonitor: https://github.com/steveyegge/beads/actions\n\nCI will:\n- Build binaries via GoReleaser\n- Create GitHub Release with assets\n- Publish to npm (@beads/bd)\n- Publish to PyPI (beads-mcp)\n- Update Homebrew tap\n\nWait until all jobs succeed (~5-10 min).\nNeeds: push-and-tag\n\n## Step: verify-release\nVerify the release is complete.\n\n```bash\n# Check GitHub release\ngh release view v{{version}}\n\n# Check Homebrew\nbrew update \u0026\u0026 brew info steveyegge/beads/bd\n\n# Check npm\nnpm view @beads/bd version\n\n# Check PyPI\npip index versions beads-mcp\n```\n\nNeeds: wait-for-ci\n\n## Step: update-local\nUpdate local installations.\n\n```bash\n# Upgrade Homebrew\nbrew upgrade steveyegge/beads/bd\n\n# Or install from source\n./scripts/bump-version.sh {{version}} --install\n\n# Install MCP locally\npip install -e integrations/beads-mcp\n\n# Restart daemons\npkill -f \"bd daemon\" || true\n```\n\nVerify: `bd --version` shows {{version}}\nNeeds: verify-release\n\n## Step: manual-publish\n(Optional) Manual publish if CI failed.\n\n```bash\n# npm (requires npm login)\n./scripts/bump-version.sh {{version}} --publish-npm\n\n# PyPI (requires TWINE credentials)\n./scripts/bump-version.sh {{version}} --publish-pypi\n\n# Or both\n./scripts/bump-version.sh {{version}} --publish-all\n```\n\nOnly needed if CI publishing failed.\nNeeds: 
wait-for-ci","status":"open","priority":2,"issue_type":"molecule","created_at":"2025-12-23T11:29:39.087936-08:00","updated_at":"2025-12-23T11:29:39.087936-08:00","labels":["template"]} {"id":"bd-nuh1","title":"GH#403: bd doctor --fix circular error message","description":"bd doctor --fix suggests running bd doctor --fix for deletions manifest issue. Fix to provide actual resolution. See GitHub issue #403.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:16.290018-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} {"id":"bd-nurq","title":"Implement bd mol current command","description":"Show what molecule the agent should currently be working on. Referenced by gt-um6q, gt-lz13. Needed for molecule navigation workflow in templates.","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-23T00:17:54.069983-08:00","updated_at":"2025-12-23T01:23:59.523404-08:00","closed_at":"2025-12-23T01:23:59.523404-08:00","close_reason":"Implementation already existed, added tests (TestGetMoleculeProgress, TestFindParentMolecule, TestAdvanceToNextStep*), rebuilt and installed binary"} +{"id":"bd-o18s","title":"Rename 'wisp' back to 'ephemeral' in beads API","description":"The beads API uses 'wisp' terminology (Wisp field, bd wisp command) but the underlying SQLite column is 'ephemeral'. 
\n\nThis creates cognitive overhead since wisp is a Gas Town concept.\n\nRename to use 'ephemeral' consistently:\n- types.Issue.Wisp → types.Issue.Ephemeral\n- JSON field: wisp → ephemeral \n- CLI: bd wisp → bd ephemeral (or just use flags on existing commands)\n\nThe SQLite column already uses 'ephemeral' so no schema migration needed.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-26T20:16:36.627876-08:00","updated_at":"2025-12-26T21:04:10.212439-08:00","closed_at":"2025-12-26T21:04:10.212439-08:00","close_reason":"Renamed 'wisp' to 'ephemeral' throughout the codebase"} {"id":"bd-o34a","title":"Design auto-squash behavior for wisps","description":"Explore the design space for automatic wisp squashing.\n\n**Context:**\nWisps are ephemeral molecules that should be squashed (digest) or burned (no trace)\nwhen complete. Currently this is manual. Should it be automatic?\n\n**Questions to answer:**\n1. When should auto-squash trigger?\n - On molecule completion?\n - On session end/handoff?\n - On patrol detection?\n \n2. What's the default summary for auto-squash?\n - Generic: 'Auto-squashed on completion'\n - Step-based: List closed steps\n - AI-generated: Require agent to provide\n\n3. Should this be configurable?\n - Per-molecule setting in formula?\n - Global config: auto_squash: true/false\n - Per-wisp flag at creation time?\n\n4. 
Who decides - Beads or Gas Town?\n - Beads: Provides operators (squash, burn)\n - Gas Town: Makes policy decisions\n - Proposal: GT patrol molecules call bd mol squash\n\n**Constraints:**\n- Don't lose important context (summary matters)\n- Don't create noise in digest history\n- Respect agent's intent (some wisps should burn, not squash)\n\n**Recommendation:**\nGas Town patrol molecules should have explicit squash/burn steps.\nBeads provides primitives, GT makes policy decisions.\nAuto-squash at Beads level is probably wrong layer.","status":"closed","priority":4,"issue_type":"task","created_at":"2025-12-24T18:23:24.833877-08:00","updated_at":"2025-12-25T22:56:59.210809-08:00","closed_at":"2025-12-25T22:56:59.210809-08:00","close_reason":"Already resolved: Gas Town handles squash/burn policy via templates, Beads provides primitives. Design matches recommendation in issue."} {"id":"bd-o4qy","title":"Improve CheckStaleness error handling","description":"## Problem\n\nCheckStaleness returns 'false' (not stale) for multiple error conditions instead of returning errors. This masks problems.\n\n**Location:** internal/autoimport/autoimport.go:253-285\n\n## Edge Cases That Return False\n\n1. **Invalid last_import_time format** (line 259-262)\n2. **No JSONL file found** (line 267-277) \n3. 
**JSONL stat fails** (line 279-282)\n\n## Fix\n\nReturn errors for abnormal conditions:\n\n```go\nlastImportTime, err := time.Parse(time.RFC3339, lastImportStr)\nif err != nil {\n return false, fmt.Errorf(\"corrupted last_import_time: %w\", err)\n}\n\nif jsonlPath == \"\" {\n return false, fmt.Errorf(\"no JSONL file found\")\n}\n\nstat, err := os.Stat(jsonlPath)\nif err != nil {\n return false, fmt.Errorf(\"cannot stat JSONL: %w\", err)\n}\n```\n\n## Impact\nMedium - edge cases are rare but should be handled\n\n## Effort \n30 minutes - requires updating callers in RPC server","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-11-20T20:17:27.606219-05:00","updated_at":"2025-12-25T01:21:01.952723-08:00","dependencies":[{"issue_id":"bd-o4qy","depends_on_id":"bd-2q6d","type":"blocks","created_at":"2025-11-20T20:18:26.81065-05:00","created_by":"stevey","metadata":"{}"}],"deleted_at":"2025-12-25T01:21:01.952723-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} {"id":"bd-o55a","title":"GH#509: bd doesn't find .beads when running from nested worktrees","description":"When worktrees are nested under main repo (.worktrees/feature/), bd stops at worktree git root instead of continuing to find .beads in parent. See GitHub issue #509 for detailed fix suggestion.","status":"tombstone","priority":2,"issue_type":"bug","created_at":"2025-12-16T01:03:20.281591-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"bug"} @@ -449,6 +459,7 @@ {"id":"bd-pdr2","title":"Consider backwards compatibility for ready() and list() return type change","description":"PR #481 changed the return types of `ready()` and `list()` from `list[Issue]` to `list[IssueMinimal] | CompactedResult`. 
This is a breaking change for MCP clients.\n\n## Impact Assessment\nBreaking change affects:\n- Any MCP client expecting `list[Issue]` from ready()\n- Any MCP client expecting `list[Issue]` from list()\n- Client code that accesses full Issue fields (description, design, acceptance_criteria, timestamps, dependencies, dependents)\n\n## Current Behavior\n- ready() returns `list[IssueMinimal] | CompactedResult`\n- list() returns `list[IssueMinimal] | CompactedResult`\n- show() still returns full `Issue` (good)\n\n## Considerations\n**Pros of current approach:**\n- Forces clients to use show() for full details (good for context efficiency)\n- Simple mental model (always use show for full data)\n- Documentation warns about this\n\n**Cons:**\n- Clients expecting list[Issue] will break\n- No graceful degradation option\n- No migration period\n\n## Potential Solutions\n1. Add optional parameter `full_details=false` to ready/list (would increase payload)\n2. Create separate tools: ready_minimal/list_minimal + ready_full/list_full\n3. Accept breaking change and document upgrade path (current approach)\n4. 
Version the MCP server and document migration guide\n\n## Recommendation\nCurrent approach (solution 3) is reasonable if:\n- Changelog clearly documents the breaking change\n- Migration guide provided to clients\n- Error handling is graceful for clients expecting specific fields","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-14T14:24:56.460465-08:00","updated_at":"2025-12-14T14:24:56.460465-08:00","dependencies":[{"issue_id":"bd-pdr2","depends_on_id":"bd-otf4","type":"discovered-from","created_at":"2025-12-14T14:24:56.461959-08:00","created_by":"stevey","metadata":"{}"}]} {"id":"bd-pe4s","title":"JSON test issue","description":"Line 1\nLine 2\nLine 3","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T16:14:36.969074-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} {"id":"bd-pgcs","title":"Clean up orphaned child issues (bd-cb64c226.*, bd-cbed9619.*)","description":"## Problem\n\nEvery bd command shows warnings about 12 orphaned child issues:\n- bd-cb64c226.1, .6, .8, .9, .10, .12, .13\n- bd-cbed9619.1, .2, .3, .4, .5\n\nThese are hierarchical IDs (parent.child format) where the parent issues no longer exist.\n\n## Impact\n\n- Clutters output of every bd command\n- Confusing for users\n- Indicates incomplete cleanup of deleted parent issues\n\n## Proposed Solution\n\n1. Delete the orphaned issues since their parents no longer exist:\n ```bash\n bd delete bd-cb64c226.1 bd-cb64c226.6 bd-cb64c226.8 ...\n ```\n\n2. 
Or convert them to top-level issues if they contain useful content\n\n## Investigation Needed\n\n- What were the parent issues bd-cb64c226 and bd-cbed9619?\n- Why were they deleted without their children?\n- Should bd delete cascade to children automatically?","status":"tombstone","priority":2,"issue_type":"task","created_at":"2025-12-16T23:06:17.240571-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"task"} +{"id":"bd-pgh","title":"Deacon Patrol","description":"Mayor's daemon patrol loop for handling callbacks, health checks, and cleanup.","status":"open","priority":2,"issue_type":"molecule","created_at":"2025-12-26T21:20:47.62144-08:00","created_by":"deacon","updated_at":"2025-12-26T21:20:47.62144-08:00"} {"id":"bd-phtv","title":"bd pin: pinned field overwritten by subsequent bd commands","description":"## Summary\n\nThe `bd pin` command correctly sets `pinned=1` in SQLite, but any subsequent `bd` command (including read-only commands like `bd show`) resets `pinned` to 0.\n\n## Reproduction Steps\n\n```bash\nbd --no-daemon pin \u003cissue-id\u003e --for=max\nsqlite3 .beads/beads.db \"SELECT id, pinned FROM issues WHERE id=\\\"\u003cissue-id\u003e\\\"\"\n# Shows pinned=1 ✓\n\nbd --no-daemon show \u003cissue-id\u003e --json\nsqlite3 .beads/beads.db \"SELECT id, pinned FROM issues WHERE id=\\\"\u003cissue-id\u003e\\\"\"\n# Shows pinned=0 ✗ WRONG\n```\n\n## Root Cause Investigation\n\n### Prime Suspects\n\n1. **JSONL import overwrites DB** - The `pinned` field has `omitempty`, so false values aren't in JSONL. When JSONL is imported, it overwrites the DB pinned=1 with default pinned=0.\n\n2. 
**Files to check:**\n - `internal/importer/importer.go` - ImportIssue() may unconditionally set all fields\n - `internal/storage/sqlite/issues.go` - UpsertIssue() may not preserve pinned\n - `cmd/bd/main.go` - ensureStoreActive() may trigger import\n\n### Debug Steps\n\n```bash\n# Add debug logging to track what is writing pinned=0\ngrep -rn \"pinned\" internal/storage/sqlite/*.go\ngrep -rn \"Pinned\" internal/importer/*.go\n```\n\n## Likely Fix\n\nIn `internal/importer/importer.go` or `internal/storage/sqlite/issues.go`:\n\n```go\n// When upserting from JSONL, preserve pinned field if already set\nfunc (s *SQLiteStorage) UpsertIssue(ctx context.Context, issue *types.Issue) error {\n // Check if issue exists and is pinned\n existing, _ := s.GetIssue(ctx, issue.ID)\n if existing != nil \u0026\u0026 existing.Pinned \u0026\u0026 !issue.Pinned {\n // Preserve existing pinned status\n issue.Pinned = existing.Pinned\n }\n // ... rest of upsert\n}\n```\n\nOR the import should skip fields that are omitempty and not present in JSONL:\n\n```go\n// In importer, only update fields that are explicitly set in JSONL\n// Pinned with omitempty means absent = don't change, not absent = false\n```\n\n## Testing\n\n```bash\n# After fix:\nbd --no-daemon pin \u003cissue-id\u003e --for=max\nbd --no-daemon show \u003cissue-id\u003e --json # Should not reset pinned\nbd list --pinned # Should show the pinned issue\nbd hook --agent max # Should show pinned work\n```\n\n## Files to Modify\n\n1. **internal/importer/importer.go** - Preserve pinned on import\n2. **internal/storage/sqlite/issues.go** - UpsertIssue preserve pinned\n3. 
**Add test** in internal/importer/importer_test.go\n\n## Success Criteria\n- `bd pin` survives subsequent bd commands\n- `bd list --pinned` shows pinned issues\n- `bd hook --agent X` shows pinned work\n- Existing tests still pass","status":"closed","priority":1,"issue_type":"bug","assignee":"beads/Pinner","created_at":"2025-12-23T12:32:20.046988-08:00","updated_at":"2025-12-23T13:47:49.936021-08:00","closed_at":"2025-12-23T13:47:49.936021-08:00","close_reason":"Fixed two code paths in importer.go and multirepo.go that overwrote pinned field. Tests pass. May need follow-up if bug persists.","labels":["export:pinned-field-fix"],"dependencies":[{"issue_id":"bd-phtv","depends_on_id":"bd-iz5t","type":"parent-child","created_at":"2025-12-23T12:44:07.140151-08:00","created_by":"daemon"}]} {"id":"bd-phwd","title":"Add timeout message for long-running git push operations","description":"When git push hangs waiting for credential/browser auth, show a periodic message to the user instead of appearing frozen. Add timeout messaging after N seconds of inactivity during git operations.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-21T11:44:57.318984535-07:00","updated_at":"2025-12-21T11:46:05.218023559-07:00","closed_at":"2025-12-21T11:46:05.218023559-07:00"} {"id":"bd-psg","title":"Add tests for dependency management","description":"Key dependency functions like mergeBidirectionalTrees, GetDependencyTree, and DetectCycles have low or no coverage. 
These are essential for maintaining data integrity in the dependency graph.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T07:00:43.458548462-07:00","updated_at":"2025-12-19T09:54:57.018745301-07:00","closed_at":"2025-12-18T10:24:56.271508339-07:00","dependencies":[{"issue_id":"bd-psg","depends_on_id":"bd-6ss","type":"discovered-from","created_at":"2025-12-18T07:00:43.463910911-07:00","created_by":"matt"}]} @@ -534,9 +545,11 @@ {"id":"bd-umbf","title":"Design contributor namespace isolation for beads pollution prevention","description":"## Problem\n\nWhen contributors work on beads-the-project using beads-the-tool, their personal work-tracking issues leak into PRs. The .beads/issues.jsonl is intentionally tracked (it's the project's issue database), but contributors' local issues pollute the diff.\n\nThis is a recursion problem unique to self-hosting projects.\n\n## Possible Solutions to Explore\n\n1. **Contributor namespaces** - Each contributor gets a private prefix (e.g., `bd-steve-xxxx`) that's gitignored or filtered\n2. **Separate database** - Contributors use BEADS_DIR pointing elsewhere for personal tracking\n3. **Issue ownership/visibility flags** - Mark issues as \"local-only\" vs \"project\"\n4. **Prefix-based filtering** - Configure which prefixes are committed vs ignored\n\n## Design Considerations\n\n- Should be zero-friction for contributors (no manual setup)\n- Must not break existing workflows\n- Needs to work with sync/collaboration features\n- Consider: what if a \"personal\" issue graduates to \"project\" issue?\n\n## Expansion Needed\n\nThis is a placeholder. 
Needs detailed design exploration before implementation.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-13T18:00:29.638743-08:00","updated_at":"2025-12-13T18:00:41.345673-08:00"} {"id":"bd-uqfn","title":"Work on beads-wkt: Output control parameters for MCP tool...","description":"Work on beads-wkt: Output control parameters for MCP tools (GH#622). Add brief, fields, max_description_length params to ready/list/show. When done, submit MR (not PR) to integration branch for Refinery.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-19T22:57:10.675535-08:00","updated_at":"2025-12-20T00:49:51.929271-08:00","closed_at":"2025-12-19T23:28:25.362931-08:00","close_reason":"Implemented output control parameters for MCP tools (GH#622)"} {"id":"bd-usro","title":"Rename 'template instantiate' to 'mol bond'","description":"Rename the template instantiation command to match molecule metaphor.\n\nCurrent: bd template instantiate \u003cid\u003e --var key=value\nTarget: bd mol bond \u003cid\u003e --var key=value\n\nChanges needed:\n- Add 'mol' command group (or extend existing)\n- Add 'bond' subcommand that wraps template instantiate logic\n- Keep 'template instantiate' as deprecated alias for backward compat\n- Update help text and docs to use molecule terminology\n\nThe 'bond' verb captures:\n1. Chemistry metaphor (molecules bond to form structures)\n2. Dependency linking (child issues bonded in a DAG)\n3. Short and active\n\nSee also: molecule execution model in Gas Town","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-20T16:56:37.582795-08:00","updated_at":"2025-12-20T23:22:43.567337-08:00","closed_at":"2025-12-20T23:22:43.567337-08:00","close_reason":"Implemented mol command: catalog, show, bond"} +{"id":"bd-uu8p","title":"Routing bypassed in daemon mode","description":"## Bug\n\nPrefix-based routing (routes.jsonl) only works in direct mode. 
When the daemon is running, it bypasses routing logic because:\n\n1. Daemon mode resolves IDs via RPC (`daemonClient.ResolveID`)\n2. Daemon uses its own store, which only knows about local beads\n3. The routing logic in `resolveAndGetIssueWithRouting` is only called in direct mode\n\n## Reproduction\n\n```bash\n# With daemon running:\ncd ~/gt\nbd show gt-1py3y # Fails - daemon can't find it\n\n# With daemon bypassed:\nbd --no-daemon show gt-1py3y # Works - uses routing\n```\n\n## Fix Options\n\n### Option A: Client-side routing detection (Recommended)\nBefore calling daemon RPC, check if the ID prefix matches a route to a different beads dir. If so, use direct mode with routing for that ID.\n\n```go\n// In show.go, before daemon mode:\nif daemonClient != nil \u0026\u0026 needsRouting(id) {\n // Fall back to direct mode for this ID\n result, err := resolveAndGetIssueWithRouting(ctx, store, id)\n ...\n}\n```\n\n### Option B: Daemon-side routing\nAdd routing support to the daemon RPC server. More complex - daemon would need to:\n- Load routes.jsonl on startup\n- Open connections to multiple databases\n- Route requests based on ID prefix\n\n### Option C: Hybrid\nDaemon returns \"not found + prefix hint\", client retries with routing.\n\n## Recommendation\n\nOption A is simplest. The client already has routing logic; just use it when we detect a routed prefix.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-12-26T14:52:00.452285-08:00","updated_at":"2025-12-26T14:54:51.572289-08:00","closed_at":"2025-12-26T14:54:51.572289-08:00","close_reason":"Fixed by bypassing daemon for routed IDs"} {"id":"bd-uutv","title":"Work on beads-rs0: Namepool configuration for themed pole...","description":"Work on beads-rs0: Namepool configuration for themed polecat names. 
See bd show beads-rs0 for full details.","status":"closed","priority":2,"issue_type":"task","assignee":"beads/polecat-02","created_at":"2025-12-19T21:49:48.129778-08:00","updated_at":"2025-12-19T21:59:25.565894-08:00","closed_at":"2025-12-19T21:59:25.565894-08:00","close_reason":"Completed work on beads-rs0: Implemented themed namepool feature"} {"id":"bd-uwkp","title":"Phase 2.4: Git merge driver optimization for TOON format","description":"Optimize git 3-way merge for TOON line-oriented format.\n\n## Overview\nTOON is line-oriented (unlike binary formats), enabling smarter git merge strategies. Implement custom merge driver to handle TOON-specific merge patterns.\n\n## Required Work\n\n### 2.4.1 TOON Merge Driver\n- [ ] Create .git/info/attributes entry for *.toon files\n- [ ] Implement custom merge driver script/command\n- [ ] Handle tabular format row merges (line-based 3-way)\n- [ ] Handle YAML-style format merges\n- [ ] Conflict markers for unsolvable conflicts\n\n### 2.4.2 Merge Patterns\n- [ ] Row addition: both branches add different rows → union\n- [ ] Row deletion: one branch deletes, other modifies → conflict (manual review)\n- [ ] Row modification: concurrent field changes → intelligent merge or conflict\n- [ ] Field ordering changes: ignore (TOON format resilient to order)\n\n### 2.4.3 Testing \u0026 Documentation\n- [ ] Unit tests for merge scenarios (3-way merge logic)\n- [ ] Integration tests with actual git merges\n- [ ] Conflict scenario testing\n- [ ] Documentation of merge strategy\n\n## Success Criteria\n- Git merge handles TOON conflicts intelligently\n- Fewer manual merge conflicts than JSONL\n- Round-trip preserved through merges\n- All 70+ tests still passing\n- Git history stays clean (minimal conflict markers)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:43:14.339238776-07:00","updated_at":"2025-12-21T14:42:26.434306-08:00","closed_at":"2025-12-21T14:42:26.434306-08:00","close_reason":"TOON approach 
declined","dependencies":[{"issue_id":"bd-uwkp","depends_on_id":"bd-iic1","type":"discovered-from","created_at":"2025-12-19T14:43:14.34427988-07:00","created_by":"daemon"}]} {"id":"bd-uz8r","title":"Phase 2.3: TOON deletion tracking","description":"Implement deletion tracking in TOON format.\n\n## Overview\nPhase 2.2 switched storage to TOON format. Phase 2.3 adds deletion tracking in TOON format for propagating deletions across clones.\n\n## Required Work\n\n### 2.3.1 Deletion Tracking (TOON Format)\n- [ ] Implement deletions.toon file (tracking deleted issue records)\n- [ ] Add DeleteTracker struct to record deleted issue IDs and metadata\n- [ ] Update bdt delete command to record in deletions.toon\n- [ ] Design deletion record format (ID, timestamp, reason, hash)\n- [ ] Implement auto-prune of old deletion records (configurable TTL)\n\n### 2.3.2 Sync Propagation\n- [ ] Load deletions.toon during import\n- [ ] Remove deleted issues from local database when imported from remote\n- [ ] Handle edge cases (delete same issue in multiple clones)\n- [ ] Deletion ordering and conflict resolution\n\n### 2.3.3 Testing\n- [ ] Unit tests for deletion tracking\n- [ ] Integration tests for deletion propagation\n- [ ] Multi-clone deletion scenarios\n- [ ] TTL expiration tests\n\n## Success Criteria\n- deletions.toon stores deletion records in TOON format\n- Deletions propagate across clones via git sync\n- Old records auto-prune after TTL\n- All 70+ tests still passing\n- bdt delete command works seamlessly","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-19T14:37:23.722066816-07:00","updated_at":"2025-12-21T14:42:27.491932-08:00","closed_at":"2025-12-21T14:42:27.491932-08:00","close_reason":"TOON approach declined","dependencies":[{"issue_id":"bd-uz8r","depends_on_id":"bd-iic1","type":"discovered-from","created_at":"2025-12-19T14:37:23.726825771-07:00","created_by":"daemon"}]} +{"id":"bd-v8ku","title":"bd: Add town-level activity signal in 
PersistentPreRun","description":"Add activity signaling to beads so Gas Town daemon can detect bd usage.\n\nIn cmd/bd/main.go PersistentPreRun, add a call to write activity to\nthe Gas Town daemon directory if running inside a Gas Town workspace.\n\nThe signal file is ~/gt/daemon/activity.json (or detected town root).\n\nFormat:\n{\n \"last_command\": \"bd create ...\",\n \"actor\": \"gastown/crew/max\",\n \"timestamp\": \"2025-12-26T19:30:00Z\"\n}\n\nShould be best-effort (silent failure) to avoid breaking bd outside Gas Town.\n\nCross-rig ref: gastown gt-ws8ol (Deacon exponential backoff epic)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-12-26T19:25:13.537055-08:00","updated_at":"2025-12-26T19:28:44.919491-08:00","closed_at":"2025-12-26T19:28:44.919491-08:00","close_reason":"Implemented activity signaling in PersistentPreRun"} {"id":"bd-vgi5","title":"Push version bump to GitHub","description":"git push origin main - triggers CI but no release yet.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-18T22:43:05.363604-08:00","updated_at":"2025-12-24T16:25:30.019895-08:00","dependencies":[{"issue_id":"bd-vgi5","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:43:16.87736-08:00","created_by":"daemon"},{"issue_id":"bd-vgi5","depends_on_id":"bd-3ggb","type":"blocks","created_at":"2025-12-18T22:43:21.078208-08:00","created_by":"daemon"}],"deleted_at":"2025-12-24T16:25:30.019895-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"} {"id":"bd-vks2","title":"bd dep tree doesn't display external dependencies","description":"GetDependencyTree (dependencies.go:464-624) uses a recursive CTE that JOINs with the issues table, which means external refs (external:project:capability) are invisible in the tree output.\n\nWhen an issue has an external blocking dependency, running 'bd dep tree \u003cid\u003e' won't show it.\n\nOptions:\n1. 
Query dependencies table separately for external refs and display them as leaf nodes\n2. Add a synthetic 'external' node type that shows the ref and resolution status\n3. Document that external deps aren't shown in tree view (use bd show for full deps)\n\nLower priority since bd show \u003cid\u003e displays all dependencies including external refs.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-12-21T23:45:27.121934-08:00","updated_at":"2025-12-22T22:30:19.083652-08:00","closed_at":"2025-12-22T22:30:19.083652-08:00","close_reason":"Implemented: GetDependencyTree now fetches external deps and adds them as synthetic leaf nodes with resolution status. Added test TestGetDependencyTreeExternalDeps. Updated formatTreeNode to display external deps specially.","dependencies":[{"issue_id":"bd-vks2","depends_on_id":"bd-zmmy","type":"discovered-from","created_at":"2025-12-21T23:45:27.122511-08:00","created_by":"daemon"}]} {"id":"bd-vpan","title":"Re: Thread Test 2","description":"Got your message. Testing reply feature.","status":"tombstone","priority":2,"issue_type":"message","created_at":"2025-12-16T18:21:29.144352-08:00","updated_at":"2025-12-17T16:11:17.070763-08:00","dependencies":[{"issue_id":"bd-vpan","depends_on_id":"bd-x36g","type":"replies-to","created_at":"2025-12-18T13:45:31.137191-08:00","created_by":"migration"}],"deleted_at":"2025-12-17T16:11:17.070763-08:00","deleted_by":"batch delete","delete_reason":"batch delete","original_type":"message"} diff --git a/Makefile b/Makefile index 4ea2c17c..cc0cd528 100644 --- a/Makefile +++ b/Makefile @@ -13,7 +13,7 @@ build: # Run all tests (skips known broken tests listed in .test-skip) test: @echo "Running tests..." 
- @./scripts/test.sh + @TEST_COVER=1 ./scripts/test.sh # Run performance benchmarks (10K and 20K issue databases with automatic CPU profiling) # Generates CPU profile: internal/storage/sqlite/bench-cpu-.prof diff --git a/cmd/bd/autoflush.go b/cmd/bd/autoflush.go index 8d3902a9..73f571c8 100644 --- a/cmd/bd/autoflush.go +++ b/cmd/bd/autoflush.go @@ -671,7 +671,7 @@ func flushToJSONLWithState(state flushState) { issues := make([]*types.Issue, 0, len(issueMap)) wispsSkipped := 0 for _, issue := range issueMap { - if issue.Wisp { + if issue.Ephemeral { wispsSkipped++ continue } diff --git a/cmd/bd/cleanup.go b/cmd/bd/cleanup.go index be42c60b..dd7fe371 100644 --- a/cmd/bd/cleanup.go +++ b/cmd/bd/cleanup.go @@ -15,7 +15,7 @@ type CleanupEmptyResponse struct { DeletedCount int `json:"deleted_count"` Message string `json:"message"` Filter string `json:"filter,omitempty"` - Wisp bool `json:"wisp,omitempty"` + Ephemeral bool `json:"ephemeral,omitempty"` } // Hard delete mode: bypass tombstone TTL safety, use --older-than days directly @@ -56,7 +56,7 @@ Delete issues closed more than 30 days ago: bd cleanup --older-than 30 --force Delete only closed wisps (transient molecules): - bd cleanup --wisp --force + bd cleanup --ephemeral --force Preview what would be deleted/pruned: bd cleanup --dry-run @@ -80,7 +80,7 @@ SEE ALSO: cascade, _ := cmd.Flags().GetBool("cascade") olderThanDays, _ := cmd.Flags().GetInt("older-than") hardDelete, _ := cmd.Flags().GetBool("hard") - wispOnly, _ := cmd.Flags().GetBool("wisp") + wispOnly, _ := cmd.Flags().GetBool("ephemeral") // Calculate custom TTL for --hard mode // When --hard is set, use --older-than days as the tombstone TTL cutoff @@ -129,7 +129,7 @@ SEE ALSO: // Add wisp filter if specified (bd-kwro.9) if wispOnly { wispTrue := true - filter.Wisp = &wispTrue + filter.Ephemeral = &wispTrue } // Get all closed issues matching filter @@ -165,7 +165,7 @@ SEE ALSO: result.Filter = fmt.Sprintf("older than %d days", olderThanDays) } if wispOnly 
{ - result.Wisp = true + result.Ephemeral = true } outputJSON(result) } else { @@ -270,6 +270,6 @@ func init() { cleanupCmd.Flags().Bool("cascade", false, "Recursively delete all dependent issues") cleanupCmd.Flags().Int("older-than", 0, "Only delete issues closed more than N days ago (0 = all closed issues)") cleanupCmd.Flags().Bool("hard", false, "Bypass tombstone TTL safety; use --older-than days as cutoff") - cleanupCmd.Flags().Bool("wisp", false, "Only delete closed wisps (transient molecules)") + cleanupCmd.Flags().Bool("ephemeral", false, "Only delete closed wisps (transient molecules)") rootCmd.AddCommand(cleanupCmd) } diff --git a/cmd/bd/cli_coverage_show_test.go b/cmd/bd/cli_coverage_show_test.go new file mode 100644 index 00000000..6eb4d661 --- /dev/null +++ b/cmd/bd/cli_coverage_show_test.go @@ -0,0 +1,426 @@ +//go:build e2e + +package main + +import ( + "bytes" + "context" + "encoding/json" + "io" + "os" + "path/filepath" + "strings" + "sync" + "testing" + "time" + + "github.com/steveyegge/beads/internal/storage/sqlite" + "github.com/steveyegge/beads/internal/types" +) + +var cliCoverageMutex sync.Mutex + +func runBDForCoverage(t *testing.T, dir string, args ...string) (stdout string, stderr string) { + t.Helper() + + cliCoverageMutex.Lock() + defer cliCoverageMutex.Unlock() + + // Add --no-daemon to all commands except init. + if len(args) > 0 && args[0] != "init" { + args = append([]string{"--no-daemon"}, args...) + } + + oldStdout := os.Stdout + oldStderr := os.Stderr + oldDir, _ := os.Getwd() + oldArgs := os.Args + + if err := os.Chdir(dir); err != nil { + t.Fatalf("chdir %s: %v", dir, err) + } + + rOut, wOut, _ := os.Pipe() + rErr, wErr, _ := os.Pipe() + os.Stdout = wOut + os.Stderr = wErr + + // Ensure direct mode. 
+ oldNoDaemon, noDaemonWasSet := os.LookupEnv("BEADS_NO_DAEMON") + os.Setenv("BEADS_NO_DAEMON", "1") + defer func() { + if noDaemonWasSet { + _ = os.Setenv("BEADS_NO_DAEMON", oldNoDaemon) + } else { + os.Unsetenv("BEADS_NO_DAEMON") + } + }() + + // Mark tests explicitly. + oldTestMode, testModeWasSet := os.LookupEnv("BEADS_TEST_MODE") + os.Setenv("BEADS_TEST_MODE", "1") + defer func() { + if testModeWasSet { + _ = os.Setenv("BEADS_TEST_MODE", oldTestMode) + } else { + os.Unsetenv("BEADS_TEST_MODE") + } + }() + + // Ensure all commands (including init) operate on the temp workspace DB. + db := filepath.Join(dir, ".beads", "beads.db") + beadsDir := filepath.Join(dir, ".beads") + oldBeadsDir, beadsDirWasSet := os.LookupEnv("BEADS_DIR") + os.Setenv("BEADS_DIR", beadsDir) + defer func() { + if beadsDirWasSet { + _ = os.Setenv("BEADS_DIR", oldBeadsDir) + } else { + os.Unsetenv("BEADS_DIR") + } + }() + + oldDB, dbWasSet := os.LookupEnv("BEADS_DB") + os.Setenv("BEADS_DB", db) + defer func() { + if dbWasSet { + _ = os.Setenv("BEADS_DB", oldDB) + } else { + os.Unsetenv("BEADS_DB") + } + }() + oldBDDB, bdDBWasSet := os.LookupEnv("BD_DB") + os.Setenv("BD_DB", db) + defer func() { + if bdDBWasSet { + _ = os.Setenv("BD_DB", oldBDDB) + } else { + os.Unsetenv("BD_DB") + } + }() + + // Ensure actor is set so label operations record audit fields. + oldActor, actorWasSet := os.LookupEnv("BD_ACTOR") + os.Setenv("BD_ACTOR", "test-user") + defer func() { + if actorWasSet { + _ = os.Setenv("BD_ACTOR", oldActor) + } else { + os.Unsetenv("BD_ACTOR") + } + }() + oldBeadsActor, beadsActorWasSet := os.LookupEnv("BEADS_ACTOR") + os.Setenv("BEADS_ACTOR", "test-user") + defer func() { + if beadsActorWasSet { + _ = os.Setenv("BEADS_ACTOR", oldBeadsActor) + } else { + os.Unsetenv("BEADS_ACTOR") + } + }() + + rootCmd.SetArgs(args) + os.Args = append([]string{"bd"}, args...) + + err := rootCmd.Execute() + + // Close and clean up all global state to prevent contamination between tests. 
+ if store != nil { + store.Close() + store = nil + } + if daemonClient != nil { + daemonClient.Close() + daemonClient = nil + } + + // Reset all global flags and state (keep aligned with integration cli_fast_test). + dbPath = "" + actor = "" + jsonOutput = false + noDaemon = false + noAutoFlush = false + noAutoImport = false + sandboxMode = false + noDb = false + autoFlushEnabled = true + storeActive = false + flushFailureCount = 0 + lastFlushError = nil + if flushManager != nil { + _ = flushManager.Shutdown() + flushManager = nil + } + rootCtx = nil + rootCancel = nil + + // Give SQLite time to release file locks. + time.Sleep(10 * time.Millisecond) + + _ = wOut.Close() + _ = wErr.Close() + os.Stdout = oldStdout + os.Stderr = oldStderr + _ = os.Chdir(oldDir) + os.Args = oldArgs + rootCmd.SetArgs(nil) + + var outBuf, errBuf bytes.Buffer + _, _ = io.Copy(&outBuf, rOut) + _, _ = io.Copy(&errBuf, rErr) + _ = rOut.Close() + _ = rErr.Close() + + stdout = outBuf.String() + stderr = errBuf.String() + + if err != nil { + t.Fatalf("bd %v failed: %v\nStdout: %s\nStderr: %s", args, err, stdout, stderr) + } + + return stdout, stderr +} + +func extractJSONPayload(s string) string { + if i := strings.IndexAny(s, "[{"); i >= 0 { + return s[i:] + } + return s +} + +func parseCreatedIssueID(t *testing.T, out string) string { + t.Helper() + + p := extractJSONPayload(out) + var m map[string]interface{} + if err := json.Unmarshal([]byte(p), &m); err != nil { + t.Fatalf("parse create JSON: %v\n%s", err, out) + } + id, _ := m["id"].(string) + if id == "" { + t.Fatalf("missing id in create output: %s", out) + } + return id +} + +func TestCoverage_ShowUpdateClose(t *testing.T) { + if testing.Short() { + t.Skip("skipping CLI coverage test in short mode") + } + + dir := t.TempDir() + runBDForCoverage(t, dir, "init", "--prefix", "test", "--quiet") + + out, _ := runBDForCoverage(t, dir, "create", "Show coverage issue", "-p", "1", "--json") + id := parseCreatedIssueID(t, out) + + // Exercise 
update label flows (add -> set -> add/remove). + runBDForCoverage(t, dir, "update", id, "--add-label", "old", "--json") + runBDForCoverage(t, dir, "update", id, "--set-labels", "a,b", "--add-label", "c", "--remove-label", "a", "--json") + runBDForCoverage(t, dir, "update", id, "--remove-label", "old", "--json") + + // Show JSON output and verify labels were applied. + showOut, _ := runBDForCoverage(t, dir, "show", "--allow-stale", id, "--json") + showPayload := extractJSONPayload(showOut) + + var details []map[string]interface{} + if err := json.Unmarshal([]byte(showPayload), &details); err != nil { + // Some commands may emit a single object; fall back to object parse. + var single map[string]interface{} + if err2 := json.Unmarshal([]byte(showPayload), &single); err2 != nil { + t.Fatalf("parse show JSON: %v / %v\n%s", err, err2, showOut) + } + details = []map[string]interface{}{single} + } + if len(details) != 1 { + t.Fatalf("expected 1 issue, got %d", len(details)) + } + labelsAny, ok := details[0]["labels"] + if !ok { + t.Fatalf("expected labels in show output: %s", showOut) + } + labelsBytes, _ := json.Marshal(labelsAny) + labelsStr := string(labelsBytes) + if !strings.Contains(labelsStr, "b") || !strings.Contains(labelsStr, "c") { + t.Fatalf("expected labels b and c, got %s", labelsStr) + } + if strings.Contains(labelsStr, "a") || strings.Contains(labelsStr, "old") { + t.Fatalf("expected labels a and old to be absent, got %s", labelsStr) + } + + // Show text output. + showText, _ := runBDForCoverage(t, dir, "show", "--allow-stale", id) + if !strings.Contains(showText, "Show coverage issue") { + t.Fatalf("expected show output to contain title, got: %s", showText) + } + + // Multi-ID show should print both issues. 
+ out2, _ := runBDForCoverage(t, dir, "create", "Second issue", "-p", "2", "--json") + id2 := parseCreatedIssueID(t, out2) + multi, _ := runBDForCoverage(t, dir, "show", "--allow-stale", id, id2) + if !strings.Contains(multi, "Show coverage issue") || !strings.Contains(multi, "Second issue") { + t.Fatalf("expected multi-show output to include both titles, got: %s", multi) + } + if !strings.Contains(multi, "─") { + t.Fatalf("expected multi-show output to include a separator line, got: %s", multi) + } + + // Close and verify JSON output. + closeOut, _ := runBDForCoverage(t, dir, "close", id, "--reason", "Done", "--json") + closePayload := extractJSONPayload(closeOut) + var closed []map[string]interface{} + if err := json.Unmarshal([]byte(closePayload), &closed); err != nil { + t.Fatalf("parse close JSON: %v\n%s", err, closeOut) + } + if len(closed) != 1 { + t.Fatalf("expected 1 closed issue, got %d", len(closed)) + } + if status, _ := closed[0]["status"].(string); status != string(types.StatusClosed) { + t.Fatalf("expected status closed, got %q", status) + } +} + +func TestCoverage_TemplateAndPinnedProtections(t *testing.T) { + if testing.Short() { + t.Skip("skipping CLI coverage test in short mode") + } + + dir := t.TempDir() + runBDForCoverage(t, dir, "init", "--prefix", "test", "--quiet") + + // Create a pinned issue and verify close requires --force. 
+	out, _ := runBDForCoverage(t, dir, "create", "Pinned issue", "-p", "1", "--json")
+	pinnedID := parseCreatedIssueID(t, out)
+	runBDForCoverage(t, dir, "update", pinnedID, "--status", string(types.StatusPinned), "--json")
+	_, closeErr := runBDForCoverage(t, dir, "close", pinnedID, "--reason", "Done")
+	if !strings.Contains(closeErr, "cannot close pinned issue") {
+		t.Fatalf("expected pinned close to be rejected, stderr: %s", closeErr)
+	}
+
+	forceOut, _ := runBDForCoverage(t, dir, "close", pinnedID, "--force", "--reason", "Done", "--json")
+	forcePayload := extractJSONPayload(forceOut)
+	var closed []map[string]interface{}
+	if err := json.Unmarshal([]byte(forcePayload), &closed); err != nil {
+		t.Fatalf("parse close JSON: %v\n%s", err, forceOut)
+	}
+	if len(closed) != 1 {
+		t.Fatalf("expected 1 closed issue, got %d", len(closed))
+	}
+
+	// Insert a template issue directly and verify update/close protect it.
+	dbFile := filepath.Join(dir, ".beads", "beads.db")
+	s, err := sqlite.New(context.Background(), dbFile)
+	if err != nil {
+		t.Fatalf("sqlite.New: %v", err)
+	}
+	ctx := context.Background()
+	template := &types.Issue{
+		Title:      "Template issue",
+		Status:     types.StatusOpen,
+		Priority:   2,
+		IssueType:  types.TypeTask,
+		IsTemplate: true,
+	}
+	if err := s.CreateIssue(ctx, template, "test-user"); err != nil {
+		s.Close()
+		t.Fatalf("CreateIssue: %v", err)
+	}
+	created, err := s.GetIssue(ctx, template.ID)
+	if err != nil {
+		s.Close()
+		t.Fatalf("GetIssue(template): %v", err)
+	}
+	if created == nil || !created.IsTemplate {
+		s.Close()
+		t.Fatalf("expected inserted issue to be IsTemplate=true, got %+v", created)
+	}
+	_ = s.Close()
+
+	showOut, _ := runBDForCoverage(t, dir, "show", "--allow-stale", template.ID, "--json")
+	showPayload := extractJSONPayload(showOut)
+	var showDetails []map[string]interface{}
+	if err := json.Unmarshal([]byte(showPayload), &showDetails); err != nil {
+		t.Fatalf("parse show JSON: %v\n%s", err, showOut)
+	}
+	if len(showDetails) != 1 {
+		t.Fatalf("expected 1 issue from show, got %d", len(showDetails))
+	}
+	// Re-open the DB after running the CLI to confirm is_template persisted.
+	s2, err := sqlite.New(context.Background(), dbFile)
+	if err != nil {
+		t.Fatalf("sqlite.New (reopen): %v", err)
+	}
+	postShow, err := s2.GetIssue(context.Background(), template.ID)
+	_ = s2.Close()
+	if err != nil {
+		t.Fatalf("GetIssue(template, post-show): %v", err)
+	}
+	if postShow == nil || !postShow.IsTemplate {
+		t.Fatalf("expected template to remain IsTemplate=true post-show, got %+v", postShow)
+	}
+	if v, ok := showDetails[0]["is_template"]; ok {
+		if b, ok := v.(bool); !ok || !b {
+			t.Fatalf("expected show JSON is_template=true, got %v", v)
+		}
+	} else {
+		t.Fatalf("expected show JSON to include is_template=true, got: %s", showOut)
+	}
+
+	_, updErr := runBDForCoverage(t, dir, "update", template.ID, "--title", "New title")
+	if !strings.Contains(updErr, "cannot update template") {
+		t.Fatalf("expected template update to be rejected, stderr: %s", updErr)
+	}
+	_, closeTemplateErr := runBDForCoverage(t, dir, "close", template.ID, "--reason", "Done")
+	if !strings.Contains(closeTemplateErr, "cannot close template") {
+		t.Fatalf("expected template close to be rejected, stderr: %s", closeTemplateErr)
+	}
+}
+
+func TestCoverage_ShowThread(t *testing.T) {
+	if testing.Short() {
+		t.Skip("skipping CLI coverage test in short mode")
+	}
+
+	dir := t.TempDir()
+	runBDForCoverage(t, dir, "init", "--prefix", "test", "--quiet")
+
+	dbFile := filepath.Join(dir, ".beads", "beads.db")
+	s, err := sqlite.New(context.Background(), dbFile)
+	if err != nil {
+		t.Fatalf("sqlite.New: %v", err)
+	}
+	ctx := context.Background()
+
+	root := &types.Issue{Title: "Root message", IssueType: types.TypeMessage, Status: types.StatusOpen, Sender: "alice", Assignee: "bob"}
+	reply1 := &types.Issue{Title: "Re: Root", IssueType: types.TypeMessage, Status: types.StatusOpen, Sender: "bob", Assignee: "alice"}
+	reply2 := &types.Issue{Title: "Re: Re: Root", IssueType: types.TypeMessage, Status: types.StatusOpen, Sender: "alice", Assignee: "bob"}
+	if err := s.CreateIssue(ctx, root, "test-user"); err != nil {
+		s.Close()
+		t.Fatalf("CreateIssue root: %v", err)
+	}
+	if err := s.CreateIssue(ctx, reply1, "test-user"); err != nil {
+		s.Close()
+		t.Fatalf("CreateIssue reply1: %v", err)
+	}
+	if err := s.CreateIssue(ctx, reply2, "test-user"); err != nil {
+		s.Close()
+		t.Fatalf("CreateIssue reply2: %v", err)
+	}
+	if err := s.AddDependency(ctx, &types.Dependency{IssueID: reply1.ID, DependsOnID: root.ID, Type: types.DepRepliesTo, ThreadID: root.ID}, "test-user"); err != nil {
+		s.Close()
+		t.Fatalf("AddDependency reply1->root: %v", err)
+	}
+	if err := s.AddDependency(ctx, &types.Dependency{IssueID: reply2.ID, DependsOnID: reply1.ID, Type: types.DepRepliesTo, ThreadID: root.ID}, "test-user"); err != nil {
+		s.Close()
+		t.Fatalf("AddDependency reply2->reply1: %v", err)
+	}
+	_ = s.Close()
+
+	out, _ := runBDForCoverage(t, dir, "show", "--allow-stale", reply2.ID, "--thread")
+	if !strings.Contains(out, "Thread") || !strings.Contains(out, "Total: 3 messages") {
+		t.Fatalf("expected thread output, got: %s", out)
+	}
+	if !strings.Contains(out, root.ID) || !strings.Contains(out, reply1.ID) || !strings.Contains(out, reply2.ID) {
+		t.Fatalf("expected thread output to include message IDs, got: %s", out)
+	}
+}
diff --git a/cmd/bd/cook.go b/cmd/bd/cook.go
index 3bc05758..7d72003e 100644
--- a/cmd/bd/cook.go
+++ b/cmd/bd/cook.go
@@ -353,7 +353,7 @@ func runCook(cmd *cobra.Command, args []string) {
 	if len(bondPoints) > 0 {
 		fmt.Printf(" Bond points: %s\n", strings.Join(bondPoints, ", "))
 	}
-	fmt.Printf("\nTo use: bd pour %s --var <name>=<value>\n", result.ProtoID)
+	fmt.Printf("\nTo use: bd mol pour %s --var <name>=<value>\n", result.ProtoID)
 }
 
 // cookFormulaResult holds the result of cooking
diff --git a/cmd/bd/create.go b/cmd/bd/create.go
index e097b065..eb3a45e0 100644
--- a/cmd/bd/create.go
+++ b/cmd/bd/create.go
@@ -107,7 +107,7 @@ var createCmd = &cobra.Command{
 		waitsForGate, _ := cmd.Flags().GetString("waits-for-gate")
 		forceCreate, _ := cmd.Flags().GetBool("force")
 		repoOverride, _ := cmd.Flags().GetString("repo")
-		wisp, _ := cmd.Flags().GetBool("wisp")
+		wisp, _ := cmd.Flags().GetBool("ephemeral")
 
 		// Get estimate if provided
 		var estimatedMinutes *int
@@ -222,7 +222,8 @@ var createCmd = &cobra.Command{
 				Dependencies: deps,
 				WaitsFor:     waitsFor,
 				WaitsForGate: waitsForGate,
-				Wisp:         wisp,
+				Ephemeral:    wisp,
+				CreatedBy:    getActorWithGit(),
 			}
 
 			resp, err := daemonClient.Create(createArgs)
@@ -267,7 +268,7 @@ var createCmd = &cobra.Command{
 			Assignee:         assignee,
 			ExternalRef:      externalRefPtr,
 			EstimatedMinutes: estimatedMinutes,
-			Wisp:             wisp,
+			Ephemeral:        wisp,
 			CreatedBy:        getActorWithGit(), // GH#748: track who created the issue
 		}
@@ -447,7 +448,7 @@ func init() {
 	createCmd.Flags().Bool("force", false, "Force creation even if prefix doesn't match database prefix")
 	createCmd.Flags().String("repo", "", "Target repository for issue (overrides auto-routing)")
 	createCmd.Flags().IntP("estimate", "e", 0, "Time estimate in minutes (e.g., 60 for 1 hour)")
-	createCmd.Flags().Bool("wisp", false, "Create as wisp (ephemeral, not exported to JSONL)")
+	createCmd.Flags().Bool("ephemeral", false, "Create as ephemeral (not exported to JSONL)")
 	// Note: --json flag is defined as a persistent flag in main.go, not here
 	rootCmd.AddCommand(createCmd)
 }
diff --git a/cmd/bd/daemon_autoimport_test.go b/cmd/bd/daemon_autoimport_test.go
index 07aaef46..e959ba51 100644
--- a/cmd/bd/daemon_autoimport_test.go
+++ b/cmd/bd/daemon_autoimport_test.go
@@ -30,36 +30,36 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 		t.Fatal(err)
 	}
 	defer os.RemoveAll(tempDir)
-	
+
 	// Create "remote" repository
 	remoteDir := filepath.Join(tempDir, "remote")
 	if err := os.MkdirAll(remoteDir, 0750); err != nil {
 		t.Fatalf("Failed to create remote dir: %v", err)
 	}
-	
+
 	// Initialize remote git repo
-	runGitCmd(t, remoteDir, "init", "--bare")
-	
+	runGitCmd(t, remoteDir, "init", "--bare", "-b", "master")
+
 	// Create "clone1" repository (Agent A)
 	clone1Dir := filepath.Join(tempDir, "clone1")
 	runGitCmd(t, tempDir, "clone", remoteDir, clone1Dir)
 	configureGit(t, clone1Dir)
-	
+
 	// Initialize beads in clone1
 	clone1BeadsDir := filepath.Join(clone1Dir, ".beads")
 	if err := os.MkdirAll(clone1BeadsDir, 0750); err != nil {
 		t.Fatalf("Failed to create .beads dir: %v", err)
 	}
-	
+
 	clone1DBPath := filepath.Join(clone1BeadsDir, "test.db")
 	clone1Store := newTestStore(t, clone1DBPath)
 	defer clone1Store.Close()
-	
+
 	ctx := context.Background()
 	if err := clone1Store.SetMetadata(ctx, "issue_prefix", "test"); err != nil {
 		t.Fatalf("Failed to set prefix: %v", err)
 	}
-	
+
 	// Create an open issue in clone1
 	issue := &types.Issue{
 		Title: "Test daemon auto-import",
@@ -73,39 +73,39 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 		t.Fatalf("Failed to create issue: %v", err)
 	}
 	issueID := issue.ID
-	
+
 	// Export to JSONL
 	jsonlPath := filepath.Join(clone1BeadsDir, "issues.jsonl")
 	if err := exportIssuesToJSONL(ctx, clone1Store, jsonlPath); err != nil {
 		t.Fatalf("Failed to export: %v", err)
 	}
-	
+
 	// Commit and push from clone1
 	runGitCmd(t, clone1Dir, "add", ".beads")
 	runGitCmd(t, clone1Dir, "commit", "-m", "Add test issue")
 	runGitCmd(t, clone1Dir, "push", "origin", "master")
-	
+
 	// Create "clone2" repository (Agent B)
 	clone2Dir := filepath.Join(tempDir, "clone2")
 	runGitCmd(t, tempDir, "clone", remoteDir, clone2Dir)
 	configureGit(t, clone2Dir)
-	
+
 	// Initialize empty database in clone2
 	clone2BeadsDir := filepath.Join(clone2Dir, ".beads")
 	clone2DBPath := filepath.Join(clone2BeadsDir, "test.db")
 	clone2Store := newTestStore(t, clone2DBPath)
 	defer clone2Store.Close()
-	
+
 	if err := clone2Store.SetMetadata(ctx, "issue_prefix", "test"); err != nil {
 		t.Fatalf("Failed to set prefix: %v", err)
 	}
-	
+
 	// Import initial JSONL in clone2
 	clone2JSONLPath := filepath.Join(clone2BeadsDir, "issues.jsonl")
 	if err := importJSONLToStore(ctx, clone2Store, clone2DBPath, clone2JSONLPath); err != nil {
 		t.Fatalf("Failed to import: %v", err)
 	}
-	
+
 	// Verify issue exists in clone2
 	initialIssue, err := clone2Store.GetIssue(ctx, issueID)
 	if err != nil {
@@ -114,27 +114,27 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 	if initialIssue.Status != types.StatusOpen {
 		t.Errorf("Expected status open, got %s", initialIssue.Status)
 	}
-	
+
 	// NOW THE CRITICAL TEST: Agent A closes the issue and pushes
 	t.Run("DaemonAutoImportsAfterGitPull", func(t *testing.T) {
 		// Agent A closes the issue
 		if err := clone1Store.CloseIssue(ctx, issueID, "Completed", "agent-a"); err != nil {
 			t.Fatalf("Failed to close issue: %v", err)
 		}
-		
+
 		// Agent A exports to JSONL
 		if err := exportIssuesToJSONL(ctx, clone1Store, jsonlPath); err != nil {
 			t.Fatalf("Failed to export after close: %v", err)
 		}
-		
+
 		// Agent A commits and pushes
 		runGitCmd(t, clone1Dir, "add", ".beads/issues.jsonl")
 		runGitCmd(t, clone1Dir, "commit", "-m", "Close issue")
 		runGitCmd(t, clone1Dir, "push", "origin", "master")
-		
+
 		// Agent B does git pull (updates JSONL on disk)
 		runGitCmd(t, clone2Dir, "pull")
-		
+
 		// Wait for filesystem to settle after git operations
 		// Windows has lower filesystem timestamp precision (typically 100ms)
 		// and file I/O may be slower, so we need a longer delay
@@ -143,23 +143,23 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 		} else {
 			time.Sleep(50 * time.Millisecond)
 		}
-		
+
 		// Start daemon server in clone2
 		socketPath := filepath.Join(clone2BeadsDir, "bd.sock")
 		os.Remove(socketPath) // Ensure clean state
-		
+
 		server := rpc.NewServer(socketPath, clone2Store, clone2Dir, clone2DBPath)
-		
+
 		// Start server in background
 		serverCtx, serverCancel := context.WithCancel(context.Background())
 		defer serverCancel()
-		
+
 		go func() {
 			if err := server.Start(serverCtx); err != nil {
 				t.Logf("Server error: %v", err)
 			}
 		}()
-		
+
 		// Wait for server to be ready
 		for i := 0; i < 50; i++ {
 			time.Sleep(10 * time.Millisecond)
@@ -167,7 +167,7 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 				break
 			}
 		}
-		
+
 		// Simulate a daemon request (like "bd show <id>")
 		// The daemon should auto-import the updated JSONL before responding
 		client, err := rpc.TryConnect(socketPath)
@@ -178,15 +178,15 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 			t.Fatal("Client is nil")
 		}
 		defer client.Close()
-		
+
 		client.SetDatabasePath(clone2DBPath) // Route to correct database
-		
+
 		// Make a request that triggers auto-import check
 		resp, err := client.Execute("show", map[string]string{"id": issueID})
 		if err != nil {
 			t.Fatalf("Failed to get issue from daemon: %v", err)
 		}
-		
+
 		// Parse response
 		var issue types.Issue
 		issueJSON, err := json.Marshal(resp.Data)
@@ -196,25 +196,25 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 		if err := json.Unmarshal(issueJSON, &issue); err != nil {
 			t.Fatalf("Failed to unmarshal issue: %v", err)
 		}
-		
+
 		status := issue.Status
-		
+
 		// CRITICAL ASSERTION: Daemon should return CLOSED status from JSONL
 		// not stale OPEN status from SQLite
 		if status != types.StatusClosed {
 			t.Errorf("DAEMON AUTO-IMPORT FAILED: Expected status 'closed' but got '%s'", status)
 			t.Errorf("This means daemon is serving stale SQLite data instead of auto-importing JSONL")
-			
+
 			// Double-check JSONL has correct status
 			jsonlData, _ := os.ReadFile(clone2JSONLPath)
 			t.Logf("JSONL content: %s", string(jsonlData))
-			
+
 			// Double-check what's in SQLite
 			directIssue, _ := clone2Store.GetIssue(ctx, issueID)
 			t.Logf("SQLite status: %s", directIssue.Status)
 		}
 	})
-	
+
 	// Additional test: Verify multiple rapid changes
 	t.Run("DaemonHandlesRapidUpdates", func(t *testing.T) {
 		// Agent A updates priority
@@ -223,18 +223,18 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 		}, "agent-a"); err != nil {
 			t.Fatalf("Failed to update priority: %v", err)
 		}
-		
+
 		if err := exportIssuesToJSONL(ctx, clone1Store, jsonlPath); err != nil {
 			t.Fatalf("Failed to export: %v", err)
 		}
-		
+
 		runGitCmd(t, clone1Dir, "add", ".beads/issues.jsonl")
 		runGitCmd(t, clone1Dir, "commit", "-m", "Update priority")
 		runGitCmd(t, clone1Dir, "push", "origin", "master")
-		
+
 		// Agent B pulls
 		runGitCmd(t, clone2Dir, "pull")
-		
+
 		// Query via daemon - should see priority 0
 		// (Execute forces auto-import synchronously)
 		socketPath := filepath.Join(clone2BeadsDir, "bd.sock")
@@ -243,18 +243,18 @@ func TestDaemonAutoImportAfterGitPull(t *testing.T) {
 			t.Fatalf("Failed to connect to daemon: %v", err)
 		}
 		defer client.Close()
-		
+
 		client.SetDatabasePath(clone2DBPath) // Route to correct database
-		
+
 		resp, err := client.Execute("show", map[string]string{"id": issueID})
 		if err != nil {
 			t.Fatalf("Failed to get issue from daemon: %v", err)
 		}
-		
+
 		var issue types.Issue
 		issueJSON, _ := json.Marshal(resp.Data)
 		json.Unmarshal(issueJSON, &issue)
-		
+
 		if issue.Priority != 0 {
 			t.Errorf("Expected priority 0 after auto-import, got %d", issue.Priority)
 		}
@@ -273,23 +273,23 @@ func TestDaemonAutoImportDataCorruption(t *testing.T) {
 		t.Fatal(err)
 	}
 	defer os.RemoveAll(tempDir)
-	
+
 	// Setup remote and two clones
 	remoteDir := filepath.Join(tempDir, "remote")
 	os.MkdirAll(remoteDir, 0750)
-	runGitCmd(t, remoteDir, "init", "--bare")
-	
+	runGitCmd(t, remoteDir, "init", "--bare", "-b", "master")
+
 	clone1Dir := filepath.Join(tempDir, "clone1")
 	runGitCmd(t, tempDir, "clone", remoteDir, clone1Dir)
 	configureGit(t, clone1Dir)
-	
+
 	clone2Dir := filepath.Join(tempDir, "clone2")
 	runGitCmd(t, tempDir, "clone", remoteDir, clone2Dir)
 	configureGit(t, clone2Dir)
-	
+
 	// Initialize beads in both clones
 	ctx := context.Background()
-	
+
 	// Clone1 setup
 	clone1BeadsDir := filepath.Join(clone1Dir, ".beads")
 	os.MkdirAll(clone1BeadsDir, 0750)
@@ -297,7 +297,7 @@ func TestDaemonAutoImportDataCorruption(t *testing.T) {
 	clone1Store := newTestStore(t, clone1DBPath)
 	defer clone1Store.Close()
 	clone1Store.SetMetadata(ctx, "issue_prefix", "test")
-	
+
 	// Clone2 setup
 	clone2BeadsDir := filepath.Join(clone2Dir, ".beads")
 	os.MkdirAll(clone2BeadsDir, 0750)
@@ -305,7 +305,7 @@ func TestDaemonAutoImportDataCorruption(t *testing.T) {
 	clone2Store := newTestStore(t, clone2DBPath)
 	defer clone2Store.Close()
 	clone2Store.SetMetadata(ctx, "issue_prefix", "test")
-	
+
 	// Agent A creates issue and pushes
 	issue2 := &types.Issue{
 		Title: "Shared issue",
@@ -317,18 +317,18 @@ func TestDaemonAutoImportDataCorruption(t *testing.T) {
 	}
 	clone1Store.CreateIssue(ctx, issue2, "agent-a")
 	issueID := issue2.ID
-	
+
 	clone1JSONLPath := filepath.Join(clone1BeadsDir, "issues.jsonl")
 	exportIssuesToJSONL(ctx, clone1Store, clone1JSONLPath)
 	runGitCmd(t, clone1Dir, "add", ".beads")
 	runGitCmd(t, clone1Dir, "commit", "-m", "Initial issue")
 	runGitCmd(t, clone1Dir, "push", "origin", "master")
-	
+
 	// Agent B pulls and imports
 	runGitCmd(t, clone2Dir, "pull")
 	clone2JSONLPath := filepath.Join(clone2BeadsDir, "issues.jsonl")
 	importJSONLToStore(ctx, clone2Store, clone2DBPath, clone2JSONLPath)
-	
+
 	// THE CORRUPTION SCENARIO:
 	// 1. Agent A closes the issue and pushes
 	clone1Store.CloseIssue(ctx, issueID, "Done", "agent-a")
@@ -336,31 +336,31 @@ func TestDaemonAutoImportDataCorruption(t *testing.T) {
 	runGitCmd(t, clone1Dir, "add", ".beads/issues.jsonl")
 	runGitCmd(t, clone1Dir, "commit", "-m", "Close issue")
 	runGitCmd(t, clone1Dir, "push", "origin", "master")
-	
+
 	// 2. Agent B does git pull (JSONL updated on disk)
 	runGitCmd(t, clone2Dir, "pull")
-	
+
 	// Wait for filesystem to settle after git operations
 	time.Sleep(50 * time.Millisecond)
-	
+
 	// 3. Agent B daemon exports STALE data (if auto-import doesn't work)
 	// This would overwrite Agent A's closure with old "open" status
-	
+
 	// Start daemon in clone2
 	socketPath := filepath.Join(clone2BeadsDir, "bd.sock")
 	os.Remove(socketPath)
-	
+
 	server := rpc.NewServer(socketPath, clone2Store, clone2Dir, clone2DBPath)
-	
+
 	serverCtx, serverCancel := context.WithCancel(context.Background())
 	defer serverCancel()
-	
+
 	go func() {
 		if err := server.Start(serverCtx); err != nil {
 			t.Logf("Server error: %v", err)
 		}
 	}()
-	
+
 	// Wait for server
 	for i := 0; i < 50; i++ {
 		time.Sleep(10 * time.Millisecond)
@@ -368,43 +368,43 @@ func TestDaemonAutoImportDataCorruption(t *testing.T) {
 			break
 		}
 	}
-	
+
 	// Trigger daemon operation (should auto-import first)
 	client, err := rpc.TryConnect(socketPath)
 	if err != nil {
 		t.Fatalf("Failed to connect: %v", err)
 	}
 	defer client.Close()
-	
+
 	client.SetDatabasePath(clone2DBPath)
-	
+
 	resp, err := client.Execute("show", map[string]string{"id": issueID})
 	if err != nil {
 		t.Fatalf("Failed to get issue: %v", err)
 	}
-	
+
 	var issue types.Issue
 	issueJSON, _ := json.Marshal(resp.Data)
 	json.Unmarshal(issueJSON, &issue)
-	
+
 	status := issue.Status
-	
+
 	// If daemon didn't auto-import, this would be "open" (stale)
 	// With the fix, it should be "closed" (fresh from JSONL)
 	if status != types.StatusClosed {
 		t.Errorf("DATA CORRUPTION DETECTED: Daemon has stale status '%s' instead of 'closed'", status)
 		t.Error("If daemon exports this stale data, it will overwrite Agent A's changes on next push")
 	}
-	
+
 	// Now simulate daemon export (which happens on timer)
 	// With auto-import working, this export should have fresh data
 	exportIssuesToJSONL(ctx, clone2Store, clone2JSONLPath)
-	
+
 	// Read back JSONL to verify it has correct status
 	data, _ := os.ReadFile(clone2JSONLPath)
 	var exportedIssue types.Issue
 	json.NewDecoder(bytes.NewReader(data)).Decode(&exportedIssue)
-	
+
 	if exportedIssue.Status != types.StatusClosed {
 		t.Errorf("CORRUPTION: Exported JSONL has wrong status '%s', would overwrite remote", exportedIssue.Status)
 	}
diff --git a/cmd/bd/daemon_autostart.go b/cmd/bd/daemon_autostart.go
index d858c7d8..4fdd0cc0 100644
--- a/cmd/bd/daemon_autostart.go
+++ b/cmd/bd/daemon_autostart.go
@@ -31,6 +31,19 @@ var (
 	daemonStartFailures int
 )
 
+var (
+	executableFn             = os.Executable
+	execCommandFn            = exec.Command
+	openFileFn               = os.OpenFile
+	findProcessFn            = os.FindProcess
+	removeFileFn             = os.Remove
+	configureDaemonProcessFn = configureDaemonProcess
+	waitForSocketReadinessFn = waitForSocketReadiness
+	startDaemonProcessFn     = startDaemonProcess
+	isDaemonRunningFn        = isDaemonRunning
+	sendStopSignalFn         = sendStopSignal
+)
+
 // shouldAutoStartDaemon checks if daemon auto-start is enabled
 func shouldAutoStartDaemon() bool {
 	// Check BEADS_NO_DAEMON first (escape hatch for single-user workflows)
@@ -53,7 +66,6 @@ func shouldAutoStartDaemon() bool {
 
 	return config.GetBool("auto-start-daemon") // Defaults to true
 }
-
 // restartDaemonForVersionMismatch stops the old daemon and starts a new one
 // Returns true if restart was successful
 func restartDaemonForVersionMismatch() bool {
@@ -67,17 +79,17 @@ func restartDaemonForVersionMismatch() bool {
 
 	// Check if daemon is running and stop it
 	forcedKill := false
-	if isRunning, pid := isDaemonRunning(pidFile); isRunning {
+	if isRunning, pid := isDaemonRunningFn(pidFile); isRunning {
 		debug.Logf("stopping old daemon (PID %d)", pid)
 
-		process, err := os.FindProcess(pid)
+		process, err := findProcessFn(pid)
 		if err != nil {
 			debug.Logf("failed to find process: %v", err)
 			return false
 		}
 
 		// Send stop signal
-		if err := sendStopSignal(process); err != nil {
+		if err := sendStopSignalFn(process); err != nil {
 			debug.Logf("failed to signal daemon: %v", err)
 			return false
 		}
@@ -85,14 +97,14 @@ func restartDaemonForVersionMismatch() bool {
 		// Wait for daemon to stop, then force kill
 		for i := 0; i < daemonShutdownAttempts; i++ {
 			time.Sleep(daemonShutdownPollInterval)
-			if isRunning, _ := isDaemonRunning(pidFile); !isRunning {
+			if isRunning, _ := isDaemonRunningFn(pidFile); !isRunning {
 				debug.Logf("old daemon stopped successfully")
 				break
 			}
 		}
 
 		// Force kill if still running
-		if isRunning, _ := isDaemonRunning(pidFile); isRunning {
+		if isRunning, _ := isDaemonRunningFn(pidFile); isRunning {
 			debug.Logf("force killing old daemon")
 			_ = process.Kill()
 			forcedKill = true
@@ -101,19 +113,19 @@ func restartDaemonForVersionMismatch() bool {
 
 	// Clean up stale socket and PID file after force kill or if not running
 	if forcedKill || !isDaemonRunningQuiet(pidFile) {
-		_ = os.Remove(socketPath)
-		_ = os.Remove(pidFile)
+		_ = removeFileFn(socketPath)
+		_ = removeFileFn(pidFile)
 	}
 
 	// Start new daemon with current binary version
-	exe, err := os.Executable()
+	exe, err := executableFn()
 	if err != nil {
 		debug.Logf("failed to get executable path: %v", err)
 		return false
 	}
 
 	args := []string{"daemon", "--start"}
-	cmd := exec.Command(exe, args...)
+	cmd := execCommandFn(exe, args...)
 	cmd.Env = append(os.Environ(), "BD_DAEMON_FOREGROUND=1")
 
 	// Set working directory to database directory so daemon finds correct DB
@@ -121,9 +133,9 @@ func restartDaemonForVersionMismatch() bool {
 		cmd.Dir = filepath.Dir(dbPath)
 	}
 
-	configureDaemonProcess(cmd)
+	configureDaemonProcessFn(cmd)
 
-	devNull, err := os.OpenFile(os.DevNull, os.O_RDWR, 0)
+	devNull, err := openFileFn(os.DevNull, os.O_RDWR, 0)
 	if err == nil {
 		cmd.Stdin = devNull
 		cmd.Stdout = devNull
@@ -140,7 +152,7 @@ func restartDaemonForVersionMismatch() bool {
 	go func() { _ = cmd.Wait() }()
 
 	// Wait for daemon to be ready using shared helper
-	if waitForSocketReadiness(socketPath, 5*time.Second) {
+	if waitForSocketReadinessFn(socketPath, 5*time.Second) {
 		debug.Logf("new daemon started successfully")
 		return true
 	}
@@ -153,7 +165,7 @@ func restartDaemonForVersionMismatch() bool {
 
 // isDaemonRunningQuiet checks if daemon is running without output
 func isDaemonRunningQuiet(pidFile string) bool {
-	isRunning, _ := isDaemonRunning(pidFile)
+	isRunning, _ := isDaemonRunningFn(pidFile)
 	return isRunning
 }
 
@@ -185,7 +197,7 @@ func tryAutoStartDaemon(socketPath string) bool {
 	}
 
 	socketPath = determineSocketPath(socketPath)
-	return startDaemonProcess(socketPath)
+	return startDaemonProcessFn(socketPath)
 }
 
 func debugLog(msg string, args ...interface{}) {
@@ -269,21 +281,21 @@ func determineSocketPath(socketPath string) string {
 }
 
 func startDaemonProcess(socketPath string) bool {
-	binPath, err := os.Executable()
+	binPath, err := executableFn()
 	if err != nil {
 		binPath = os.Args[0]
 	}
 
 	args := []string{"daemon", "--start"}
-	cmd := exec.Command(binPath, args...)
+	cmd := execCommandFn(binPath, args...)
 	setupDaemonIO(cmd)
 
 	if dbPath != "" {
 		cmd.Dir = filepath.Dir(dbPath)
 	}
 
-	configureDaemonProcess(cmd)
+	configureDaemonProcessFn(cmd)
 
 	if err := cmd.Start(); err != nil {
 		recordDaemonStartFailure()
 		debugLog("failed to start daemon: %v", err)
@@ -292,7 +304,7 @@ func startDaemonProcess(socketPath string) bool {
 
 	go func() { _ = cmd.Wait() }()
 
-	if waitForSocketReadiness(socketPath, 5*time.Second) {
+	if waitForSocketReadinessFn(socketPath, 5*time.Second) {
 		recordDaemonStartSuccess()
 		return true
 	}
@@ -306,7 +318,7 @@ func startDaemonProcess(socketPath string) bool {
 }
 
 func setupDaemonIO(cmd *exec.Cmd) {
-	devNull, err := os.OpenFile(os.DevNull, os.O_RDWR, 0)
+	devNull, err := openFileFn(os.DevNull, os.O_RDWR, 0)
 	if err == nil {
 		cmd.Stdout = devNull
 		cmd.Stderr = devNull
diff --git a/cmd/bd/daemon_autostart_unit_test.go b/cmd/bd/daemon_autostart_unit_test.go
new file mode 100644
index 00000000..625cedf6
--- /dev/null
+++ b/cmd/bd/daemon_autostart_unit_test.go
@@ -0,0 +1,331 @@
+package main
+
+import (
+	"bytes"
+	"context"
+	"io"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"runtime"
+	"testing"
+	"time"
+
+	"github.com/steveyegge/beads/internal/config"
+)
+
+func tempSockDir(t *testing.T) string {
+	t.Helper()
+
+	base := "/tmp"
+	if runtime.GOOS == windowsOS {
+		base = os.TempDir()
+	} else if _, err := os.Stat(base); err != nil {
+		base = os.TempDir()
+	}
+
+	d, err := os.MkdirTemp(base, "bd-sock-*")
+	if err != nil {
+		t.Fatalf("MkdirTemp: %v", err)
+	}
+	t.Cleanup(func() { _ = os.RemoveAll(d) })
+	return d
+}
+
+func startTestRPCServer(t *testing.T) (socketPath string, cleanup func()) {
+	t.Helper()
+
+	tmpDir := tempSockDir(t)
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.MkdirAll(beadsDir, 0o750); err != nil {
+		t.Fatalf("MkdirAll: %v", err)
+	}
+
+	socketPath = filepath.Join(beadsDir, "bd.sock")
+	db := filepath.Join(beadsDir, "test.db")
+	store := newTestStore(t, db)
+
+	ctx, cancel := context.WithCancel(context.Background())
+	log := newTestLogger()
+
+	server, _, err := startRPCServer(ctx, socketPath, store, tmpDir, db, log)
+	if err != nil {
+		cancel()
+		t.Fatalf("startRPCServer: %v", err)
+	}
+
+	cleanup = func() {
+		cancel()
+		if server != nil {
+			_ = server.Stop()
+		}
+	}
+
+	return socketPath, cleanup
+}
+
+func captureStderr(t *testing.T, fn func()) string {
+	t.Helper()
+
+	old := os.Stderr
+	r, w, err := os.Pipe()
+	if err != nil {
+		t.Fatalf("os.Pipe: %v", err)
+	}
+	os.Stderr = w
+
+	var buf bytes.Buffer
+	done := make(chan struct{})
+	go func() {
+		_, _ = io.Copy(&buf, r)
+		close(done)
+	}()
+
+	fn()
+	_ = w.Close()
+	os.Stderr = old
+	<-done
+	_ = r.Close()
+
+	return buf.String()
+}
+
+func TestDaemonAutostart_AcquireStartLock_CreatesAndCleansStale(t *testing.T) {
+	tmpDir := t.TempDir()
+	lockPath := filepath.Join(tmpDir, "bd.sock.startlock")
+	pid, err := readPIDFromFile(lockPath)
+	if err == nil || pid != 0 {
+		// lock doesn't exist yet; expect read to fail.
+	}
+
+	if !acquireStartLock(lockPath, filepath.Join(tmpDir, "bd.sock")) {
+		t.Fatalf("expected acquireStartLock to succeed")
+	}
+	got, err := readPIDFromFile(lockPath)
+	if err != nil {
+		t.Fatalf("readPIDFromFile: %v", err)
+	}
+	if got != os.Getpid() {
+		t.Fatalf("expected lock PID %d, got %d", os.Getpid(), got)
+	}
+
+	// Stale lock: dead/unreadable PID should be removed and recreated.
+	if err := os.WriteFile(lockPath, []byte("0\n"), 0o600); err != nil {
+		t.Fatalf("WriteFile: %v", err)
+	}
+	if !acquireStartLock(lockPath, filepath.Join(tmpDir, "bd.sock")) {
+		t.Fatalf("expected acquireStartLock to succeed on stale lock")
+	}
+	got, err = readPIDFromFile(lockPath)
+	if err != nil {
+		t.Fatalf("readPIDFromFile: %v", err)
+	}
+	if got != os.Getpid() {
+		t.Fatalf("expected recreated lock PID %d, got %d", os.Getpid(), got)
+	}
+}
+
+func TestDaemonAutostart_SocketHealthAndReadiness(t *testing.T) {
+	socketPath, cleanup := startTestRPCServer(t)
+	defer cleanup()
+
+	if !canDialSocket(socketPath, 500*time.Millisecond) {
+		t.Fatalf("expected canDialSocket to succeed")
+	}
+	if !isDaemonHealthy(socketPath) {
+		t.Fatalf("expected isDaemonHealthy to succeed")
+	}
+	if !waitForSocketReadiness(socketPath, 500*time.Millisecond) {
+		t.Fatalf("expected waitForSocketReadiness to succeed")
+	}
+
+	missing := filepath.Join(tempSockDir(t), "missing.sock")
+	if canDialSocket(missing, 50*time.Millisecond) {
+		t.Fatalf("expected canDialSocket to fail")
+	}
+	if waitForSocketReadiness(missing, 200*time.Millisecond) {
+		t.Fatalf("expected waitForSocketReadiness to time out")
+	}
+}
+
+func TestDaemonAutostart_HandleExistingSocket(t *testing.T) {
+	socketPath, cleanup := startTestRPCServer(t)
+	defer cleanup()
+
+	if !handleExistingSocket(socketPath) {
+		t.Fatalf("expected handleExistingSocket true for running daemon")
+	}
+}
+
+func TestDaemonAutostart_HandleExistingSocket_StaleCleansUp(t *testing.T) {
+	tmpDir := t.TempDir()
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.MkdirAll(beadsDir, 0o750); err != nil {
+		t.Fatalf("MkdirAll: %v", err)
+	}
+
+	socketPath := filepath.Join(beadsDir, "bd.sock")
+	pidFile := filepath.Join(beadsDir, "daemon.pid")
+	if err := os.WriteFile(socketPath, []byte("not-a-socket"), 0o600); err != nil {
+		t.Fatalf("WriteFile socket: %v", err)
+	}
+	if err := os.WriteFile(pidFile, []byte("0\n"), 0o600); err != nil {
+		t.Fatalf("WriteFile pid: %v", err)
+	}
+
+	if handleExistingSocket(socketPath) {
+		t.Fatalf("expected false for stale socket")
+	}
+	if _, err := os.Stat(socketPath); !os.IsNotExist(err) {
+		t.Fatalf("expected socket removed")
+	}
+	if _, err := os.Stat(pidFile); !os.IsNotExist(err) {
+		t.Fatalf("expected pidfile removed")
+	}
+}
+
+func TestDaemonAutostart_TryAutoStartDaemon_EarlyExits(t *testing.T) {
+	oldFailures := daemonStartFailures
+	oldLast := lastDaemonStartAttempt
+	defer func() {
+		daemonStartFailures = oldFailures
+		lastDaemonStartAttempt = oldLast
+	}()
+
+	daemonStartFailures = 1
+	lastDaemonStartAttempt = time.Now()
+	if tryAutoStartDaemon(filepath.Join(t.TempDir(), "bd.sock")) {
+		t.Fatalf("expected tryAutoStartDaemon to skip due to backoff")
+	}
+
+	daemonStartFailures = 0
+	lastDaemonStartAttempt = time.Time{}
+	socketPath, cleanup := startTestRPCServer(t)
+	defer cleanup()
+	if !tryAutoStartDaemon(socketPath) {
+		t.Fatalf("expected tryAutoStartDaemon true when daemon already healthy")
+	}
+}
+
+func TestDaemonAutostart_MiscHelpers(t *testing.T) {
+	if determineSocketPath("/x") != "/x" {
+		t.Fatalf("determineSocketPath should be identity")
+	}
+
+	if err := config.Initialize(); err != nil {
+		t.Fatalf("config.Initialize: %v", err)
+	}
+	old := config.GetDuration("flush-debounce")
+	defer config.Set("flush-debounce", old)
+
+	config.Set("flush-debounce", 0)
+	if got := getDebounceDuration(); got != 5*time.Second {
+		t.Fatalf("expected default debounce 5s, got %v", got)
+	}
+	config.Set("flush-debounce", 2*time.Second)
+	if got := getDebounceDuration(); got != 2*time.Second {
+		t.Fatalf("expected debounce 2s, got %v", got)
+	}
+}
+
+func TestDaemonAutostart_EmitVerboseWarning(t *testing.T) {
+	old := daemonStatus
+	defer func() { daemonStatus = old }()
+
+	daemonStatus.SocketPath = "/tmp/bd.sock"
+	for _, tt := range []struct {
+		reason      string
+		shouldWrite bool
+	}{
+		{FallbackConnectFailed, true},
+		{FallbackHealthFailed, true},
+		{FallbackAutoStartDisabled, true},
+		{FallbackAutoStartFailed, true},
+		{FallbackDaemonUnsupported, true},
+		{FallbackWorktreeSafety, false},
+		{FallbackFlagNoDaemon, false},
+	} {
+		t.Run(tt.reason, func(t *testing.T) {
+			daemonStatus.FallbackReason = tt.reason
+			out := captureStderr(t, emitVerboseWarning)
+			if tt.shouldWrite && out == "" {
+				t.Fatalf("expected output")
+			}
+			if !tt.shouldWrite && out != "" {
+				t.Fatalf("expected no output, got %q", out)
+			}
+		})
+	}
+}
+
+func TestDaemonAutostart_StartDaemonProcess_Stubbed(t *testing.T) {
+	oldExec := execCommandFn
+	oldWait := waitForSocketReadinessFn
+	oldCfg := configureDaemonProcessFn
+	defer func() {
+		execCommandFn = oldExec
+		waitForSocketReadinessFn = oldWait
+		configureDaemonProcessFn = oldCfg
+	}()
+
+	execCommandFn = func(string, ...string) *exec.Cmd {
+		return exec.Command(os.Args[0], "-test.run=^$")
+	}
+	waitForSocketReadinessFn = func(string, time.Duration) bool { return true }
+	configureDaemonProcessFn = func(*exec.Cmd) {}
+
+	if !startDaemonProcess(filepath.Join(t.TempDir(), "bd.sock")) {
+		t.Fatalf("expected startDaemonProcess true when readiness stubbed")
+	}
+}
+
+func TestDaemonAutostart_RestartDaemonForVersionMismatch_Stubbed(t *testing.T) {
+	oldExec := execCommandFn
+	oldWait := waitForSocketReadinessFn
+	oldRun := isDaemonRunningFn
+	oldCfg := configureDaemonProcessFn
+	defer func() {
+		execCommandFn = oldExec
+		waitForSocketReadinessFn = oldWait
+		isDaemonRunningFn = oldRun
+		configureDaemonProcessFn = oldCfg
+	}()
+
+	tmpDir := t.TempDir()
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.MkdirAll(beadsDir, 0o750); err != nil {
+		t.Fatalf("MkdirAll: %v", err)
+	}
+	oldDB := dbPath
+	defer func() { dbPath = oldDB }()
+	dbPath = filepath.Join(beadsDir, "test.db")
+
+	pidFile, err := getPIDFilePath()
+	if err != nil {
+		t.Fatalf("getPIDFilePath: %v", err)
+	}
+	sock := getSocketPath()
+	if err := os.WriteFile(pidFile, []byte("999999\n"), 0o600); err != nil {
+		t.Fatalf("WriteFile pid: %v", err)
+	}
+	if err := os.WriteFile(sock, []byte("stale"), 0o600); err != nil {
+		t.Fatalf("WriteFile sock: %v", err)
+	}
+
+	execCommandFn = func(string, ...string) *exec.Cmd {
+		return exec.Command(os.Args[0], "-test.run=^$")
+	}
+	waitForSocketReadinessFn = func(string, time.Duration) bool { return true }
+	isDaemonRunningFn = func(string) (bool, int) { return false, 0 }
+	configureDaemonProcessFn = func(*exec.Cmd) {}
+
+	if !restartDaemonForVersionMismatch() {
+		t.Fatalf("expected restartDaemonForVersionMismatch true when stubbed")
+	}
+	if _, err := os.Stat(pidFile); !os.IsNotExist(err) {
+		t.Fatalf("expected pidfile removed")
+	}
+	if _, err := os.Stat(sock); !os.IsNotExist(err) {
+		t.Fatalf("expected socket removed")
+	}
+}
diff --git a/cmd/bd/daemon_debouncer_test.go b/cmd/bd/daemon_debouncer_test.go
index 33658277..69e36a7d 100644
--- a/cmd/bd/daemon_debouncer_test.go
+++ b/cmd/bd/daemon_debouncer_test.go
@@ -157,23 +157,26 @@ func TestDebouncer_MultipleSequentialTriggerCycles(t *testing.T) {
 	})
 	t.Cleanup(debouncer.Cancel)
 
-	debouncer.Trigger()
-	time.Sleep(40 * time.Millisecond)
-	if got := atomic.LoadInt32(&count); got != 1 {
-		t.Errorf("first cycle: got %d, want 1", got)
+	awaitCount := func(want int32) {
+		deadline := time.Now().Add(500 * time.Millisecond)
+		for time.Now().Before(deadline) {
+			if got := atomic.LoadInt32(&count); got >= want {
+				return
+			}
+			time.Sleep(5 * time.Millisecond)
+		}
+		got := atomic.LoadInt32(&count)
+		t.Fatalf("timeout waiting for count=%d (got %d)", want, got)
 	}
 
 	debouncer.Trigger()
-	time.Sleep(40 * time.Millisecond)
-	if got := atomic.LoadInt32(&count); got != 2 {
-		t.Errorf("second cycle: got %d, want 2", got)
-	}
+	awaitCount(1)
 
 	debouncer.Trigger()
-	time.Sleep(40 * time.Millisecond)
-	if got := atomic.LoadInt32(&count); got != 3 {
-		t.Errorf("third cycle: got %d, want 3", got)
-	}
+	awaitCount(2)
+
+	debouncer.Trigger()
+	awaitCount(3)
 }
 
 func TestDebouncer_CancelImmediatelyAfterTrigger(t *testing.T) {
diff --git a/cmd/bd/daemon_sync_branch_test.go b/cmd/bd/daemon_sync_branch_test.go
index d6731347..186c8533 100644
--- a/cmd/bd/daemon_sync_branch_test.go
+++ b/cmd/bd/daemon_sync_branch_test.go
@@ -48,12 +48,12 @@ func TestSyncBranchCommitAndPush_NotConfigured(t *testing.T) {
 
 	// Create test issue
 	issue := &types.Issue{
-		Title: "Test issue",
-		Status: types.StatusOpen,
-		Priority: 1,
-		IssueType: types.TypeTask,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
+		Title:     "Test issue",
+		Status:    types.StatusOpen,
+		Priority:  1,
+		IssueType: types.TypeTask,
+		CreatedAt: time.Now(),
+		UpdatedAt: time.Now(),
 	}
 	if err := store.CreateIssue(ctx, issue, "test"); err != nil {
 		t.Fatalf("Failed to create issue: %v", err)
@@ -122,12 +122,12 @@ func TestSyncBranchCommitAndPush_Success(t *testing.T) {
 
 	// Create test issue
 	issue := &types.Issue{
-		Title: "Test sync branch issue",
-		Status: types.StatusOpen,
-		Priority: 1,
-		IssueType: types.TypeTask,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
+		Title:     "Test sync branch issue",
+		Status:    types.StatusOpen,
+		Priority:  1,
+		IssueType: types.TypeTask,
+		CreatedAt: time.Now(),
+		UpdatedAt: time.Now(),
 	}
 	if err := store.CreateIssue(ctx, issue, "test"); err != nil {
 		t.Fatalf("Failed to create issue: %v", err)
@@ -228,12 +228,12 @@ func TestSyncBranchCommitAndPush_EnvOverridesDB(t *testing.T) {
 
 	// Create test issue and export JSONL
 	issue := &types.Issue{
-		Title: "Env override issue",
-		Status: types.StatusOpen,
-		Priority: 1,
-		IssueType: types.TypeTask,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
+		Title:     "Env override issue",
+		Status:    types.StatusOpen,
+		Priority:  1,
+		IssueType: types.TypeTask,
+		CreatedAt: time.Now(),
+		UpdatedAt: time.Now(),
 	}
 	if err := store.CreateIssue(ctx, issue, "test"); err != nil {
 		t.Fatalf("Failed to create issue: %v", err)
@@ -303,12 +303,12 @@ func TestSyncBranchCommitAndPush_NoChanges(t *testing.T) {
 	}
 
 	issue := &types.Issue{
-		Title: "Test issue",
-		Status: types.StatusOpen,
-		Priority: 1,
-		IssueType: types.TypeTask,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
+		Title:     "Test issue",
+		Status:    types.StatusOpen,
+		Priority:  1,
+		IssueType: types.TypeTask,
+		CreatedAt: time.Now(),
+		UpdatedAt: time.Now(),
 	}
 	if err := store.CreateIssue(ctx, issue, "test"); err != nil {
 		t.Fatalf("Failed to create issue: %v", err)
@@ -380,12 +380,12 @@ func TestSyncBranchCommitAndPush_WorktreeHealthCheck(t *testing.T) {
 	}
 
 	issue := &types.Issue{
-		Title: "Test issue",
-		Status: types.StatusOpen,
-		Priority: 1,
-		IssueType: types.TypeTask,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
+		Title:     "Test issue",
+		Status:    types.StatusOpen,
+		Priority:  1,
+		IssueType: types.TypeTask,
+		CreatedAt: time.Now(),
+		UpdatedAt: time.Now(),
 	}
 	if err := store.CreateIssue(ctx, issue, "test"); err != nil {
 		t.Fatalf("Failed to create issue: %v", err)
@@ -497,7 +497,7 @@ func TestSyncBranchPull_Success(t *testing.T) {
 	if err := os.MkdirAll(remoteDir, 0755); err != nil {
 		t.Fatalf("Failed to create remote dir: %v", err)
 	}
-	runGitCmd(t, remoteDir, "init", "--bare")
+	runGitCmd(t, remoteDir, "init", "--bare", "-b", "master")
 
 	// Create clone1 (will push changes)
 	clone1Dir := filepath.Join(tmpDir, "clone1")
@@ -528,12 +528,12 @@ func TestSyncBranchPull_Success(t *testing.T) {
 
 	// Create issue in clone1
 	issue := &types.Issue{
-		Title: "Test sync pull issue",
-		Status: types.StatusOpen,
-		Priority: 1,
-		IssueType: types.TypeTask,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
+		Title:     "Test sync pull issue",
+		Status:    types.StatusOpen,
+		Priority:  1,
+		IssueType: types.TypeTask,
+		CreatedAt: time.Now(),
+		UpdatedAt: time.Now(),
 	}
 	if err := store1.CreateIssue(ctx, issue, "test"); err != nil {
 		t.Fatalf("Failed to create issue: %v", err)
@@ -639,7 +639,7 @@ func TestSyncBranchIntegration_EndToEnd(t *testing.T) {
 	tmpDir := t.TempDir()
 	remoteDir := filepath.Join(tmpDir, "remote")
 	os.MkdirAll(remoteDir, 0755)
-	runGitCmd(t, remoteDir, "init", "--bare")
+	runGitCmd(t, remoteDir, "init", "--bare", "-b", "master")
 
 	// Clone1: Agent A
 	clone1Dir := filepath.Join(tmpDir, "clone1")
@@ -660,12 +660,12 @@ func TestSyncBranchIntegration_EndToEnd(t *testing.T) {
 
 	// Agent A creates issue
 	issue := &types.Issue{
-		Title: "E2E test issue",
-		Status: types.StatusOpen,
-		Priority: 1,
-		IssueType: types.TypeTask,
-		CreatedAt: time.Now(),
-		UpdatedAt: time.Now(),
+		Title:     "E2E test issue",
+		Status:    types.StatusOpen,
+		Priority:  1,
+		IssueType: types.TypeTask,
+		CreatedAt: time.Now(),
+		UpdatedAt: time.Now(),
 	}
 	store1.CreateIssue(ctx, issue, "agent-a")
 	issueID := issue.ID
@@ -914,7 +914,7 @@ func TestSyncBranchMultipleConcurrentClones(t *testing.T) {
 	tmpDir := t.TempDir()
 	remoteDir := filepath.Join(tmpDir, "remote")
 	os.MkdirAll(remoteDir, 0755)
-	runGitCmd(t, remoteDir, "init", "--bare")
+	runGitCmd(t, remoteDir, "init", "--bare", "-b", "master")
 
 	syncBranch := "beads-sync"
 
@@ -1454,7 +1454,7 @@ func TestGitPushFromWorktree_FetchRebaseRetry(t *testing.T) {
 
 	// Create a "remote" bare repository
 	remoteDir := t.TempDir()
-	runGitCmd(t, remoteDir, "init", "--bare")
+	runGitCmd(t, remoteDir, "init", "--bare", "-b", "master")
 
 	// Create first clone (simulates another developer's clone)
 	clone1Dir := t.TempDir()
@@ -1524,7 +1524,7 @@ func TestGitPushFromWorktree_FetchRebaseRetry(t *testing.T) {
 
 	// Now try to push from worktree - this should trigger the fetch-rebase-retry logic
 	// because the remote has commits that the local worktree doesn't have
-	err := gitPushFromWorktree(ctx, worktreePath, "beads-sync")
+	err := gitPushFromWorktree(ctx, worktreePath, "beads-sync", "")
 	if err != nil {
 		t.Fatalf("gitPushFromWorktree failed: %v (expected fetch-rebase-retry to succeed)", err)
 	}
diff --git a/cmd/bd/delete_rpc_test.go b/cmd/bd/delete_rpc_test.go
index de8b862d..82211b76 100644
--- a/cmd/bd/delete_rpc_test.go
+++ b/cmd/bd/delete_rpc_test.go
@@ -8,6 +8,7 @@ import (
 	"context"
 	"encoding/json"
 	"io"
+	"log/slog"
 	"os"
 	"path/filepath"
 	"strings"
@@ -897,11 +898,7 @@ func
setupDaemonTestEnvForDelete(t *testing.T) (context.Context, context.CancelF ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) - log := daemonLogger{ - logFunc: func(format string, args ...interface{}) { - t.Logf("[daemon] "+format, args...) - }, - } + log := daemonLogger{logger: slog.New(slog.NewTextHandler(io.Discard, &slog.HandlerOptions{Level: slog.LevelInfo}))} server, _, err := startRPCServer(ctx, socketPath, testStore, tmpDir, testDBPath, log) if err != nil { diff --git a/cmd/bd/dep.go b/cmd/bd/dep.go index c2657bdb..fb283d19 100644 --- a/cmd/bd/dep.go +++ b/cmd/bd/dep.go @@ -5,9 +5,11 @@ import ( "encoding/json" "fmt" "os" + "path/filepath" "strings" "github.com/spf13/cobra" + "github.com/steveyegge/beads/internal/routing" "github.com/steveyegge/beads/internal/rpc" "github.com/steveyegge/beads/internal/storage/sqlite" "github.com/steveyegge/beads/internal/types" @@ -15,6 +17,14 @@ import ( "github.com/steveyegge/beads/internal/utils" ) +// getBeadsDir returns the .beads directory path, derived from the global dbPath. +func getBeadsDir() string { + if dbPath != "" { + return filepath.Dir(dbPath) + } + return "" +} + // isChildOf returns true if childID is a hierarchical child of parentID. // For example, "bd-abc.1" is a child of "bd-abc", and "bd-abc.1.2" is a child of "bd-abc.1". 
func isChildOf(childID, parentID string) bool { @@ -88,9 +98,15 @@ Examples: resolveArgs = &rpc.ResolveIDArgs{ID: args[1]} resp, err = daemonClient.ResolveID(resolveArgs) if err != nil { - FatalErrorRespectJSON("resolving dependency ID %s: %v", args[1], err) - } - if err := json.Unmarshal(resp.Data, &toID); err != nil { + // Resolution failed - try auto-converting to external ref (bd-lfiu) + beadsDir := getBeadsDir() + if extRef := routing.ResolveToExternalRef(args[1], beadsDir); extRef != "" { + toID = extRef + isExternalRef = true + } else { + FatalErrorRespectJSON("resolving dependency ID %s: %v", args[1], err) + } + } else if err := json.Unmarshal(resp.Data, &toID); err != nil { FatalErrorRespectJSON("unmarshaling resolved ID: %v", err) } } @@ -111,7 +127,14 @@ Examples: } else { toID, err = utils.ResolvePartialID(ctx, store, args[1]) if err != nil { - FatalErrorRespectJSON("resolving dependency ID %s: %v", args[1], err) + // Resolution failed - try auto-converting to external ref (bd-lfiu) + beadsDir := getBeadsDir() + if extRef := routing.ResolveToExternalRef(args[1], beadsDir); extRef != "" { + toID = extRef + isExternalRef = true + } else { + FatalErrorRespectJSON("resolving dependency ID %s: %v", args[1], err) + } } } } diff --git a/cmd/bd/doctor.go b/cmd/bd/doctor.go index 92164c6e..e065a8be 100644 --- a/cmd/bd/doctor.go +++ b/cmd/bd/doctor.go @@ -43,8 +43,8 @@ type doctorResult struct { Checks []doctorCheck `json:"checks"` OverallOK bool `json:"overall_ok"` CLIVersion string `json:"cli_version"` - Timestamp string `json:"timestamp,omitempty"` // bd-9cc: ISO8601 timestamp for historical tracking - Platform map[string]string `json:"platform,omitempty"` // bd-9cc: platform info for debugging + Timestamp string `json:"timestamp,omitempty"` // bd-9cc: ISO8601 timestamp for historical tracking + Platform map[string]string `json:"platform,omitempty"` // bd-9cc: platform info for debugging } var ( @@ -353,6 +353,42 @@ func applyFixesInteractive(path string, 
issues []doctorCheck) { // applyFixList applies a list of fixes and reports results func applyFixList(path string, fixes []doctorCheck) { + // Apply fixes in a dependency-aware order. + // Rough dependency chain: + // permissions/daemon cleanup → config sanity → DB integrity/migrations → DB↔JSONL sync. + order := []string{ + "Permissions", + "Daemon Health", + "Database Config", + "JSONL Config", + "Database Integrity", + "Database", + "Schema Compatibility", + "JSONL Integrity", + "DB-JSONL Sync", + } + priority := make(map[string]int, len(order)) + for i, name := range order { + priority[name] = i + } + slices.SortStableFunc(fixes, func(a, b doctorCheck) int { + pa, oka := priority[a.Name] + if !oka { + pa = 1000 + } + pb, okb := priority[b.Name] + if !okb { + pb = 1000 + } + if pa < pb { + return -1 + } + if pa > pb { + return 1 + } + return 0 + }) + fixedCount := 0 errorCount := 0 @@ -373,11 +409,11 @@ func applyFixList(path string, fixes []doctorCheck) { err = fix.Permissions(path) case "Database": err = fix.DatabaseVersion(path) - case "Schema Compatibility": - err = fix.SchemaCompatibility(path) case "Database Integrity": // Corruption detected - try recovery from JSONL err = fix.DatabaseCorruptionRecovery(path) + case "Schema Compatibility": + err = fix.SchemaCompatibility(path) case "Repo Fingerprint": err = fix.RepoFingerprint(path) case "Git Merge Driver": @@ -390,6 +426,8 @@ func applyFixList(path string, fixes []doctorCheck) { err = fix.DatabaseConfig(path) case "JSONL Config": err = fix.LegacyJSONLConfig(path) + case "JSONL Integrity": + err = fix.JSONLIntegrity(path) case "Deletions Manifest": err = fix.MigrateTombstones(path) case "Untracked Files": @@ -694,6 +732,13 @@ func runDiagnostics(path string) doctorResult { result.Checks = append(result.Checks, configValuesCheck) // Don't fail overall check for config value warnings, just warn + // Check 7b: JSONL integrity (malformed lines, missing IDs) + jsonlIntegrityCheck := 
convertWithCategory(doctor.CheckJSONLIntegrity(path), doctor.CategoryData) + result.Checks = append(result.Checks, jsonlIntegrityCheck) + if jsonlIntegrityCheck.Status == statusWarning || jsonlIntegrityCheck.Status == statusError { + result.OverallOK = false + } + // Check 8: Daemon health daemonCheck := convertWithCategory(doctor.CheckDaemonStatus(path, Version), doctor.CategoryRuntime) result.Checks = append(result.Checks, daemonCheck) @@ -757,6 +802,16 @@ func runDiagnostics(path string) doctorResult { result.Checks = append(result.Checks, mergeDriverCheck) // Don't fail overall check for merge driver, just warn + // Check 15a: Git working tree cleanliness (AGENTS.md hygiene) + gitWorkingTreeCheck := convertWithCategory(doctor.CheckGitWorkingTree(path), doctor.CategoryGit) + result.Checks = append(result.Checks, gitWorkingTreeCheck) + // Don't fail overall check for dirty working tree, just warn + + // Check 15b: Git upstream sync (ahead/behind/diverged) + gitUpstreamCheck := convertWithCategory(doctor.CheckGitUpstream(path), doctor.CategoryGit) + result.Checks = append(result.Checks, gitUpstreamCheck) + // Don't fail overall check for upstream drift, just warn + // Check 16: Metadata.json version tracking (bd-u4sb) metadataCheck := convertWithCategory(doctor.CheckMetadataVersionTracking(path, Version), doctor.CategoryMetadata) result.Checks = append(result.Checks, metadataCheck) diff --git a/cmd/bd/doctor/config_values.go b/cmd/bd/doctor/config_values.go index 45e7c995..71ebe5f9 100644 --- a/cmd/bd/doctor/config_values.go +++ b/cmd/bd/doctor/config_values.go @@ -316,6 +316,10 @@ func checkMetadataConfigValues(repoPath string) []string { // Validate jsonl_export filename if cfg.JSONLExport != "" { + switch cfg.JSONLExport { + case "deletions.jsonl", "interactions.jsonl", "molecules.jsonl": + issues = append(issues, fmt.Sprintf("metadata.json jsonl_export: %q is a system file and should not be configured as a JSONL export (expected issues.jsonl)", 
cfg.JSONLExport)) + } if strings.Contains(cfg.JSONLExport, string(os.PathSeparator)) || strings.Contains(cfg.JSONLExport, "/") { issues = append(issues, fmt.Sprintf("metadata.json jsonl_export: %q should be a filename, not a path", cfg.JSONLExport)) } @@ -353,7 +357,7 @@ func checkDatabaseConfigValues(repoPath string) []string { } // Open database in read-only mode - db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro") + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { return issues // Can't open database, skip } diff --git a/cmd/bd/doctor/config_values_test.go b/cmd/bd/doctor/config_values_test.go index 04bf60ba..1fc844d8 100644 --- a/cmd/bd/doctor/config_values_test.go +++ b/cmd/bd/doctor/config_values_test.go @@ -213,6 +213,21 @@ func TestCheckMetadataConfigValues(t *testing.T) { t.Error("expected issues for wrong jsonl extension") } }) + + t.Run("jsonl_export cannot be system file", func(t *testing.T) { + metadataContent := `{ + "database": "beads.db", + "jsonl_export": "interactions.jsonl" +}` + if err := os.WriteFile(filepath.Join(beadsDir, "metadata.json"), []byte(metadataContent), 0644); err != nil { + t.Fatalf("failed to write metadata.json: %v", err) + } + + issues := checkMetadataConfigValues(tmpDir) + if len(issues) == 0 { + t.Error("expected issues for system jsonl_export") + } + }) } func contains(s, substr string) bool { diff --git a/cmd/bd/doctor/database.go b/cmd/bd/doctor/database.go index 876fd7d4..5280b7d8 100644 --- a/cmd/bd/doctor/database.go +++ b/cmd/bd/doctor/database.go @@ -155,9 +155,9 @@ func CheckSchemaCompatibility(path string) DoctorCheck { } } - // Open database (bd-ckvw: This will run migrations and schema probe) + // Open database (bd-ckvw: schema probe) // Note: We can't use the global 'store' because doctor can check arbitrary paths - db, err := sql.Open("sqlite3", "file:"+dbPath+"?_pragma=foreign_keys(ON)&_pragma=busy_timeout(30000)") + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, 
true)) if err != nil { return DoctorCheck{ Name: "Schema Compatibility", @@ -244,7 +244,7 @@ func CheckDatabaseIntegrity(path string) DoctorCheck { } // Open database in read-only mode for integrity check - db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro&_pragma=busy_timeout(30000)") + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { // Check if JSONL recovery is possible jsonlCount, _, jsonlErr := CountJSONLIssues(filepath.Join(beadsDir, "issues.jsonl")) @@ -267,7 +267,7 @@ func CheckDatabaseIntegrity(path string) DoctorCheck { Status: StatusError, Message: "Failed to open database for integrity check", Detail: err.Error(), - Fix: "Database may be corrupted. Restore JSONL from git history, then run 'bd doctor --fix'", + Fix: "Run 'bd doctor --fix' to back up the corrupt DB and rebuild from JSONL (if available), or restore from backup", } } defer db.Close() @@ -297,7 +297,7 @@ func CheckDatabaseIntegrity(path string) DoctorCheck { Status: StatusError, Message: "Failed to run integrity check", Detail: err.Error(), - Fix: "Database may be corrupted. Restore JSONL from git history, then run 'bd doctor --fix'", + Fix: "Run 'bd doctor --fix' to back up the corrupt DB and rebuild from JSONL (if available), or restore from backup", } } defer rows.Close() @@ -342,22 +342,37 @@ func CheckDatabaseIntegrity(path string) DoctorCheck { Status: StatusError, Message: "Database corruption detected", Detail: strings.Join(results, "; "), - Fix: "Database may need recovery. 
Restore JSONL from git history, then run 'bd doctor --fix'", + Fix: "Run 'bd doctor --fix' to back up the corrupt DB and rebuild from JSONL (if available), or restore from backup", } } // CheckDatabaseJSONLSync checks if database and JSONL are in sync func CheckDatabaseJSONLSync(path string) DoctorCheck { beadsDir := filepath.Join(path, ".beads") - dbPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName) - // Find JSONL file - var jsonlPath string - for _, name := range []string{"issues.jsonl", "beads.jsonl"} { - testPath := filepath.Join(beadsDir, name) - if _, err := os.Stat(testPath); err == nil { - jsonlPath = testPath - break + // Resolve database path (respects metadata.json override). + dbPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName) + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil && cfg.Database != "" { + dbPath = cfg.DatabasePath(beadsDir) + } + + // Find JSONL file (respects metadata.json override when set). + jsonlPath := "" + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil { + if cfg.JSONLExport != "" && !isSystemJSONLFilename(cfg.JSONLExport) { + p := cfg.JSONLPath(beadsDir) + if _, err := os.Stat(p); err == nil { + jsonlPath = p + } + } + } + if jsonlPath == "" { + for _, name := range []string{"issues.jsonl", "beads.jsonl"} { + testPath := filepath.Join(beadsDir, name) + if _, err := os.Stat(testPath); err == nil { + jsonlPath = testPath + break + } } } @@ -383,7 +398,7 @@ func CheckDatabaseJSONLSync(path string) DoctorCheck { jsonlCount, jsonlPrefixes, jsonlErr := CountJSONLIssues(jsonlPath) // Single database open for all queries (instead of 3 separate opens) - db, err := sql.Open("sqlite3", dbPath) + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { // Database can't be opened. If JSONL has issues, suggest recovery. 
if jsonlErr == nil && jsonlCount > 0 { @@ -440,11 +455,16 @@ func CheckDatabaseJSONLSync(path string) DoctorCheck { // Use JSONL error if we got it earlier if jsonlErr != nil { + fixMsg := "Run 'bd doctor --fix' to attempt recovery" + if strings.Contains(jsonlErr.Error(), "malformed") { + fixMsg = "Run 'bd doctor --fix' to back up and regenerate the JSONL from the database" + } return DoctorCheck{ Name: "DB-JSONL Sync", Status: StatusWarning, Message: "Unable to read JSONL file", Detail: jsonlErr.Error(), + Fix: fixMsg, } } @@ -551,7 +571,7 @@ func FixDBJSONLSync(path string) error { // getDatabaseVersionFromPath reads the database version from the given path func getDatabaseVersionFromPath(dbPath string) string { - db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro") + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { return "unknown" } diff --git a/cmd/bd/doctor/fix/common.go b/cmd/bd/doctor/fix/common.go index f7276f3b..771f38f2 100644 --- a/cmd/bd/doctor/fix/common.go +++ b/cmd/bd/doctor/fix/common.go @@ -12,6 +12,13 @@ import ( // This prevents fork bombs when tests call functions that execute bd subcommands. var ErrTestBinary = fmt.Errorf("running as test binary - cannot execute bd subcommands") +func newBdCmd(bdBinary string, args ...string) *exec.Cmd { + fullArgs := append([]string{"--no-daemon"}, args...) + cmd := exec.Command(bdBinary, fullArgs...) // #nosec G204 -- bdBinary from validated executable path + cmd.Env = append(os.Environ(), "BEADS_NO_DAEMON=1") + return cmd +} + // getBdBinary returns the path to the bd binary to use for fix operations. // It prefers the current executable to avoid command injection attacks. // Returns ErrTestBinary if running as a test binary to prevent fork bombs. 
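The `newBdCmd` helper added to fix/common.go above is the pattern all the fixers below migrate to: every `bd` subcommand spawned during a fix gets `--no-daemon` prepended and `BEADS_NO_DAEMON=1` in its environment, so a fix never races against (or accidentally spawns) a background daemon. A minimal standalone sketch of that helper — the `main` demo and the `"bd"` binary name are illustrative, not part of the patch:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// newBdCmd wraps a bd subcommand invocation so it always runs daemon-free:
// the --no-daemon flag is prepended and BEADS_NO_DAEMON=1 is set in the
// child's environment (belt and suspenders, matching fix/common.go).
func newBdCmd(bdBinary string, args ...string) *exec.Cmd {
	fullArgs := append([]string{"--no-daemon"}, args...)
	cmd := exec.Command(bdBinary, fullArgs...)
	cmd.Env = append(os.Environ(), "BEADS_NO_DAEMON=1")
	return cmd
}

func main() {
	cmd := newBdCmd("bd", "migrate")
	// Args[0] is the binary name; the injected flag comes first.
	fmt.Println(cmd.Args[1], cmd.Args[2]) // → --no-daemon migrate
}
```

Because the flag is injected in one place, callers like `Daemon`, `GitHooks`, and `DatabaseVersion` in the hunks below only pass their own subcommand arguments.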
diff --git a/cmd/bd/doctor/fix/daemon.go b/cmd/bd/doctor/fix/daemon.go index 79a892de..e48c41df 100644 --- a/cmd/bd/doctor/fix/daemon.go +++ b/cmd/bd/doctor/fix/daemon.go @@ -3,7 +3,6 @@ package fix import ( "fmt" "os" - "os/exec" "path/filepath" ) @@ -36,7 +35,7 @@ func Daemon(path string) error { } // Run bd daemons killall to clean up stale daemons - cmd := exec.Command(bdBinary, "daemons", "killall") // #nosec G204 -- bdBinary from validated executable path + cmd := newBdCmd(bdBinary, "daemons", "killall") cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr diff --git a/cmd/bd/doctor/fix/database_config.go b/cmd/bd/doctor/fix/database_config.go index 2c8bc539..34d9686d 100644 --- a/cmd/bd/doctor/fix/database_config.go +++ b/cmd/bd/doctor/fix/database_config.go @@ -32,6 +32,13 @@ func DatabaseConfig(path string) error { fixed := false + // Never treat system JSONL files as a JSONL export configuration. + if isSystemJSONLFilename(cfg.JSONLExport) { + fmt.Printf(" Updating jsonl_export: %s → issues.jsonl\n", cfg.JSONLExport) + cfg.JSONLExport = "issues.jsonl" + fixed = true + } + // Check if configured JSONL exists if cfg.JSONLExport != "" { jsonlPath := cfg.JSONLPath(beadsDir) @@ -99,7 +106,15 @@ func findActualJSONLFile(beadsDir string) string { strings.Contains(lowerName, ".orig") || strings.Contains(lowerName, ".bak") || strings.Contains(lowerName, "~") || - strings.HasPrefix(lowerName, "backup_") { + strings.HasPrefix(lowerName, "backup_") || + // System files are not JSONL exports. 
+ name == "deletions.jsonl" || + name == "interactions.jsonl" || + name == "molecules.jsonl" || + // Git merge conflict artifacts (e.g., issues.base.jsonl, issues.left.jsonl) + strings.Contains(lowerName, ".base.jsonl") || + strings.Contains(lowerName, ".left.jsonl") || + strings.Contains(lowerName, ".right.jsonl") { continue } @@ -121,6 +136,15 @@ func findActualJSONLFile(beadsDir string) string { return candidates[0] } +func isSystemJSONLFilename(name string) bool { + switch name { + case "deletions.jsonl", "interactions.jsonl", "molecules.jsonl": + return true + default: + return false + } +} + // LegacyJSONLConfig migrates from legacy beads.jsonl to canonical issues.jsonl. // This renames the file, updates metadata.json, and updates .gitattributes if present. // bd-6xd: issues.jsonl is the canonical filename diff --git a/cmd/bd/doctor/fix/database_config_test.go b/cmd/bd/doctor/fix/database_config_test.go index 42f2642b..5ae00a2a 100644 --- a/cmd/bd/doctor/fix/database_config_test.go +++ b/cmd/bd/doctor/fix/database_config_test.go @@ -220,3 +220,53 @@ func TestLegacyJSONLConfig_UpdatesGitattributes(t *testing.T) { t.Errorf("Expected .gitattributes to reference issues.jsonl, got: %q", string(content)) } } + +// TestFindActualJSONLFile_SkipsSystemFiles ensures system JSONL files are never treated as JSONL exports. +func TestFindActualJSONLFile_SkipsSystemFiles(t *testing.T) { + tmpDir := t.TempDir() + + // Only system files → no candidates. + if err := os.WriteFile(filepath.Join(tmpDir, "interactions.jsonl"), []byte(`{"id":"x"}`), 0644); err != nil { + t.Fatal(err) + } + if got := findActualJSONLFile(tmpDir); got != "" { + t.Fatalf("expected empty result, got %q", got) + } + + // System + legacy export → legacy wins. 
+ if err := os.WriteFile(filepath.Join(tmpDir, "beads.jsonl"), []byte(`{"id":"x"}`), 0644); err != nil { + t.Fatal(err) + } + if got := findActualJSONLFile(tmpDir); got != "beads.jsonl" { + t.Fatalf("expected beads.jsonl, got %q", got) + } +} + +func TestDatabaseConfigFix_RejectsSystemJSONLExport(t *testing.T) { + tmpDir := t.TempDir() + beadsDir := filepath.Join(tmpDir, ".beads") + if err := os.Mkdir(beadsDir, 0755); err != nil { + t.Fatalf("Failed to create .beads dir: %v", err) + } + + if err := os.WriteFile(filepath.Join(beadsDir, "interactions.jsonl"), []byte(`{"id":"x"}`), 0644); err != nil { + t.Fatalf("Failed to create interactions.jsonl: %v", err) + } + + cfg := &configfile.Config{Database: "beads.db", JSONLExport: "interactions.jsonl"} + if err := cfg.Save(beadsDir); err != nil { + t.Fatalf("Failed to save config: %v", err) + } + + if err := DatabaseConfig(tmpDir); err != nil { + t.Fatalf("DatabaseConfig failed: %v", err) + } + + updated, err := configfile.Load(beadsDir) + if err != nil { + t.Fatalf("Failed to load updated config: %v", err) + } + if updated.JSONLExport != "issues.jsonl" { + t.Fatalf("expected issues.jsonl, got %q", updated.JSONLExport) + } +} diff --git a/cmd/bd/doctor/fix/database_integrity.go b/cmd/bd/doctor/fix/database_integrity.go new file mode 100644 index 00000000..23d048b6 --- /dev/null +++ b/cmd/bd/doctor/fix/database_integrity.go @@ -0,0 +1,116 @@ +package fix + +import ( + "fmt" + "os" + "path/filepath" + "time" + + "github.com/steveyegge/beads/internal/beads" + "github.com/steveyegge/beads/internal/configfile" +) + +// DatabaseIntegrity attempts to recover from database corruption by: +// 1. Backing up the corrupt database (and WAL/SHM if present) +// 2. Re-initializing the database from the working tree JSONL export +// +// This is intentionally conservative: it will not delete JSONL, and it preserves the +// original DB as a backup for forensic recovery. 
+func DatabaseIntegrity(path string) error { + if err := validateBeadsWorkspace(path); err != nil { + return err + } + + absPath, err := filepath.Abs(path) + if err != nil { + return fmt.Errorf("failed to resolve path: %w", err) + } + + beadsDir := filepath.Join(absPath, ".beads") + + // Best-effort: stop any running daemon to reduce the chance of DB file locks. + _ = Daemon(absPath) + + // Resolve database path (respects metadata.json database override). + var dbPath string + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil && cfg.Database != "" { + dbPath = cfg.DatabasePath(beadsDir) + } else { + dbPath = filepath.Join(beadsDir, beads.CanonicalDatabaseName) + } + + // Find JSONL source of truth. + jsonlPath := "" + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil { + if cfg.JSONLExport != "" && !isSystemJSONLFilename(cfg.JSONLExport) { + candidate := cfg.JSONLPath(beadsDir) + if _, err := os.Stat(candidate); err == nil { + jsonlPath = candidate + } + } + } + if jsonlPath == "" { + for _, name := range []string{"issues.jsonl", "beads.jsonl"} { + candidate := filepath.Join(beadsDir, name) + if _, err := os.Stat(candidate); err == nil { + jsonlPath = candidate + break + } + } + } + if jsonlPath == "" { + return fmt.Errorf("cannot auto-recover: no JSONL export found in %s", beadsDir) + } + + // Back up corrupt DB and its sidecar files. + ts := time.Now().UTC().Format("20060102T150405Z") + backupDB := dbPath + "." + ts + ".corrupt.backup.db" + if err := moveFile(dbPath, backupDB); err != nil { + // Retry once after attempting to kill daemons again (helps on platforms with strict file locks). + _ = Daemon(absPath) + if err2 := moveFile(dbPath, backupDB); err2 != nil { + // Prefer the original error (more likely root cause). 
+ return fmt.Errorf("failed to back up database: %w", err) + } + } + for _, suffix := range []string{"-wal", "-shm", "-journal"} { + sidecar := dbPath + suffix + if _, err := os.Stat(sidecar); err == nil { + _ = moveFile(sidecar, backupDB+suffix) // best effort + } + } + + // Rebuild by importing from the working tree JSONL into a fresh database. + bdBinary, err := getBdBinary() + if err != nil { + return err + } + + // Use import (not init) so we always hydrate from the working tree JSONL, not git-tracked blobs. + args := []string{"--db", dbPath, "import", "-i", jsonlPath, "--force", "--no-git-history"} + cmd := newBdCmd(bdBinary, args...) + cmd.Dir = absPath + cmd.Stdout = os.Stdout + cmd.Stderr = os.Stderr + + if err := cmd.Run(); err != nil { + // Best-effort rollback: attempt to restore the original DB, while preserving the backup. + failedTS := time.Now().UTC().Format("20060102T150405Z") + if _, statErr := os.Stat(dbPath); statErr == nil { + failedDB := dbPath + "." + failedTS + ".failed.init.db" + _ = moveFile(dbPath, failedDB) + for _, suffix := range []string{"-wal", "-shm", "-journal"} { + _ = moveFile(dbPath+suffix, failedDB+suffix) + } + } + _ = copyFile(backupDB, dbPath) + for _, suffix := range []string{"-wal", "-shm", "-journal"} { + if _, statErr := os.Stat(backupDB + suffix); statErr == nil { + _ = copyFile(backupDB+suffix, dbPath+suffix) + } + } + return fmt.Errorf("failed to rebuild database from JSONL: %w (backup: %s)", err, backupDB) + } + + return nil +} diff --git a/cmd/bd/doctor/fix/fs.go b/cmd/bd/doctor/fix/fs.go new file mode 100644 index 00000000..fddb48e1 --- /dev/null +++ b/cmd/bd/doctor/fix/fs.go @@ -0,0 +1,57 @@ +package fix + +import ( + "errors" + "fmt" + "io" + "os" + "syscall" +) + +var ( + renameFile = os.Rename + removeFile = os.Remove + openFileRO = os.Open + openFileRW = os.OpenFile +) + +func moveFile(src, dst string) error { + if err := renameFile(src, dst); err == nil { + return nil + } else if isEXDEV(err) { + if err := 
copyFile(src, dst); err != nil { + return err + } + if err := removeFile(src); err != nil { + return fmt.Errorf("failed to remove source after copy: %w", err) + } + return nil + } else { + return err + } +} + +func copyFile(src, dst string) error { + in, err := openFileRO(src) // #nosec G304 -- src is within the workspace + if err != nil { + return err + } + defer in.Close() + out, err := openFileRW(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644) + if err != nil { + return err + } + defer func() { _ = out.Close() }() + if _, err := io.Copy(out, in); err != nil { + return err + } + return out.Close() +} + +func isEXDEV(err error) bool { + var linkErr *os.LinkError + if errors.As(err, &linkErr) { + return errors.Is(linkErr.Err, syscall.EXDEV) + } + return errors.Is(err, syscall.EXDEV) +} diff --git a/cmd/bd/doctor/fix/fs_test.go b/cmd/bd/doctor/fix/fs_test.go new file mode 100644 index 00000000..db242f3c --- /dev/null +++ b/cmd/bd/doctor/fix/fs_test.go @@ -0,0 +1,71 @@ +package fix + +import ( + "errors" + "os" + "path/filepath" + "syscall" + "testing" +) + +func TestMoveFile_EXDEV_FallsBackToCopy(t *testing.T) { + root := t.TempDir() + src := filepath.Join(root, "src.txt") + dst := filepath.Join(root, "dst.txt") + if err := os.WriteFile(src, []byte("hello"), 0644); err != nil { + t.Fatal(err) + } + + oldRename := renameFile + defer func() { renameFile = oldRename }() + renameFile = func(oldpath, newpath string) error { + return &os.LinkError{Op: "rename", Old: oldpath, New: newpath, Err: syscall.EXDEV} + } + + if err := moveFile(src, dst); err != nil { + t.Fatalf("moveFile failed: %v", err) + } + if _, err := os.Stat(src); !os.IsNotExist(err) { + t.Fatalf("expected src to be removed, stat err=%v", err) + } + data, err := os.ReadFile(dst) + if err != nil { + t.Fatalf("read dst: %v", err) + } + if string(data) != "hello" { + t.Fatalf("dst contents=%q", string(data)) + } +} + +func TestMoveFile_EXDEV_CopyFails_LeavesSource(t *testing.T) { + root := t.TempDir() + src := 
filepath.Join(root, "src.txt") + dst := filepath.Join(root, "dst.txt") + if err := os.WriteFile(src, []byte("hello"), 0644); err != nil { + t.Fatal(err) + } + + oldRename := renameFile + oldOpenRW := openFileRW + defer func() { + renameFile = oldRename + openFileRW = oldOpenRW + }() + renameFile = func(oldpath, newpath string) error { + return &os.LinkError{Op: "rename", Old: oldpath, New: newpath, Err: syscall.EXDEV} + } + openFileRW = func(name string, flag int, perm os.FileMode) (*os.File, error) { + return nil, &os.PathError{Op: "open", Path: name, Err: syscall.ENOSPC} + } + + err := moveFile(src, dst) + if err == nil { + t.Fatalf("expected error") + } + if !errors.Is(err, syscall.ENOSPC) { + t.Fatalf("expected ENOSPC, got %v", err) + } + if _, err := os.Stat(src); err != nil { + t.Fatalf("expected src to remain, stat err=%v", err) + } +} diff --git a/cmd/bd/doctor/fix/hooks.go b/cmd/bd/doctor/fix/hooks.go index 12cc67fc..d46131b1 100644 --- a/cmd/bd/doctor/fix/hooks.go +++ b/cmd/bd/doctor/fix/hooks.go @@ -28,7 +28,7 @@ func GitHooks(path string) error { } // Run bd hooks install - cmd := exec.Command(bdBinary, "hooks", "install") // #nosec G204 -- bdBinary from validated executable path + cmd := newBdCmd(bdBinary, "hooks", "install") cmd.Dir = path // Set working directory without changing process dir cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr diff --git a/cmd/bd/doctor/fix/jsonl_integrity.go b/cmd/bd/doctor/fix/jsonl_integrity.go new file mode 100644 index 00000000..11273298 --- /dev/null +++ b/cmd/bd/doctor/fix/jsonl_integrity.go @@ -0,0 +1,87 @@ +package fix + +import ( + "fmt" + "os" + "path/filepath" + "time" + + "github.com/steveyegge/beads/internal/beads" + "github.com/steveyegge/beads/internal/configfile" + "github.com/steveyegge/beads/internal/utils" +) + +// JSONLIntegrity backs up a malformed JSONL export and regenerates it from the database. +// This is safe only when a database exists and is readable. 
+func JSONLIntegrity(path string) error { + if err := validateBeadsWorkspace(path); err != nil { + return err + } + + absPath, err := filepath.Abs(path) + if err != nil { + return fmt.Errorf("failed to resolve path: %w", err) + } + + beadsDir := filepath.Join(absPath, ".beads") + + // Resolve db path. + dbPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName) + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil && cfg.Database != "" { + dbPath = cfg.DatabasePath(beadsDir) + } + if _, err := os.Stat(dbPath); os.IsNotExist(err) { + return fmt.Errorf("cannot auto-repair JSONL: no database found") + } + + // Resolve JSONL export path. + jsonlPath := "" + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil { + if cfg.JSONLExport != "" && !isSystemJSONLFilename(cfg.JSONLExport) { + p := cfg.JSONLPath(beadsDir) + if _, err := os.Stat(p); err == nil { + jsonlPath = p + } + } + } + if jsonlPath == "" { + p := utils.FindJSONLInDir(beadsDir) + if _, err := os.Stat(p); err == nil { + jsonlPath = p + } + } + if jsonlPath == "" { + return fmt.Errorf("cannot auto-repair JSONL: no JSONL file found") + } + + // Back up the JSONL. + ts := time.Now().UTC().Format("20060102T150405Z") + backup := jsonlPath + "." + ts + ".corrupt.backup.jsonl" + if err := moveFile(jsonlPath, backup); err != nil { + return fmt.Errorf("failed to back up JSONL: %w", err) + } + + binary, err := getBdBinary() + if err != nil { + _ = moveFile(backup, jsonlPath) + return err + } + + // Re-export from DB. + cmd := newBdCmd(binary, "--db", dbPath, "export", "-o", jsonlPath, "--force") + cmd.Dir = absPath + cmd.Stdout = os.Stdout + cmd.Stderr = os.Stderr + if err := cmd.Run(); err != nil { + // Best-effort rollback: restore the original JSONL, but keep the backup. + failedTS := time.Now().UTC().Format("20060102T150405Z") + if _, statErr := os.Stat(jsonlPath); statErr == nil { + failed := jsonlPath + "." 
+ failedTS + ".failed.regen.jsonl" + _ = moveFile(jsonlPath, failed) + } + _ = copyFile(backup, jsonlPath) + return fmt.Errorf("failed to regenerate JSONL from database: %w (backup: %s)", err, backup) + } + + return nil +} diff --git a/cmd/bd/doctor/fix/migrate.go b/cmd/bd/doctor/fix/migrate.go index f03d112f..2c20abb4 100644 --- a/cmd/bd/doctor/fix/migrate.go +++ b/cmd/bd/doctor/fix/migrate.go @@ -3,8 +3,10 @@ package fix import ( "fmt" "os" - "os/exec" "path/filepath" + + "github.com/steveyegge/beads/internal/beads" + "github.com/steveyegge/beads/internal/configfile" ) // DatabaseVersion fixes database version mismatches by running bd migrate, @@ -23,12 +25,15 @@ func DatabaseVersion(path string) error { // Check if database exists - if not, run init instead of migrate (bd-4h9) beadsDir := filepath.Join(path, ".beads") - dbPath := filepath.Join(beadsDir, "beads.db") + dbPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName) + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil && cfg.Database != "" { + dbPath = cfg.DatabasePath(beadsDir) + } if _, err := os.Stat(dbPath); os.IsNotExist(err) { // No database - this is a fresh clone, run bd init fmt.Println("→ No database found, running 'bd init' to hydrate from JSONL...") - cmd := exec.Command(bdBinary, "init") // #nosec G204 -- bdBinary from validated executable path + cmd := newBdCmd(bdBinary, "--db", dbPath, "init") cmd.Dir = path cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr @@ -41,8 +46,8 @@ func DatabaseVersion(path string) error { } // Database exists - run bd migrate - cmd := exec.Command(bdBinary, "migrate") // #nosec G204 -- bdBinary from validated executable path - cmd.Dir = path // Set working directory without changing process dir + cmd := newBdCmd(bdBinary, "--db", dbPath, "migrate") + cmd.Dir = path // Set working directory without changing process dir cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr diff --git a/cmd/bd/doctor/fix/repo_fingerprint.go 
b/cmd/bd/doctor/fix/repo_fingerprint.go index 3a689071..4ca9644c 100644 --- a/cmd/bd/doctor/fix/repo_fingerprint.go +++ b/cmd/bd/doctor/fix/repo_fingerprint.go @@ -3,7 +3,6 @@ package fix import ( "fmt" "os" - "os/exec" "path/filepath" "strings" ) @@ -31,9 +30,9 @@ func readLineUnbuffered() (string, error) { // RepoFingerprint fixes repo fingerprint mismatches by prompting the user // for which action to take. This is interactive because the consequences // differ significantly between options: -// 1. Update repo ID (if URL changed or bd upgraded) -// 2. Reinitialize database (if wrong database was copied) -// 3. Skip (do nothing) +// 1. Update repo ID (if URL changed or bd upgraded) +// 2. Reinitialize database (if wrong database was copied) +// 3. Skip (do nothing) func RepoFingerprint(path string) error { // Validate workspace if err := validateBeadsWorkspace(path); err != nil { @@ -67,7 +66,7 @@ func RepoFingerprint(path string) error { case "1": // Run bd migrate --update-repo-id fmt.Println(" → Running 'bd migrate --update-repo-id'...") - cmd := exec.Command(bdBinary, "migrate", "--update-repo-id") // #nosec G204 -- bdBinary from validated executable path + cmd := newBdCmd(bdBinary, "migrate", "--update-repo-id") cmd.Dir = path cmd.Stdin = os.Stdin // Allow user to respond to migrate's confirmation prompt cmd.Stdout = os.Stdout @@ -105,7 +104,7 @@ func RepoFingerprint(path string) error { _ = os.Remove(dbPath + "-shm") fmt.Println(" → Running 'bd init'...") - cmd := exec.Command(bdBinary, "init", "--quiet") // #nosec G204 -- bdBinary from validated executable path + cmd := newBdCmd(bdBinary, "init", "--quiet") cmd.Dir = path cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr diff --git a/cmd/bd/doctor/fix/sqlite_open.go b/cmd/bd/doctor/fix/sqlite_open.go new file mode 100644 index 00000000..373b81c8 --- /dev/null +++ b/cmd/bd/doctor/fix/sqlite_open.go @@ -0,0 +1,52 @@ +package fix + +import ( + "fmt" + "os" + "strings" + "time" +) + +func sqliteConnString(path 
 string, readOnly bool) string {
+	path = strings.TrimSpace(path)
+	if path == "" {
+		return ""
+	}
+
+	busy := 30 * time.Second
+	if v := strings.TrimSpace(os.Getenv("BD_LOCK_TIMEOUT")); v != "" {
+		if d, err := time.ParseDuration(v); err == nil {
+			busy = d
+		}
+	}
+	busyMs := int64(busy / time.Millisecond)
+
+	if strings.HasPrefix(path, "file:") {
+		conn := path
+		sep := "?"
+		if strings.Contains(conn, "?") {
+			sep = "&"
+		}
+		if readOnly && !strings.Contains(conn, "mode=") {
+			conn += sep + "mode=ro"
+			sep = "&"
+		}
+		if !strings.Contains(conn, "_pragma=busy_timeout") {
+			conn += fmt.Sprintf("%s_pragma=busy_timeout(%d)", sep, busyMs)
+			sep = "&"
+		}
+		if !strings.Contains(conn, "_pragma=foreign_keys") {
+			conn += sep + "_pragma=foreign_keys(ON)"
+			sep = "&"
+		}
+		if !strings.Contains(conn, "_time_format=") {
+			conn += sep + "_time_format=sqlite"
+		}
+		return conn
+	}
+
+	if readOnly {
+		return fmt.Sprintf("file:%s?mode=ro&_pragma=foreign_keys(ON)&_pragma=busy_timeout(%d)&_time_format=sqlite", path, busyMs)
+	}
+	return fmt.Sprintf("file:%s?_pragma=foreign_keys(ON)&_pragma=busy_timeout(%d)&_time_format=sqlite", path, busyMs)
+}
diff --git a/cmd/bd/doctor/fix/sync.go b/cmd/bd/doctor/fix/sync.go
index 4024cce6..7224326e 100644
--- a/cmd/bd/doctor/fix/sync.go
+++ b/cmd/bd/doctor/fix/sync.go
@@ -6,7 +6,6 @@ import (
 	"encoding/json"
 	"fmt"
 	"os"
-	"os/exec"
 	"path/filepath"
 
 	_ "github.com/ncruces/go-sqlite3/driver"
@@ -38,13 +37,23 @@ func DBJSONLSync(path string) error {
 	// Find JSONL file
 	var jsonlPath string
-	issuesJSONL := filepath.Join(beadsDir, "issues.jsonl")
-	beadsJSONL := filepath.Join(beadsDir, "beads.jsonl")
+	if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil {
+		if cfg.JSONLExport != "" && !isSystemJSONLFilename(cfg.JSONLExport) {
+			p := cfg.JSONLPath(beadsDir)
+			if _, err := os.Stat(p); err == nil {
+				jsonlPath = p
+			}
+		}
+	}
+	if jsonlPath == "" {
+		issuesJSONL := filepath.Join(beadsDir, "issues.jsonl")
+		beadsJSONL :=
filepath.Join(beadsDir, "beads.jsonl") - if _, err := os.Stat(issuesJSONL); err == nil { - jsonlPath = issuesJSONL - } else if _, err := os.Stat(beadsJSONL); err == nil { - jsonlPath = beadsJSONL + if _, err := os.Stat(issuesJSONL); err == nil { + jsonlPath = issuesJSONL + } else if _, err := os.Stat(beadsJSONL); err == nil { + jsonlPath = beadsJSONL + } } // Check if both database and JSONL exist @@ -102,21 +111,36 @@ func DBJSONLSync(path string) error { return err } - // Run the appropriate sync command - var cmd *exec.Cmd if syncDirection == "export" { // Export DB to JSONL file (must specify -o to write to file, not stdout) - jsonlOutputPath := filepath.Join(beadsDir, "issues.jsonl") - cmd = exec.Command(bdBinary, "export", "-o", jsonlOutputPath, "--force") // #nosec G204 -- bdBinary from validated executable path - } else { - cmd = exec.Command(bdBinary, "sync", "--import-only") // #nosec G204 -- bdBinary from validated executable path + jsonlOutputPath := jsonlPath + exportCmd := newBdCmd(bdBinary, "--db", dbPath, "export", "-o", jsonlOutputPath, "--force") + exportCmd.Dir = path // Set working directory without changing process dir + exportCmd.Stdout = os.Stdout + exportCmd.Stderr = os.Stderr + if err := exportCmd.Run(); err != nil { + return fmt.Errorf("failed to export database to JSONL: %w", err) + } + + // Staleness check uses last_import_time. After exporting, JSONL mtime is newer, + // so mark the DB as fresh by running a no-op import (skip existing issues). 
+ markFreshCmd := newBdCmd(bdBinary, "--db", dbPath, "import", "-i", jsonlOutputPath, "--force", "--skip-existing", "--no-git-history") + markFreshCmd.Dir = path + markFreshCmd.Stdout = os.Stdout + markFreshCmd.Stderr = os.Stderr + if err := markFreshCmd.Run(); err != nil { + return fmt.Errorf("failed to mark database as fresh after export: %w", err) + } + + return nil } - cmd.Dir = path // Set working directory without changing process dir - cmd.Stdout = os.Stdout - cmd.Stderr = os.Stderr + importCmd := newBdCmd(bdBinary, "--db", dbPath, "sync", "--import-only") + importCmd.Dir = path // Set working directory without changing process dir + importCmd.Stdout = os.Stdout + importCmd.Stderr = os.Stderr - if err := cmd.Run(); err != nil { + if err := importCmd.Run(); err != nil { return fmt.Errorf("failed to sync database with JSONL: %w", err) } @@ -125,7 +149,7 @@ func DBJSONLSync(path string) error { // countDatabaseIssues counts the number of issues in the database. func countDatabaseIssues(dbPath string) (int, error) { - db, err := sql.Open("sqlite3", dbPath) + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { return 0, fmt.Errorf("failed to open database: %w", err) } diff --git a/cmd/bd/doctor/fix/sync_branch.go b/cmd/bd/doctor/fix/sync_branch.go index 06a2388d..88ac1bcc 100644 --- a/cmd/bd/doctor/fix/sync_branch.go +++ b/cmd/bd/doctor/fix/sync_branch.go @@ -32,8 +32,7 @@ func SyncBranchConfig(path string) error { } // Set sync.branch using bd config set - // #nosec G204 - bdBinary is controlled by getBdBinary() which returns os.Executable() - setCmd := exec.Command(bdBinary, "config", "set", "sync.branch", currentBranch) + setCmd := newBdCmd(bdBinary, "config", "set", "sync.branch", currentBranch) setCmd.Dir = path if output, err := setCmd.CombinedOutput(); err != nil { return fmt.Errorf("failed to set sync.branch: %w\nOutput: %s", err, string(output)) diff --git a/cmd/bd/doctor/fix/validation.go b/cmd/bd/doctor/fix/validation.go 
index 6f278a76..297f8cea 100644
--- a/cmd/bd/doctor/fix/validation.go
+++ b/cmd/bd/doctor/fix/validation.go
@@ -233,5 +233,5 @@ func ChildParentDependencies(path string) error {
 // openDB opens a SQLite database for read-write access
 func openDB(dbPath string) (*sql.DB, error) {
-	return sql.Open("sqlite3", dbPath)
+	return sql.Open("sqlite3", sqliteConnString(dbPath, false))
 }
diff --git a/cmd/bd/doctor/git.go b/cmd/bd/doctor/git.go
index ab373ff7..b3d38725 100644
--- a/cmd/bd/doctor/git.go
+++ b/cmd/bd/doctor/git.go
@@ -78,6 +78,173 @@ func CheckGitHooks() DoctorCheck {
 	}
 }
 
+// CheckGitWorkingTree checks if the git working tree is clean.
+// This helps prevent leaving work stranded (AGENTS.md: keep git state clean).
+func CheckGitWorkingTree(path string) DoctorCheck {
+	cmd := exec.Command("git", "rev-parse", "--git-dir")
+	cmd.Dir = path
+	if err := cmd.Run(); err != nil {
+		return DoctorCheck{
+			Name:    "Git Working Tree",
+			Status:  StatusOK,
+			Message: "N/A (not a git repository)",
+		}
+	}
+
+	cmd = exec.Command("git", "status", "--porcelain")
+	cmd.Dir = path
+	out, err := cmd.Output()
+	if err != nil {
+		return DoctorCheck{
+			Name:    "Git Working Tree",
+			Status:  StatusWarning,
+			Message: "Unable to check git status",
+			Detail:  err.Error(),
+			Fix:     "Run 'git status' and commit/stash changes before syncing",
+		}
+	}
+
+	status := strings.TrimSpace(string(out))
+	if status == "" {
+		return DoctorCheck{
+			Name:    "Git Working Tree",
+			Status:  StatusOK,
+			Message: "Clean",
+		}
+	}
+
+	// Show a small sample of paths for quick debugging.
+ lines := strings.Split(status, "\n") + maxLines := 8 + if len(lines) > maxLines { + lines = append(lines[:maxLines], "…") + } + + return DoctorCheck{ + Name: "Git Working Tree", + Status: StatusWarning, + Message: "Uncommitted changes present", + Detail: strings.Join(lines, "\n"), + Fix: "Commit or stash changes, then follow AGENTS.md: git pull --rebase && git push", + } +} + +// CheckGitUpstream checks whether the current branch is up to date with its upstream. +// This catches common "forgot to pull/push" failure modes (AGENTS.md: pull --rebase, push). +func CheckGitUpstream(path string) DoctorCheck { + cmd := exec.Command("git", "rev-parse", "--git-dir") + cmd.Dir = path + if err := cmd.Run(); err != nil { + return DoctorCheck{ + Name: "Git Upstream", + Status: StatusOK, + Message: "N/A (not a git repository)", + } + } + + // Detect detached HEAD. + cmd = exec.Command("git", "symbolic-ref", "--short", "HEAD") + cmd.Dir = path + branchOut, err := cmd.Output() + if err != nil { + return DoctorCheck{ + Name: "Git Upstream", + Status: StatusWarning, + Message: "Detached HEAD (no branch)", + Fix: "Check out a branch before syncing", + } + } + branch := strings.TrimSpace(string(branchOut)) + + cmd = exec.Command("git", "rev-parse", "--abbrev-ref", "--symbolic-full-name", "@{u}") + cmd.Dir = path + upOut, err := cmd.Output() + if err != nil { + return DoctorCheck{ + Name: "Git Upstream", + Status: StatusWarning, + Message: fmt.Sprintf("No upstream configured for %s", branch), + Fix: fmt.Sprintf("Set upstream then push: git push -u origin %s", branch), + } + } + upstream := strings.TrimSpace(string(upOut)) + + ahead, aheadErr := gitRevListCount(path, "@{u}..HEAD") + behind, behindErr := gitRevListCount(path, "HEAD..@{u}") + if aheadErr != nil || behindErr != nil { + detailParts := []string{} + if aheadErr != nil { + detailParts = append(detailParts, "ahead: "+aheadErr.Error()) + } + if behindErr != nil { + detailParts = append(detailParts, "behind: 
"+behindErr.Error()) + } + return DoctorCheck{ + Name: "Git Upstream", + Status: StatusWarning, + Message: fmt.Sprintf("Unable to compare with upstream (%s)", upstream), + Detail: strings.Join(detailParts, "; "), + Fix: "Run 'git fetch' then check: git status -sb", + } + } + + if ahead == 0 && behind == 0 { + return DoctorCheck{ + Name: "Git Upstream", + Status: StatusOK, + Message: fmt.Sprintf("Up to date (%s)", upstream), + Detail: fmt.Sprintf("Branch: %s", branch), + } + } + + if ahead > 0 && behind == 0 { + return DoctorCheck{ + Name: "Git Upstream", + Status: StatusWarning, + Message: fmt.Sprintf("Ahead of upstream by %d commit(s)", ahead), + Detail: fmt.Sprintf("Branch: %s, upstream: %s", branch, upstream), + Fix: "Run 'git push' (AGENTS.md: git pull --rebase && git push)", + } + } + + if behind > 0 && ahead == 0 { + return DoctorCheck{ + Name: "Git Upstream", + Status: StatusWarning, + Message: fmt.Sprintf("Behind upstream by %d commit(s)", behind), + Detail: fmt.Sprintf("Branch: %s, upstream: %s", branch, upstream), + Fix: "Run 'git pull --rebase' (then re-run bd sync / bd doctor)", + } + } + + return DoctorCheck{ + Name: "Git Upstream", + Status: StatusWarning, + Message: fmt.Sprintf("Diverged from upstream (ahead %d, behind %d)", ahead, behind), + Detail: fmt.Sprintf("Branch: %s, upstream: %s", branch, upstream), + Fix: "Run 'git pull --rebase' then 'git push'", + } +} + +func gitRevListCount(path string, rangeExpr string) (int, error) { + cmd := exec.Command("git", "rev-list", "--count", rangeExpr) // #nosec G204 -- fixed args + cmd.Dir = path + out, err := cmd.Output() + if err != nil { + return 0, err + } + countStr := strings.TrimSpace(string(out)) + if countStr == "" { + return 0, nil + } + + var n int + if _, err := fmt.Sscanf(countStr, "%d", &n); err != nil { + return 0, err + } + return n, nil +} + // CheckSyncBranchHookCompatibility checks if pre-push hook is compatible with sync-branch mode. 
// When sync-branch is configured, the pre-push hook must have the sync-branch bypass logic // (added in version 0.29.0). Without it, users experience circular "bd sync" failures (issue #532). @@ -664,5 +831,5 @@ func CheckOrphanedIssues(path string) DoctorCheck { // openDBReadOnly opens a SQLite database in read-only mode func openDBReadOnly(dbPath string) (*sql.DB, error) { - return sql.Open("sqlite3", "file:"+dbPath+"?mode=ro") + return sql.Open("sqlite3", sqliteConnString(dbPath, true)) } diff --git a/cmd/bd/doctor/git_hygiene_test.go b/cmd/bd/doctor/git_hygiene_test.go new file mode 100644 index 00000000..8b9fefba --- /dev/null +++ b/cmd/bd/doctor/git_hygiene_test.go @@ -0,0 +1,176 @@ +package doctor + +import ( + "os" + "os/exec" + "path/filepath" + "strings" + "testing" +) + +func mkTmpDirInTmp(t *testing.T, prefix string) string { + t.Helper() + dir, err := os.MkdirTemp("/tmp", prefix) + if err != nil { + // Fallback for platforms without /tmp (e.g. Windows). + dir, err = os.MkdirTemp("", prefix) + if err != nil { + t.Fatalf("failed to create temp dir: %v", err) + } + } + t.Cleanup(func() { _ = os.RemoveAll(dir) }) + return dir +} + +func runGit(t *testing.T, dir string, args ...string) string { + t.Helper() + cmd := exec.Command("git", args...) 
+ cmd.Dir = dir + out, err := cmd.CombinedOutput() + if err != nil { + t.Fatalf("git %v failed: %v\n%s", args, err, string(out)) + } + return string(out) +} + +func initRepo(t *testing.T, dir string, branch string) { + t.Helper() + _ = os.MkdirAll(filepath.Join(dir, ".beads"), 0755) + runGit(t, dir, "init", "-b", branch) + runGit(t, dir, "config", "user.email", "test@test.com") + runGit(t, dir, "config", "user.name", "Test User") +} + +func commitFile(t *testing.T, dir, name, content, msg string) { + t.Helper() + path := filepath.Join(dir, name) + if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { + t.Fatalf("mkdir: %v", err) + } + if err := os.WriteFile(path, []byte(content), 0644); err != nil { + t.Fatalf("write file: %v", err) + } + runGit(t, dir, "add", name) + runGit(t, dir, "commit", "-m", msg) +} + +func TestCheckGitWorkingTree(t *testing.T) { + t.Run("not a git repo", func(t *testing.T) { + dir := mkTmpDirInTmp(t, "bd-git-nt-*") + check := CheckGitWorkingTree(dir) + if check.Status != StatusOK { + t.Fatalf("status=%q want %q", check.Status, StatusOK) + } + if !strings.Contains(check.Message, "N/A") { + t.Fatalf("message=%q want N/A", check.Message) + } + }) + + t.Run("clean", func(t *testing.T) { + dir := mkTmpDirInTmp(t, "bd-git-clean-*") + initRepo(t, dir, "main") + commitFile(t, dir, "README.md", "# test\n", "initial") + + check := CheckGitWorkingTree(dir) + if check.Status != StatusOK { + t.Fatalf("status=%q want %q (msg=%q)", check.Status, StatusOK, check.Message) + } + }) + + t.Run("dirty", func(t *testing.T) { + dir := mkTmpDirInTmp(t, "bd-git-dirty-*") + initRepo(t, dir, "main") + commitFile(t, dir, "README.md", "# test\n", "initial") + if err := os.WriteFile(filepath.Join(dir, "dirty.txt"), []byte("x"), 0644); err != nil { + t.Fatalf("write dirty file: %v", err) + } + + check := CheckGitWorkingTree(dir) + if check.Status != StatusWarning { + t.Fatalf("status=%q want %q (msg=%q)", check.Status, StatusWarning, check.Message) + } + }) +} + 
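The upstream checks exercised by the tests below reduce to classifying two `git rev-list --count` results: `@{u}..HEAD` (ahead) and `HEAD..@{u}` (behind). A condensed sketch of that decision table, using an illustrative name (`classifyUpstream` is not part of the bd codebase):

```go
package main

import "fmt"

// classifyUpstream mirrors the ahead/behind branching in CheckGitUpstream
// from the diff above: both zero means in sync, one-sided counts mean a
// plain push or pull is needed, and both non-zero means the branch diverged.
func classifyUpstream(ahead, behind int) string {
	switch {
	case ahead == 0 && behind == 0:
		return "up to date"
	case ahead > 0 && behind == 0:
		return fmt.Sprintf("ahead by %d", ahead)
	case behind > 0 && ahead == 0:
		return fmt.Sprintf("behind by %d", behind)
	default:
		return fmt.Sprintf("diverged (ahead %d, behind %d)", ahead, behind)
	}
}

func main() {
	fmt.Println(classifyUpstream(0, 0)) // up to date
	fmt.Println(classifyUpstream(2, 0)) // ahead by 2
	fmt.Println(classifyUpstream(1, 3)) // diverged (ahead 1, behind 3)
}
```

The four cases map one-to-one onto the test subtests that follow (no upstream and detached HEAD are handled earlier, before any counting).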
+func TestCheckGitUpstream(t *testing.T) { + t.Run("no upstream", func(t *testing.T) { + dir := mkTmpDirInTmp(t, "bd-git-up-*") + initRepo(t, dir, "main") + commitFile(t, dir, "README.md", "# test\n", "initial") + + check := CheckGitUpstream(dir) + if check.Status != StatusWarning { + t.Fatalf("status=%q want %q (msg=%q)", check.Status, StatusWarning, check.Message) + } + if !strings.Contains(check.Message, "No upstream") { + t.Fatalf("message=%q want to mention upstream", check.Message) + } + }) + + t.Run("up to date", func(t *testing.T) { + dir := mkTmpDirInTmp(t, "bd-git-up2-*") + remote := mkTmpDirInTmp(t, "bd-git-remote-*") + runGit(t, remote, "init", "--bare", "--initial-branch=main") + + initRepo(t, dir, "main") + commitFile(t, dir, "README.md", "# test\n", "initial") + runGit(t, dir, "remote", "add", "origin", remote) + runGit(t, dir, "push", "-u", "origin", "main") + + check := CheckGitUpstream(dir) + if check.Status != StatusOK { + t.Fatalf("status=%q want %q (msg=%q)", check.Status, StatusOK, check.Message) + } + }) + + t.Run("ahead of upstream", func(t *testing.T) { + dir := mkTmpDirInTmp(t, "bd-git-ahead-*") + remote := mkTmpDirInTmp(t, "bd-git-remote2-*") + runGit(t, remote, "init", "--bare", "--initial-branch=main") + + initRepo(t, dir, "main") + commitFile(t, dir, "README.md", "# test\n", "initial") + runGit(t, dir, "remote", "add", "origin", remote) + runGit(t, dir, "push", "-u", "origin", "main") + + commitFile(t, dir, "file2.txt", "x", "local commit") + + check := CheckGitUpstream(dir) + if check.Status != StatusWarning { + t.Fatalf("status=%q want %q (msg=%q)", check.Status, StatusWarning, check.Message) + } + if !strings.Contains(check.Message, "Ahead") { + t.Fatalf("message=%q want to mention ahead", check.Message) + } + }) + + t.Run("behind upstream", func(t *testing.T) { + dir := mkTmpDirInTmp(t, "bd-git-behind-*") + remote := mkTmpDirInTmp(t, "bd-git-remote3-*") + runGit(t, remote, "init", "--bare", "--initial-branch=main") + + initRepo(t, 
dir, "main") + commitFile(t, dir, "README.md", "# test\n", "initial") + runGit(t, dir, "remote", "add", "origin", remote) + runGit(t, dir, "push", "-u", "origin", "main") + + // Advance remote via another clone. + clone := mkTmpDirInTmp(t, "bd-git-clone-*") + runGit(t, clone, "clone", remote, ".") + runGit(t, clone, "config", "user.email", "test@test.com") + runGit(t, clone, "config", "user.name", "Test User") + commitFile(t, clone, "remote.txt", "y", "remote commit") + runGit(t, clone, "push", "origin", "main") + + // Update tracking refs. + runGit(t, dir, "fetch", "origin") + + check := CheckGitUpstream(dir) + if check.Status != StatusWarning { + t.Fatalf("status=%q want %q (msg=%q)", check.Status, StatusWarning, check.Message) + } + if !strings.Contains(check.Message, "Behind") { + t.Fatalf("message=%q want to mention behind", check.Message) + } + }) +} diff --git a/cmd/bd/doctor/installation.go b/cmd/bd/doctor/installation.go index c5b94eeb..478c1638 100644 --- a/cmd/bd/doctor/installation.go +++ b/cmd/bd/doctor/installation.go @@ -106,7 +106,7 @@ func CheckPermissions(path string) DoctorCheck { dbPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName) if _, err := os.Stat(dbPath); err == nil { // Try to open database - db, err := sql.Open("sqlite3", dbPath) + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { return DoctorCheck{ Name: "Permissions", @@ -118,7 +118,7 @@ func CheckPermissions(path string) DoctorCheck { _ = db.Close() // Intentionally ignore close error // Try a write test - db, err = sql.Open("sqlite", dbPath) + db, err = sql.Open("sqlite", sqliteConnString(dbPath, true)) if err == nil { _, err = db.Exec("SELECT 1") _ = db.Close() // Intentionally ignore close error diff --git a/cmd/bd/doctor/integrity.go b/cmd/bd/doctor/integrity.go index df9c3375..35aecabc 100644 --- a/cmd/bd/doctor/integrity.go +++ b/cmd/bd/doctor/integrity.go @@ -51,7 +51,7 @@ func CheckIDFormat(path string) DoctorCheck { } // Open database - 
db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro") + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { return DoctorCheck{ Name: "Issue IDs", @@ -121,7 +121,7 @@ func CheckDependencyCycles(path string) DoctorCheck { } // Open database to check for cycles - db, err := sql.Open("sqlite3", dbPath) + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { return DoctorCheck{ Name: "Dependency Cycles", @@ -216,7 +216,7 @@ func CheckTombstones(path string) DoctorCheck { } } - db, err := sql.Open("sqlite3", dbPath) + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { return DoctorCheck{ Name: "Tombstones", @@ -420,7 +420,7 @@ func CheckRepoFingerprint(path string) DoctorCheck { } // Open database - db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro") + db, err := sql.Open("sqlite3", sqliteConnString(dbPath, true)) if err != nil { return DoctorCheck{ Name: "Repo Fingerprint", diff --git a/cmd/bd/doctor/jsonl_integrity.go b/cmd/bd/doctor/jsonl_integrity.go new file mode 100644 index 00000000..1c84f862 --- /dev/null +++ b/cmd/bd/doctor/jsonl_integrity.go @@ -0,0 +1,123 @@ +package doctor + +import ( + "bufio" + "encoding/json" + "fmt" + "os" + "path/filepath" + "strings" + + "github.com/steveyegge/beads/internal/beads" + "github.com/steveyegge/beads/internal/configfile" + "github.com/steveyegge/beads/internal/utils" +) + +func CheckJSONLIntegrity(path string) DoctorCheck { + beadsDir := filepath.Join(path, ".beads") + + // Resolve JSONL path. + jsonlPath := "" + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil { + if cfg.JSONLExport != "" && !isSystemJSONLFilename(cfg.JSONLExport) { + p := cfg.JSONLPath(beadsDir) + if _, err := os.Stat(p); err == nil { + jsonlPath = p + } + } + } + if jsonlPath == "" { + // Fall back to a best-effort discovery within .beads/. 
+ p := utils.FindJSONLInDir(beadsDir) + if _, err := os.Stat(p); err == nil { + jsonlPath = p + } + } + if jsonlPath == "" { + return DoctorCheck{Name: "JSONL Integrity", Status: StatusOK, Message: "N/A (no JSONL file)"} + } + + // Best-effort scan for malformed lines. + f, err := os.Open(jsonlPath) // #nosec G304 -- jsonlPath is within the workspace + if err != nil { + return DoctorCheck{ + Name: "JSONL Integrity", + Status: StatusWarning, + Message: "Unable to read JSONL file", + Detail: err.Error(), + } + } + defer f.Close() + + var malformed int + var examples []string + scanner := bufio.NewScanner(f) + lineNo := 0 + for scanner.Scan() { + lineNo++ + line := strings.TrimSpace(scanner.Text()) + if line == "" { + continue + } + var v struct { + ID string `json:"id"` + } + if err := json.Unmarshal([]byte(line), &v); err != nil || v.ID == "" { + malformed++ + if len(examples) < 5 { + if err != nil { + examples = append(examples, fmt.Sprintf("line %d: %v", lineNo, err)) + } else { + examples = append(examples, fmt.Sprintf("line %d: missing id", lineNo)) + } + } + } + } + if err := scanner.Err(); err != nil { + return DoctorCheck{ + Name: "JSONL Integrity", + Status: StatusWarning, + Message: "Unable to scan JSONL file", + Detail: err.Error(), + } + } + if malformed == 0 { + return DoctorCheck{ + Name: "JSONL Integrity", + Status: StatusOK, + Message: fmt.Sprintf("%s looks valid", filepath.Base(jsonlPath)), + } + } + + // If we have a database, we can auto-repair by re-exporting from DB. 
+ dbPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName) + if cfg, err := configfile.Load(beadsDir); err == nil && cfg != nil && cfg.Database != "" { + dbPath = cfg.DatabasePath(beadsDir) + } + if _, err := os.Stat(dbPath); os.IsNotExist(err) { + return DoctorCheck{ + Name: "JSONL Integrity", + Status: StatusError, + Message: fmt.Sprintf("%s has %d malformed line(s)", filepath.Base(jsonlPath), malformed), + Detail: strings.Join(examples, "\n"), + Fix: "Restore the JSONL file from git or from a backup (no database available for auto-repair).", + } + } + + return DoctorCheck{ + Name: "JSONL Integrity", + Status: StatusError, + Message: fmt.Sprintf("%s has %d malformed line(s)", filepath.Base(jsonlPath), malformed), + Detail: strings.Join(examples, "\n"), + Fix: "Run 'bd doctor --fix' to back up the JSONL and regenerate it from the database.", + } +} + +func isSystemJSONLFilename(name string) bool { + switch name { + case "deletions.jsonl", "interactions.jsonl", "molecules.jsonl": + return true + default: + return false + } +} diff --git a/cmd/bd/doctor/jsonl_integrity_test.go b/cmd/bd/doctor/jsonl_integrity_test.go new file mode 100644 index 00000000..772e16a5 --- /dev/null +++ b/cmd/bd/doctor/jsonl_integrity_test.go @@ -0,0 +1,43 @@ +package doctor + +import ( + "os" + "path/filepath" + "testing" +) + +func TestCheckJSONLIntegrity_MalformedLine(t *testing.T) { + ws := t.TempDir() + beadsDir := filepath.Join(ws, ".beads") + if err := os.MkdirAll(beadsDir, 0755); err != nil { + t.Fatal(err) + } + jsonlPath := filepath.Join(beadsDir, "issues.jsonl") + if err := os.WriteFile(jsonlPath, []byte("{\"id\":\"t-1\"}\n{not json}\n"), 0644); err != nil { + t.Fatal(err) + } + // Ensure DB exists so check suggests auto-repair. 
+ if err := os.WriteFile(filepath.Join(beadsDir, "beads.db"), []byte("x"), 0644); err != nil { + t.Fatal(err) + } + + check := CheckJSONLIntegrity(ws) + if check.Status != StatusError { + t.Fatalf("expected StatusError, got %v (%s)", check.Status, check.Message) + } + if check.Fix == "" { + t.Fatalf("expected Fix guidance") + } +} + +func TestCheckJSONLIntegrity_NoJSONL(t *testing.T) { + ws := t.TempDir() + beadsDir := filepath.Join(ws, ".beads") + if err := os.MkdirAll(beadsDir, 0755); err != nil { + t.Fatal(err) + } + check := CheckJSONLIntegrity(ws) + if check.Status != StatusOK { + t.Fatalf("expected StatusOK, got %v (%s)", check.Status, check.Message) + } +} diff --git a/cmd/bd/doctor/legacy.go b/cmd/bd/doctor/legacy.go index 1a5a59c6..078bdf4c 100644 --- a/cmd/bd/doctor/legacy.go +++ b/cmd/bd/doctor/legacy.go @@ -53,7 +53,7 @@ func CheckLegacyBeadsSlashCommands(repoPath string) DoctorCheck { Name: "Legacy Commands", Status: "warning", Message: fmt.Sprintf("Old beads integration detected in %s", strings.Join(filesWithLegacyCommands, ", ")), - Detail: "Found: /beads:* slash command references (deprecated)\n" + + Detail: "Found: /beads:* slash command references (deprecated)\n" + " These commands are token-inefficient (~10.5k tokens per session)", Fix: "Migrate to bd prime hooks for better token efficiency:\n" + "\n" + @@ -104,7 +104,7 @@ func CheckAgentDocumentation(repoPath string) DoctorCheck { Name: "Agent Documentation", Status: "warning", Message: "No agent documentation found", - Detail: "Missing: AGENTS.md or CLAUDE.md\n" + + Detail: "Missing: AGENTS.md or CLAUDE.md\n" + " Documenting workflow helps AI agents work more effectively", Fix: "Add agent documentation:\n" + " • Run 'bd onboard' to create AGENTS.md with workflow guidance\n" + @@ -187,7 +187,7 @@ func CheckLegacyJSONLFilename(repoPath string) DoctorCheck { Name: "JSONL Files", Status: "warning", Message: fmt.Sprintf("Multiple JSONL files found: %s", strings.Join(realJSONLFiles, ", ")), - Detail: 
"Having multiple JSONL files can cause sync and merge conflicts.\n" + + Detail: "Having multiple JSONL files can cause sync and merge conflicts.\n" + " Only one JSONL file should be used per repository.", Fix: "Determine which file is current and remove the others:\n" + " 1. Check .beads/metadata.json for 'jsonl_export' setting\n" + @@ -235,7 +235,7 @@ func CheckLegacyJSONLConfig(repoPath string) DoctorCheck { Name: "JSONL Config", Status: "warning", Message: "Using legacy beads.jsonl filename", - Detail: "The canonical filename is now issues.jsonl (bd-6xd).\n" + + Detail: "The canonical filename is now issues.jsonl (bd-6xd).\n" + " Legacy beads.jsonl is still supported but should be migrated.", Fix: "Run 'bd doctor --fix' to auto-migrate, or manually:\n" + " 1. git mv .beads/beads.jsonl .beads/issues.jsonl\n" + @@ -251,7 +251,7 @@ func CheckLegacyJSONLConfig(repoPath string) DoctorCheck { Status: "warning", Message: "Config references beads.jsonl but issues.jsonl exists", Detail: "metadata.json says beads.jsonl but the actual file is issues.jsonl", - Fix: "Run 'bd doctor --fix' to update the configuration", + Fix: "Run 'bd doctor --fix' to update the configuration", } } } @@ -303,6 +303,16 @@ func CheckDatabaseConfig(repoPath string) DoctorCheck { // Check if configured JSONL exists if cfg.JSONLExport != "" { + if cfg.JSONLExport == "deletions.jsonl" || cfg.JSONLExport == "interactions.jsonl" || cfg.JSONLExport == "molecules.jsonl" { + return DoctorCheck{ + Name: "Database Config", + Status: "error", + Message: fmt.Sprintf("Invalid jsonl_export %q (system file)", cfg.JSONLExport), + Detail: "metadata.json jsonl_export must reference the git-tracked issues export (typically issues.jsonl), not a system log file.", + Fix: "Run 'bd doctor --fix' to reset metadata.json jsonl_export to issues.jsonl, then commit the change.", + } + } + jsonlPath := cfg.JSONLPath(beadsDir) if _, err := os.Stat(jsonlPath); os.IsNotExist(err) { // Check if other .jsonl files exist @@ -315,7 
+325,15 @@ func CheckDatabaseConfig(repoPath string) DoctorCheck { lowerName := strings.ToLower(name) if !strings.Contains(lowerName, "backup") && !strings.Contains(lowerName, ".orig") && - !strings.Contains(lowerName, ".bak") { + !strings.Contains(lowerName, ".bak") && + !strings.Contains(lowerName, "~") && + !strings.HasPrefix(lowerName, "backup_") && + name != "deletions.jsonl" && + name != "interactions.jsonl" && + name != "molecules.jsonl" && + !strings.Contains(lowerName, ".base.jsonl") && + !strings.Contains(lowerName, ".left.jsonl") && + !strings.Contains(lowerName, ".right.jsonl") { otherJSONLs = append(otherJSONLs, name) } } @@ -421,7 +439,7 @@ func CheckFreshClone(repoPath string) DoctorCheck { Name: "Fresh Clone", Status: "warning", Message: fmt.Sprintf("Fresh clone detected (%d issues in %s, no database)", issueCount, jsonlName), - Detail: "This appears to be a freshly cloned repository.\n" + + Detail: "This appears to be a freshly cloned repository.\n" + " The JSONL file contains issues but no local database exists.\n" + " Run 'bd init' to create the database and import existing issues.", Fix: fmt.Sprintf("Run '%s' to initialize the database and import issues", fixCmd), diff --git a/cmd/bd/doctor/legacy_test.go b/cmd/bd/doctor/legacy_test.go index 241c9d75..9c5fb49d 100644 --- a/cmd/bd/doctor/legacy_test.go +++ b/cmd/bd/doctor/legacy_test.go @@ -410,6 +410,49 @@ func TestCheckLegacyJSONLConfig(t *testing.T) { } } +func TestCheckDatabaseConfig_IgnoresSystemJSONLs(t *testing.T) { + tmpDir := t.TempDir() + beadsDir := filepath.Join(tmpDir, ".beads") + if err := os.Mkdir(beadsDir, 0750); err != nil { + t.Fatal(err) + } + + // Configure issues.jsonl, but only create interactions.jsonl. 
+	metadataPath := filepath.Join(beadsDir, "metadata.json")
+	if err := os.WriteFile(metadataPath, []byte(`{"database":"beads.db","jsonl_export":"issues.jsonl"}`), 0644); err != nil {
+		t.Fatal(err)
+	}
+	if err := os.WriteFile(filepath.Join(beadsDir, "interactions.jsonl"), []byte(`{"id":"x"}`), 0644); err != nil {
+		t.Fatal(err)
+	}
+
+	check := CheckDatabaseConfig(tmpDir)
+	if check.Status != "ok" {
+		t.Fatalf("expected ok, got %s: %s\n%s", check.Status, check.Message, check.Detail)
+	}
+}
+
+func TestCheckDatabaseConfig_SystemJSONLExportIsError(t *testing.T) {
+	tmpDir := t.TempDir()
+	beadsDir := filepath.Join(tmpDir, ".beads")
+	if err := os.Mkdir(beadsDir, 0750); err != nil {
+		t.Fatal(err)
+	}
+
+	metadataPath := filepath.Join(beadsDir, "metadata.json")
+	if err := os.WriteFile(metadataPath, []byte(`{"database":"beads.db","jsonl_export":"interactions.jsonl"}`), 0644); err != nil {
+		t.Fatal(err)
+	}
+	if err := os.WriteFile(filepath.Join(beadsDir, "interactions.jsonl"), []byte(`{"id":"x"}`), 0644); err != nil {
+		t.Fatal(err)
+	}
+
+	check := CheckDatabaseConfig(tmpDir)
+	if check.Status != "error" {
+		t.Fatalf("expected error, got %s: %s", check.Status, check.Message)
+	}
+}
+
 func TestCheckFreshClone(t *testing.T) {
 	tests := []struct {
 		name string
diff --git a/cmd/bd/doctor/sqlite_open.go b/cmd/bd/doctor/sqlite_open.go
new file mode 100644
index 00000000..da982233
--- /dev/null
+++ b/cmd/bd/doctor/sqlite_open.go
@@ -0,0 +1,54 @@
+package doctor
+
+import (
+	"fmt"
+	"os"
+	"strings"
+	"time"
+)
+
+func sqliteConnString(path string, readOnly bool) string {
+	path = strings.TrimSpace(path)
+	if path == "" {
+		return ""
+	}
+
+	// Best-effort: honor the same env var viper uses (BD_LOCK_TIMEOUT).
+	busy := 30 * time.Second
+	if v := strings.TrimSpace(os.Getenv("BD_LOCK_TIMEOUT")); v != "" {
+		if d, err := time.ParseDuration(v); err == nil {
+			busy = d
+		}
+	}
+	busyMs := int64(busy / time.Millisecond)
+
+	// If it's already a URI, append pragmas if absent.
+	if strings.HasPrefix(path, "file:") {
+		conn := path
+		sep := "?"
+		if strings.Contains(conn, "?") {
+			sep = "&"
+		}
+		if readOnly && !strings.Contains(conn, "mode=") {
+			conn += sep + "mode=ro"
+			sep = "&"
+		}
+		if !strings.Contains(conn, "_pragma=busy_timeout") {
+			conn += fmt.Sprintf("%s_pragma=busy_timeout(%d)", sep, busyMs)
+			sep = "&"
+		}
+		if !strings.Contains(conn, "_pragma=foreign_keys") {
+			conn += sep + "_pragma=foreign_keys(ON)"
+			sep = "&"
+		}
+		if !strings.Contains(conn, "_time_format=") {
+			conn += sep + "_time_format=sqlite"
+		}
+		return conn
+	}
+
+	if readOnly {
+		return fmt.Sprintf("file:%s?mode=ro&_pragma=foreign_keys(ON)&_pragma=busy_timeout(%d)&_time_format=sqlite", path, busyMs)
+	}
+	return fmt.Sprintf("file:%s?_pragma=foreign_keys(ON)&_pragma=busy_timeout(%d)&_time_format=sqlite", path, busyMs)
+}
diff --git a/cmd/bd/doctor_repair_chaos_test.go b/cmd/bd/doctor_repair_chaos_test.go
new file mode 100644
index 00000000..5af6ffd3
--- /dev/null
+++ b/cmd/bd/doctor_repair_chaos_test.go
@@ -0,0 +1,378 @@
+//go:build chaos
+
+package main
+
+import (
+	"bytes"
+	"context"
+	"database/sql"
+	"io"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"strings"
+	"testing"
+	"time"
+
+	_ "github.com/ncruces/go-sqlite3/driver"
+)
+
+func TestDoctorRepair_CorruptDatabase_NotADatabase_RebuildFromJSONL(t *testing.T) {
+	bdExe := buildBDForTest(t)
+	ws := mkTmpDirInTmp(t, "bd-doctor-chaos-*")
+	dbPath := filepath.Join(ws, ".beads", "beads.db")
+	jsonlPath := filepath.Join(ws, ".beads", "issues.jsonl")
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "init", "--prefix", "chaos", "--quiet"); err != nil {
+		t.Fatalf("bd init failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "create", "Chaos issue", "-p", "1"); err != nil {
+		t.Fatalf("bd create failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "export", "-o", jsonlPath, "--force"); err != nil {
+		t.Fatalf("bd export failed: %v", err)
+	}
+
+	// Make the DB unreadable.
+	if err := os.WriteFile(dbPath, []byte("not a database"), 0644); err != nil {
+		t.Fatalf("corrupt db: %v", err)
+	}
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--fix", "--yes"); err != nil {
+		t.Fatalf("bd doctor --fix failed: %v", err)
+	}
+
+	if out, err := runBDSideDB(t, bdExe, ws, dbPath, "doctor"); err != nil {
+		t.Fatalf("bd doctor after fix failed: %v\n%s", err, out)
+	}
+}
+
+func TestDoctorRepair_CorruptDatabase_NoJSONL_FixFails(t *testing.T) {
+	bdExe := buildBDForTest(t)
+	ws := mkTmpDirInTmp(t, "bd-doctor-chaos-nojsonl-*")
+	dbPath := filepath.Join(ws, ".beads", "beads.db")
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "init", "--prefix", "chaos", "--quiet"); err != nil {
+		t.Fatalf("bd init failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "create", "Chaos issue", "-p", "1"); err != nil {
+		t.Fatalf("bd create failed: %v", err)
+	}
+
+	// Some workflows keep JSONL in sync automatically; force it to be missing.
+	_ = os.Remove(filepath.Join(ws, ".beads", "issues.jsonl"))
+	_ = os.Remove(filepath.Join(ws, ".beads", "beads.jsonl"))
+
+	// Corrupt without providing JSONL source-of-truth.
+	if err := os.Truncate(dbPath, 64); err != nil {
+		t.Fatalf("truncate db: %v", err)
+	}
+
+	out, err := runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--fix", "--yes")
+	if err == nil {
+		t.Fatalf("expected bd doctor --fix to fail without JSONL")
+	}
+	if !strings.Contains(out, "cannot auto-recover") {
+		t.Fatalf("expected auto-recover error, got:\n%s", out)
+	}
+
+	// Ensure we don't mis-configure jsonl_export to a system file during failure.
+	metadata, readErr := os.ReadFile(filepath.Join(ws, ".beads", "metadata.json"))
+	if readErr == nil {
+		if strings.Contains(string(metadata), "interactions.jsonl") {
+			t.Fatalf("unexpected metadata.json jsonl_export set to interactions.jsonl:\n%s", string(metadata))
+		}
+	}
+}
+
+func TestDoctorRepair_CorruptDatabase_BacksUpSidecars(t *testing.T) {
+	bdExe := buildBDForTest(t)
+	ws := mkTmpDirInTmp(t, "bd-doctor-chaos-sidecars-*")
+	dbPath := filepath.Join(ws, ".beads", "beads.db")
+	jsonlPath := filepath.Join(ws, ".beads", "issues.jsonl")
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "init", "--prefix", "chaos", "--quiet"); err != nil {
+		t.Fatalf("bd init failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "create", "Chaos issue", "-p", "1"); err != nil {
+		t.Fatalf("bd create failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "export", "-o", jsonlPath, "--force"); err != nil {
+		t.Fatalf("bd export failed: %v", err)
+	}
+
+	// Ensure sidecars exist so we can verify they get moved with the backup.
+	for _, suffix := range []string{"-wal", "-shm", "-journal"} {
+		if err := os.WriteFile(dbPath+suffix, []byte("x"), 0644); err != nil {
+			t.Fatalf("write sidecar %s: %v", suffix, err)
+		}
+	}
+	if err := os.Truncate(dbPath, 64); err != nil {
+		t.Fatalf("truncate db: %v", err)
+	}
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--fix", "--yes"); err != nil {
+		t.Fatalf("bd doctor --fix failed: %v", err)
+	}
+
+	// Verify a backup exists, and at least one sidecar got moved.
+	entries, err := os.ReadDir(filepath.Join(ws, ".beads"))
+	if err != nil {
+		t.Fatalf("readdir: %v", err)
+	}
+	var backup string
+	for _, e := range entries {
+		if strings.Contains(e.Name(), ".corrupt.backup.db") {
+			backup = filepath.Join(ws, ".beads", e.Name())
+			break
+		}
+	}
+	if backup == "" {
+		t.Fatalf("expected backup db in .beads, found none")
+	}
+
+	wal := backup + "-wal"
+	if _, err := os.Stat(wal); err != nil {
+		// At minimum, the backup DB itself should exist; sidecar backup is best-effort.
+		if _, err2 := os.Stat(backup); err2 != nil {
+			t.Fatalf("backup db missing: %v", err2)
+		}
+	}
+}
+
+func TestDoctorRepair_CorruptDatabase_WithRunningDaemon_FixSucceeds(t *testing.T) {
+	bdExe := buildBDForTest(t)
+	ws := mkTmpDirInTmp(t, "bd-doctor-chaos-daemon-*")
+	dbPath := filepath.Join(ws, ".beads", "beads.db")
+	jsonlPath := filepath.Join(ws, ".beads", "issues.jsonl")
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "init", "--prefix", "chaos", "--quiet"); err != nil {
+		t.Fatalf("bd init failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "create", "Chaos issue", "-p", "1"); err != nil {
+		t.Fatalf("bd create failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "export", "-o", jsonlPath, "--force"); err != nil {
+		t.Fatalf("bd export failed: %v", err)
+	}
+
+	cmd := startDaemonForChaosTest(t, bdExe, ws, dbPath)
+	defer func() {
+		if cmd.Process != nil && (cmd.ProcessState == nil || !cmd.ProcessState.Exited()) {
+			_ = cmd.Process.Kill()
+			_, _ = cmd.Process.Wait()
+		}
+	}()
+
+	// Corrupt the DB.
+	if err := os.WriteFile(dbPath, []byte("not a database"), 0644); err != nil {
+		t.Fatalf("corrupt db: %v", err)
+	}
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--fix", "--yes"); err != nil {
+		t.Fatalf("bd doctor --fix failed: %v", err)
+	}
+
+	// Ensure we can cleanly stop the daemon afterwards (repair shouldn't wedge it).
+	if cmd.Process != nil {
+		_ = cmd.Process.Kill()
+		done := make(chan error, 1)
+		go func() { done <- cmd.Wait() }()
+		select {
+		case <-time.After(3 * time.Second):
+			t.Fatalf("expected daemon to exit when killed")
+		case <-done:
+			// ok
+		}
+	}
+}
+
+func TestDoctorRepair_JSONLIntegrity_MalformedLine_ReexportFromDB(t *testing.T) {
+	bdExe := buildBDForTest(t)
+	ws := mkTmpDirInTmp(t, "bd-doctor-chaos-jsonl-*")
+	dbPath := filepath.Join(ws, ".beads", "beads.db")
+	jsonlPath := filepath.Join(ws, ".beads", "issues.jsonl")
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "init", "--prefix", "chaos", "--quiet"); err != nil {
+		t.Fatalf("bd init failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "create", "Chaos issue", "-p", "1"); err != nil {
+		t.Fatalf("bd create failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "export", "-o", jsonlPath, "--force"); err != nil {
+		t.Fatalf("bd export failed: %v", err)
+	}
+
+	// Corrupt JSONL (leave DB intact).
+	f, err := os.OpenFile(jsonlPath, os.O_APPEND|os.O_WRONLY, 0644)
+	if err != nil {
+		t.Fatalf("open jsonl: %v", err)
+	}
+	if _, err := f.WriteString("{not json}\n"); err != nil {
+		_ = f.Close()
+		t.Fatalf("append corrupt jsonl: %v", err)
+	}
+	_ = f.Close()
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--fix", "--yes"); err != nil {
+		t.Fatalf("bd doctor --fix failed: %v", err)
+	}
+
+	data, err := os.ReadFile(jsonlPath)
+	if err != nil {
+		t.Fatalf("read jsonl: %v", err)
+	}
+	if strings.Contains(string(data), "{not json}") {
+		t.Fatalf("expected JSONL to be regenerated without corrupt line")
+	}
+}
+
+func TestDoctorRepair_DatabaseIntegrity_DBWriteLocked_ImportFailsFast(t *testing.T) {
+	bdExe := buildBDForTest(t)
+	ws := mkTmpDirInTmp(t, "bd-doctor-chaos-db-locked-*")
+	dbPath := filepath.Join(ws, ".beads", "beads.db")
+	jsonlPath := filepath.Join(ws, ".beads", "issues.jsonl")
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "init", "--prefix", "chaos", "--quiet"); err != nil {
+		t.Fatalf("bd init failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "create", "Chaos issue", "-p", "1"); err != nil {
+		t.Fatalf("bd create failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "export", "-o", jsonlPath, "--force"); err != nil {
+		t.Fatalf("bd export failed: %v", err)
+	}
+
+	// Lock the DB for writes in-process.
+	db, err := sql.Open("sqlite3", dbPath)
+	if err != nil {
+		t.Fatalf("open db: %v", err)
+	}
+	defer db.Close()
+	tx, err := db.Begin()
+	if err != nil {
+		t.Fatalf("begin tx: %v", err)
+	}
+	if _, err := tx.Exec("INSERT INTO issues (id, title, status) VALUES ('lock-test', 'Lock Test', 'open')"); err != nil {
+		_ = tx.Rollback()
+		t.Fatalf("insert lock row: %v", err)
+	}
+	defer func() { _ = tx.Rollback() }()
+
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+	out, err := runBDWithEnv(ctx, bdExe, ws, dbPath, map[string]string{
+		"BD_LOCK_TIMEOUT": "200ms",
+	}, "import", "-i", jsonlPath, "--force", "--skip-existing", "--no-git-history")
+	if err == nil {
+		t.Fatalf("expected bd import to fail under DB write lock")
+	}
+	if ctx.Err() == context.DeadlineExceeded {
+		t.Fatalf("import exceeded timeout (likely hung); output:\n%s", out)
+	}
+	low := strings.ToLower(out)
+	if !strings.Contains(low, "locked") && !strings.Contains(low, "busy") && !strings.Contains(low, "timeout") {
+		t.Fatalf("expected lock/busy/timeout error, got:\n%s", out)
+	}
+}
+
+func TestDoctorRepair_CorruptDatabase_ReadOnlyBeadsDir_PermissionsFixMakesWritable(t *testing.T) {
+	bdExe := buildBDForTest(t)
+	ws := mkTmpDirInTmp(t, "bd-doctor-chaos-readonly-*")
+	beadsDir := filepath.Join(ws, ".beads")
+	dbPath := filepath.Join(beadsDir, "beads.db")
+	jsonlPath := filepath.Join(beadsDir, "issues.jsonl")
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "init", "--prefix", "chaos", "--quiet"); err != nil {
+		t.Fatalf("bd init failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "create", "Chaos issue", "-p", "1"); err != nil {
+		t.Fatalf("bd create failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "export", "-o", jsonlPath, "--force"); err != nil {
+		t.Fatalf("bd export failed: %v", err)
+	}
+
+	// Corrupt the DB.
+	if err := os.Truncate(dbPath, 64); err != nil {
+		t.Fatalf("truncate db: %v", err)
+	}
+
+	// Make .beads read-only; the Permissions fix should make it writable again.
+	if err := os.Chmod(beadsDir, 0555); err != nil {
+		t.Fatalf("chmod beads dir: %v", err)
+	}
+	t.Cleanup(func() { _ = os.Chmod(beadsDir, 0755) })
+
+	if out, err := runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--fix", "--yes"); err != nil {
+		t.Fatalf("expected bd doctor --fix to succeed (permissions auto-fix), got: %v\n%s", err, out)
+	}
+	info, err := os.Stat(beadsDir)
+	if err != nil {
+		t.Fatalf("stat beads dir: %v", err)
+	}
+	if info.Mode().Perm()&0200 == 0 {
+		t.Fatalf("expected .beads to be writable after permissions fix, mode=%v", info.Mode().Perm())
+	}
+}
+
+func startDaemonForChaosTest(t *testing.T, bdExe, ws, dbPath string) *exec.Cmd {
+	t.Helper()
+	cmd := exec.Command(bdExe, "--db", dbPath, "daemon", "--start", "--foreground", "--local", "--interval", "10m")
+	cmd.Dir = ws
+	var stdout, stderr bytes.Buffer
+	cmd.Stdout = &stdout
+	cmd.Stderr = &stderr
+
+	// Inherit environment, but explicitly ensure daemon mode is allowed.
+	env := make([]string, 0, len(os.Environ())+1)
+	for _, e := range os.Environ() {
+		if strings.HasPrefix(e, "BEADS_NO_DAEMON=") {
+			continue
+		}
+		env = append(env, e)
+	}
+	cmd.Env = env
+
+	if err := cmd.Start(); err != nil {
+		t.Fatalf("start daemon: %v", err)
+	}
+
+	// Wait for socket to appear.
+	sock := filepath.Join(ws, ".beads", "bd.sock")
+	deadline := time.Now().Add(8 * time.Second)
+	for time.Now().Before(deadline) {
+		if _, err := os.Stat(sock); err == nil {
+			// Put the process back into the caller's control.
+			cmd.Stdout = io.Discard
+			cmd.Stderr = io.Discard
+			return cmd
+		}
+		time.Sleep(50 * time.Millisecond)
+	}
+
+	_ = cmd.Process.Kill()
+	_ = cmd.Wait()
+	t.Fatalf("daemon failed to start (no socket: %s)\nstdout:\n%s\nstderr:\n%s", sock, stdout.String(), stderr.String())
+	return nil
+}
+
+func runBDWithEnv(ctx context.Context, exe, dir, dbPath string, env map[string]string, args ...string) (string, error) {
+	fullArgs := []string{"--db", dbPath}
+	if len(args) > 0 && args[0] != "init" {
+		fullArgs = append(fullArgs, "--no-daemon")
+	}
+	fullArgs = append(fullArgs, args...)
+
+	cmd := exec.CommandContext(ctx, exe, fullArgs...)
+	cmd.Dir = dir
+	cmd.Env = append(os.Environ(),
+		"BEADS_NO_DAEMON=1",
+		"BEADS_DIR="+filepath.Join(dir, ".beads"),
+	)
+	for k, v := range env {
+		cmd.Env = append(cmd.Env, k+"="+v)
+	}
+	out, err := cmd.CombinedOutput()
+	return string(out), err
+}
diff --git a/cmd/bd/doctor_repair_test.go b/cmd/bd/doctor_repair_test.go
new file mode 100644
index 00000000..5e223a44
--- /dev/null
+++ b/cmd/bd/doctor_repair_test.go
@@ -0,0 +1,151 @@
+package main
+
+import (
+	"encoding/json"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"runtime"
+	"strings"
+	"testing"
+)
+
+func buildBDForTest(t *testing.T) string {
+	t.Helper()
+	exeName := "bd"
+	if runtime.GOOS == "windows" {
+		exeName = "bd.exe"
+	}
+
+	binDir := t.TempDir()
+	exe := filepath.Join(binDir, exeName)
+	cmd := exec.Command("go", "build", "-o", exe, ".")
+	out, err := cmd.CombinedOutput()
+	if err != nil {
+		t.Fatalf("go build failed: %v\n%s", err, string(out))
+	}
+	return exe
+}
+
+func mkTmpDirInTmp(t *testing.T, prefix string) string {
+	t.Helper()
+	dir, err := os.MkdirTemp("/tmp", prefix)
+	if err != nil {
+		// Fallback for platforms without /tmp (e.g. Windows).
+		dir, err = os.MkdirTemp("", prefix)
+		if err != nil {
+			t.Fatalf("failed to create temp dir: %v", err)
+		}
+	}
+	t.Cleanup(func() { _ = os.RemoveAll(dir) })
+	return dir
+}
+
+func runBDSideDB(t *testing.T, exe, dir, dbPath string, args ...string) (string, error) {
+	t.Helper()
+	fullArgs := []string{"--db", dbPath}
+	if len(args) > 0 && args[0] != "init" {
+		fullArgs = append(fullArgs, "--no-daemon")
+	}
+	fullArgs = append(fullArgs, args...)
+
+	cmd := exec.Command(exe, fullArgs...)
+	cmd.Dir = dir
+	cmd.Env = append(os.Environ(),
+		"BEADS_NO_DAEMON=1",
+		"BEADS_DIR="+filepath.Join(dir, ".beads"),
+	)
+	out, err := cmd.CombinedOutput()
+	return string(out), err
+}
+
+func TestDoctorRepair_CorruptDatabase_RebuildFromJSONL(t *testing.T) {
+	if testing.Short() {
+		t.Skip("skipping slow repair test in short mode")
+	}
+
+	bdExe := buildBDForTest(t)
+	ws := mkTmpDirInTmp(t, "bd-doctor-repair-*")
+	dbPath := filepath.Join(ws, ".beads", "beads.db")
+	jsonlPath := filepath.Join(ws, ".beads", "issues.jsonl")
+
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "init", "--prefix", "chaos", "--quiet"); err != nil {
+		t.Fatalf("bd init failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "create", "Chaos issue", "-p", "1"); err != nil {
+		t.Fatalf("bd create failed: %v", err)
+	}
+	if _, err := runBDSideDB(t, bdExe, ws, dbPath, "export", "-o", jsonlPath, "--force"); err != nil {
+		t.Fatalf("bd export failed: %v", err)
+	}
+
+	// Corrupt the SQLite file (truncate) and verify doctor reports an integrity error.
+	if err := os.Truncate(dbPath, 128); err != nil {
+		t.Fatalf("truncate db: %v", err)
+	}
+
+	out, err := runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--json")
+	if err == nil {
+		t.Fatalf("expected bd doctor to fail on corrupt db")
+	}
+	jsonStart := strings.Index(out, "{")
+	if jsonStart < 0 {
+		t.Fatalf("doctor output missing JSON: %s", out)
+	}
+	var before doctorResult
+	if err := json.Unmarshal([]byte(out[jsonStart:]), &before); err != nil {
+		t.Fatalf("unmarshal doctor json: %v\n%s", err, out)
+	}
+	var foundIntegrity bool
+	for _, c := range before.Checks {
+		if c.Name == "Database Integrity" {
+			foundIntegrity = true
+			if c.Status != statusError {
+				t.Fatalf("Database Integrity status=%q want %q", c.Status, statusError)
+			}
+		}
+	}
+	if !foundIntegrity {
+		t.Fatalf("Database Integrity check not found")
+	}
+
+	// Attempt auto-repair.
+	out, err = runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--fix", "--yes")
+	if err != nil {
+		t.Fatalf("bd doctor --fix failed: %v\n%s", err, out)
+	}
+
+	// Doctor should now pass.
+	out, err = runBDSideDB(t, bdExe, ws, dbPath, "doctor", "--json")
+	if err != nil {
+		t.Fatalf("bd doctor after fix failed: %v\n%s", err, out)
+	}
+	jsonStart = strings.Index(out, "{")
+	if jsonStart < 0 {
+		t.Fatalf("doctor output missing JSON: %s", out)
+	}
+	var after doctorResult
+	if err := json.Unmarshal([]byte(out[jsonStart:]), &after); err != nil {
+		t.Fatalf("unmarshal doctor json: %v\n%s", err, out)
+	}
+	if !after.OverallOK {
+		t.Fatalf("expected overall_ok=true after repair")
+	}
+
+	// Data should still be present.
+	out, err = runBDSideDB(t, bdExe, ws, dbPath, "list", "--json")
+	if err != nil {
+		t.Fatalf("bd list failed after repair: %v\n%s", err, out)
+	}
+	jsonStart = strings.Index(out, "[")
+	if jsonStart < 0 {
+		t.Fatalf("list output missing JSON array: %s", out)
+	}
+	var issues []map[string]any
+	if err := json.Unmarshal([]byte(out[jsonStart:]), &issues); err != nil {
+		t.Fatalf("unmarshal list json: %v\n%s", err, out)
+	}
+	if len(issues) != 1 {
+		t.Fatalf("expected 1 issue after repair, got %d", len(issues))
+	}
+}
diff --git a/cmd/bd/export.go b/cmd/bd/export.go
index bca6fd17..4308fd87 100644
--- a/cmd/bd/export.go
+++ b/cmd/bd/export.go
@@ -156,7 +156,7 @@ Examples:
 		_ = daemonClient.Close()
 		daemonClient = nil
 	}
-	
+
 	// Note: We used to check database file timestamps here, but WAL files
 	// get created when opening the DB, making timestamp checks unreliable.
 	// Instead, we check issue counts after loading (see below).
@@ -168,7 +168,7 @@ Examples:
 		fmt.Fprintf(os.Stderr, "Error: no database path found\n")
 		os.Exit(1)
 	}
-	store, err = sqlite.New(rootCtx, dbPath)
+	store, err = sqlite.NewWithTimeout(rootCtx, dbPath, lockTimeout)
 	if err != nil {
 		fmt.Fprintf(os.Stderr, "Error: failed to open database: %v\n", err)
 		os.Exit(1)
@@ -302,20 +302,20 @@ Examples:
 	// Safety check: prevent exporting stale database that would lose issues
 	if output != "" && !force {
 		debug.Logf("Debug: checking staleness - output=%s, force=%v\n", output, force)
-		
+
 		// Read existing JSONL to get issue IDs
 		jsonlIDs, err := getIssueIDsFromJSONL(output)
 		if err != nil && !os.IsNotExist(err) {
 			fmt.Fprintf(os.Stderr, "Warning: failed to read existing JSONL for staleness check: %v\n", err)
 		}
-		
+
 		if err == nil && len(jsonlIDs) > 0 {
 			// Build set of DB issue IDs
 			dbIDs := make(map[string]bool)
 			for _, issue := range issues {
 				dbIDs[issue.ID] = true
 			}
-			
+
 			// Check if JSONL has any issues that DB doesn't have
 			var missingIDs []string
 			for id := range jsonlIDs {
@@ -323,17 +323,17 @@ Examples:
 					missingIDs = append(missingIDs, id)
 				}
 			}
-			
-			debug.Logf("Debug: JSONL has %d issues, DB has %d issues, missing %d\n", 
+
+			debug.Logf("Debug: JSONL has %d issues, DB has %d issues, missing %d\n",
 				len(jsonlIDs), len(issues), len(missingIDs))
-			
+
 			if len(missingIDs) > 0 {
 				slices.Sort(missingIDs)
 				fmt.Fprintf(os.Stderr, "Error: refusing to export stale database that would lose issues\n")
 				fmt.Fprintf(os.Stderr, " Database has %d issues\n", len(issues))
 				fmt.Fprintf(os.Stderr, " JSONL has %d issues\n", len(jsonlIDs))
 				fmt.Fprintf(os.Stderr, " Export would lose %d issue(s):\n", len(missingIDs))
-				
+
 				// Show first 10 missing issues
 				showCount := len(missingIDs)
 				if showCount > 10 {
@@ -345,7 +345,7 @@ Examples:
 				if len(missingIDs) > 10 {
 					fmt.Fprintf(os.Stderr, " ... and %d more\n", len(missingIDs)-10)
 				}
-				
+
 				fmt.Fprintf(os.Stderr, "\n")
 				fmt.Fprintf(os.Stderr, "This usually means:\n")
 				fmt.Fprintf(os.Stderr, " 1. You need to run 'bd import -i %s' to sync the latest changes\n", output)
@@ -362,7 +362,7 @@ Examples:
 	// Wisps exist only in SQLite and are shared via .beads/redirect, not JSONL.
 	filtered := make([]*types.Issue, 0, len(issues))
 	for _, issue := range issues {
-		if !issue.Wisp {
+		if !issue.Ephemeral {
 			filtered = append(filtered, issue)
 		}
 	}
@@ -434,8 +434,8 @@ Examples:
 	skippedCount := 0
 	for _, issue := range issues {
 		if err := encoder.Encode(issue); err != nil {
-			fmt.Fprintf(os.Stderr, "Error encoding issue %s: %v\n", issue.ID, err)
-			os.Exit(1)
+			fmt.Fprintf(os.Stderr, "Error encoding issue %s: %v\n", issue.ID, err)
+			os.Exit(1)
 		}
 
 		exportedIDs = append(exportedIDs, issue.ID)
@@ -495,19 +495,19 @@ Examples:
 		}
 	}
 
-	// Verify JSONL file integrity after export
-	actualCount, err := countIssuesInJSONL(finalPath)
-	if err != nil {
-		fmt.Fprintf(os.Stderr, "Error: Export verification failed: %v\n", err)
-		os.Exit(1)
-	}
-	if actualCount != len(exportedIDs) {
-		fmt.Fprintf(os.Stderr, "Error: Export verification failed\n")
-		fmt.Fprintf(os.Stderr, " Expected: %d issues\n", len(exportedIDs))
-		fmt.Fprintf(os.Stderr, " JSONL file: %d lines\n", actualCount)
-		fmt.Fprintf(os.Stderr, " Mismatch indicates export failed to write all issues\n")
-		os.Exit(1)
-	}
+	// Verify JSONL file integrity after export
+	actualCount, err := countIssuesInJSONL(finalPath)
+	if err != nil {
+		fmt.Fprintf(os.Stderr, "Error: Export verification failed: %v\n", err)
+		os.Exit(1)
+	}
+	if actualCount != len(exportedIDs) {
+		fmt.Fprintf(os.Stderr, "Error: Export verification failed\n")
+		fmt.Fprintf(os.Stderr, " Expected: %d issues\n", len(exportedIDs))
+		fmt.Fprintf(os.Stderr, " JSONL file: %d lines\n", actualCount)
+		fmt.Fprintf(os.Stderr, " Mismatch indicates export failed to write all issues\n")
+		os.Exit(1)
+	}
 
 	// Update database mtime to be >= JSONL mtime (fixes #278, #301, #321)
 	// Only do this when exporting to default JSONL path (not arbitrary outputs)
@@ -520,9 +520,9 @@ Examples:
 			fmt.Fprintf(os.Stderr, "Warning: failed to update database mtime: %v\n", err)
 		}
 	}
-	}
+	}
 
-	// Output statistics if JSON format requested
+	// Output statistics if JSON format requested
 	if jsonOutput {
 		stats := map[string]interface{}{
 			"success": true,
diff --git a/cmd/bd/gate.go b/cmd/bd/gate.go
index f79dafa7..f68120a1 100644
--- a/cmd/bd/gate.go
+++ b/cmd/bd/gate.go
@@ -157,7 +157,7 @@ Examples:
 		Status:   types.StatusOpen,
 		Priority: 1, // Gates are typically high priority
 		// Assignee left empty - orchestrator decides who processes gates
-		Wisp:      true, // Gates are wisps (ephemeral)
+		Ephemeral: true, // Gates are wisps (ephemeral)
 		AwaitType: awaitType,
 		AwaitID:   awaitID,
 		Timeout:   timeout,
diff --git a/cmd/bd/git_sync_test.go b/cmd/bd/git_sync_test.go
index ba5a8b73..d68f0803 100644
--- a/cmd/bd/git_sync_test.go
+++ b/cmd/bd/git_sync_test.go
@@ -26,36 +26,36 @@ func TestGitPullSyncIntegration(t *testing.T) {
 	// Create temp directory for test repositories
 	tempDir := t.TempDir()
-	
+
 	// Create "remote" repository
 	remoteDir := filepath.Join(tempDir, "remote")
 	if err := os.MkdirAll(remoteDir, 0750); err != nil {
 		t.Fatalf("Failed to create remote dir: %v", err)
 	}
-	
+
 	// Initialize remote git repo
-	runGitCmd(t, remoteDir, "init", "--bare")
-	
+	runGitCmd(t, remoteDir, "init", "--bare", "-b", "master")
+
 	// Create "clone1" repository
 	clone1Dir := filepath.Join(tempDir, "clone1")
 	runGitCmd(t, tempDir, "clone", remoteDir, clone1Dir)
 	configureGit(t, clone1Dir)
-	
+
 	// Initialize beads in clone1
 	clone1BeadsDir := filepath.Join(clone1Dir, ".beads")
 	if err := os.MkdirAll(clone1BeadsDir, 0750); err != nil {
 		t.Fatalf("Failed to create .beads dir: %v", err)
 	}
-	
+
 	clone1DBPath := filepath.Join(clone1BeadsDir, "test.db")
 	clone1Store := newTestStore(t, clone1DBPath)
 	defer clone1Store.Close()
-	
+
 	ctx := context.Background()
 	if err := clone1Store.SetMetadata(ctx, "issue_prefix", "test"); err != nil {
 		t.Fatalf("Failed to set prefix: %v", err)
 	}
-	
+
 	// Create and close an issue in clone1
 	issue := &types.Issue{
 		Title: "Test sync issue",
@@ -69,80 +69,80 @@ func TestGitPullSyncIntegration(t *testing.T) {
 		t.Fatalf("Failed to create issue: %v", err)
 	}
 	issueID := issue.ID
-	
+
 	// Close the issue
 	if err := clone1Store.CloseIssue(ctx, issueID, "Test completed", "test-user"); err != nil {
 		t.Fatalf("Failed to close issue: %v", err)
 	}
-	
+
 	// Export to JSONL
 	jsonlPath := filepath.Join(clone1BeadsDir, "issues.jsonl")
 	if err := exportIssuesToJSONL(ctx, clone1Store, jsonlPath); err != nil {
 		t.Fatalf("Failed to export: %v", err)
 	}
-	
+
 	// Commit and push from clone1
 	runGitCmd(t, clone1Dir, "add", ".beads")
 	runGitCmd(t, clone1Dir, "commit", "-m", "Add closed issue")
 	runGitCmd(t, clone1Dir, "push", "origin", "master")
-	
+
 	// Create "clone2" repository
 	clone2Dir := filepath.Join(tempDir, "clone2")
 	runGitCmd(t, tempDir, "clone", remoteDir, clone2Dir)
 	configureGit(t, clone2Dir)
-	
+
 	// Initialize empty database in clone2
 	clone2BeadsDir := filepath.Join(clone2Dir, ".beads")
 	clone2DBPath := filepath.Join(clone2BeadsDir, "test.db")
 	clone2Store := newTestStore(t, clone2DBPath)
 	defer clone2Store.Close()
-	
+
 	if err := clone2Store.SetMetadata(ctx, "issue_prefix", "test"); err != nil {
 		t.Fatalf("Failed to set prefix: %v", err)
 	}
-	
+
 	// Import the existing JSONL (simulating initial sync)
 	clone2JSONLPath := filepath.Join(clone2BeadsDir, "issues.jsonl")
 	if err := importJSONLToStore(ctx, clone2Store, clone2DBPath, clone2JSONLPath); err != nil {
 		t.Fatalf("Failed to import: %v", err)
 	}
-	
+
 	// Verify issue exists and is closed
 	verifyIssueClosed(t, clone2Store, issueID)
-	
+
 	// Note: We don't commit in clone2 - it stays clean as a read-only consumer
-	
+
 	// Now test git pull scenario: Clone1 makes a change (update priority)
 	if err := clone1Store.UpdateIssue(ctx, issueID, map[string]interface{}{
 		"priority": 0,
 	}, "test-user"); err != nil {
 		t.Fatalf("Failed to update issue: %v", err)
 	}
-	
+
 	if err := exportIssuesToJSONL(ctx, clone1Store, jsonlPath); err != nil {
 		t.Fatalf("Failed to export after update: %v", err)
 	}
-	
+
 	runGitCmd(t, clone1Dir, "add", ".beads/issues.jsonl")
 	runGitCmd(t, clone1Dir, "commit", "-m", "Update priority")
 	runGitCmd(t, clone1Dir, "push", "origin", "master")
-	
+
 	// Clone2 pulls the change
 	runGitCmd(t, clone2Dir, "pull")
-	
+
 	// Test auto-import in non-daemon mode
 	t.Run("NonDaemonAutoImport", func(t *testing.T) {
 		// Use a temporary local store for this test
 		localStore := newTestStore(t, clone2DBPath)
 		defer localStore.Close()
-		
+
 		// Manually import to simulate auto-import behavior
 		startTime := time.Now()
 		if err := importJSONLToStore(ctx, localStore, clone2DBPath, clone2JSONLPath); err != nil {
 			t.Fatalf("Failed to auto-import: %v", err)
 		}
 		elapsed := time.Since(startTime)
-		
+
 		// Verify priority was updated
 		issue, err := localStore.GetIssue(ctx, issueID)
 		if err != nil {
@@ -151,13 +151,13 @@ func TestGitPullSyncIntegration(t *testing.T) {
 		if issue.Priority != 0 {
 			t.Errorf("Expected priority 0 after auto-import, got %d", issue.Priority)
 		}
-		
+
 		// Verify performance: import should be fast
 		if elapsed > 100*time.Millisecond {
 			t.Logf("Info: import took %v", elapsed)
 		}
 	})
-	
+
 	// Test bd sync --import-only command
 	t.Run("BdSyncCommand", func(t *testing.T) {
 		// Make another change in clone1 (change priority back to 1)
@@ -166,27 +166,27 @@ func TestGitPullSyncIntegration(t *testing.T) {
 		}, "test-user"); err != nil {
 			t.Fatalf("Failed to update issue: %v", err)
 		}
-		
+
 		if err := exportIssuesToJSONL(ctx, clone1Store, jsonlPath); err != nil {
 			t.Fatalf("Failed to export: %v", err)
 		}
-		
+
 		runGitCmd(t, clone1Dir, "add", ".beads/issues.jsonl")
 		runGitCmd(t, clone1Dir, "commit", "-m", "Update priority")
 		runGitCmd(t, clone1Dir, "push", "origin", "master")
-		
+
 		// Clone2 pulls
 		runGitCmd(t, clone2Dir, "pull")
-		
+
 		// Use a fresh store for import
 		syncStore := newTestStore(t, clone2DBPath)
 		defer syncStore.Close()
-		
+
 		// Manually trigger import via in-process equivalent
 		if err := importJSONLToStore(ctx, syncStore, clone2DBPath, clone2JSONLPath); err != nil {
 			t.Fatalf("Failed to import via sync: %v", err)
 		}
-		
+
 		// Verify priority was updated back to 1
 		issue, err := syncStore.GetIssue(ctx, issueID)
 		if err != nil {
@@ -214,7 +214,7 @@ func configureGit(t *testing.T, dir string) {
 	runGitCmd(t, dir, "config", "user.email", "test@example.com")
 	runGitCmd(t, dir, "config", "user.name", "Test User")
 	runGitCmd(t, dir, "config", "pull.rebase", "false")
-	
+
 	// Create .gitignore to prevent test database files from being tracked
 	gitignorePath := filepath.Join(dir, ".gitignore")
 	gitignoreContent := `# Test database files
@@ -233,7 +233,7 @@ func exportIssuesToJSONL(ctx context.Context, store *sqlite.SQLiteStorage, jsonl
 	if err != nil {
 		return err
 	}
-	
+
 	// Populate dependencies
 	allDeps, err := store.GetAllDependencyRecords(ctx)
 	if err != nil {
@@ -244,20 +244,20 @@ func exportIssuesToJSONL(ctx context.Context, store *sqlite.SQLiteStorage, jsonl
 		labels, _ := store.GetLabels(ctx, issue.ID)
 		issue.Labels = labels
 	}
-	
+
 	f, err := os.Create(jsonlPath)
 	if err != nil {
 		return err
 	}
 	defer f.Close()
-	
+
 	encoder := json.NewEncoder(f)
 	for _, issue := range issues {
 		if err := encoder.Encode(issue); err != nil {
 			return err
 		}
 	}
-	
+
 	return nil
 }
@@ -266,7 +266,7 @@ func importJSONLToStore(ctx context.Context, store *sqlite.SQLiteStorage, dbPath
 	if err != nil {
 		return err
 	}
-	
+
 	// Use the autoimport package's AutoImportIfNewer function
 	// For testing, we'll directly parse and import
 	var issues []*types.Issue
@@ -278,7 +278,7 @@ func importJSONLToStore(ctx context.Context, store *sqlite.SQLiteStorage, dbPath
 		}
 		issues = append(issues, &issue)
 	}
-	
+
 	// Import each issue
 	for _, issue := range issues {
 		existing, _ := store.GetIssue(ctx, issue.ID)
@@ -298,12 +298,12 @@ func importJSONLToStore(ctx context.Context, store *sqlite.SQLiteStorage, dbPath
 			}
 		}
 	}
-	
+
 	// Set last_import_time metadata so staleness check works
 	if err := store.SetMetadata(ctx, "last_import_time", time.Now().Format(time.RFC3339)); err != nil {
 		return err
 	}
-	
+
 	return nil
 }
diff --git a/cmd/bd/graph.go b/cmd/bd/graph.go
index ec7c8a1a..d4e585dd 100644
--- a/cmd/bd/graph.go
+++ b/cmd/bd/graph.go
@@ -11,6 +11,7 @@ import (
 	"github.com/spf13/cobra"
"github.com/steveyegge/beads/internal/rpc"
	"github.com/steveyegge/beads/internal/storage"
+	"github.com/steveyegge/beads/internal/storage/sqlite"
	"github.com/steveyegge/beads/internal/types"
	"github.com/steveyegge/beads/internal/ui"
	"github.com/steveyegge/beads/internal/utils"
@@ -80,6 +81,17 @@ Colors indicate status:
 			os.Exit(1)
 		}
 
+		// If daemon is running but doesn't support this command, use direct storage
+		if daemonClient != nil && store == nil {
+			var err error
+			store, err = sqlite.New(ctx, dbPath)
+			if err != nil {
+				fmt.Fprintf(os.Stderr, "Error: failed to open database: %v\n", err)
+				os.Exit(1)
+			}
+			defer func() { _ = store.Close() }()
+		}
+
 		// Load the subgraph
 		subgraph, err := loadGraphSubgraph(ctx, store, issueID)
 		if err != nil {
diff --git a/cmd/bd/hook.go b/cmd/bd/hook.go
index 9e098fa0..99482589 100644
--- a/cmd/bd/hook.go
+++ b/cmd/bd/hook.go
@@ -87,8 +87,8 @@ func runHook(cmd *cobra.Command, args []string) {
 	for _, issue := range issues {
 		phase := "mol"
-		if issue.Wisp {
-			phase = "wisp"
+		if issue.Ephemeral {
+			phase = "ephemeral"
 		}
 		fmt.Printf("  📌 %s (%s) - %s\n", issue.ID, phase, issue.Status)
 		fmt.Printf("     %s\n", issue.Title)
diff --git a/cmd/bd/import_helpers_test.go b/cmd/bd/import_helpers_test.go
new file mode 100644
index 00000000..5d0c23da
--- /dev/null
+++ b/cmd/bd/import_helpers_test.go
@@ -0,0 +1,107 @@
+package main
+
+import (
+	"os"
+	"os/exec"
+	"path/filepath"
+	"strings"
+	"testing"
+	"time"
+
+	"github.com/steveyegge/beads/internal/types"
+)
+
+func TestTouchDatabaseFile_UsesJSONLMtime(t *testing.T) {
+	tmp := t.TempDir()
+	dbPath := filepath.Join(tmp, "beads.db")
+	jsonlPath := filepath.Join(tmp, "issues.jsonl")
+
+	if err := os.WriteFile(dbPath, []byte(""), 0o600); err != nil {
+		t.Fatalf("WriteFile db: %v", err)
+	}
+	if err := os.WriteFile(jsonlPath, []byte("{}\n"), 0o600); err != nil {
+		t.Fatalf("WriteFile jsonl: %v", err)
+	}
+
+	jsonlTime := time.Now().Add(2 * time.Second)
+	if err := os.Chtimes(jsonlPath,
jsonlTime); err != nil { + t.Fatalf("Chtimes jsonl: %v", err) + } + + if err := TouchDatabaseFile(dbPath, jsonlPath); err != nil { + t.Fatalf("TouchDatabaseFile: %v", err) + } + + info, err := os.Stat(dbPath) + if err != nil { + t.Fatalf("Stat db: %v", err) + } + if info.ModTime().Before(jsonlTime) { + t.Fatalf("db mtime %v should be >= jsonl mtime %v", info.ModTime(), jsonlTime) + } +} + +func TestImportDetectPrefixFromIssues(t *testing.T) { + if detectPrefixFromIssues(nil) != "" { + t.Fatalf("expected empty") + } + + issues := []*types.Issue{ + {ID: "test-1"}, + {ID: "test-2"}, + {ID: "other-1"}, + } + if got := detectPrefixFromIssues(issues); got != "test" { + t.Fatalf("got %q, want %q", got, "test") + } +} + +func TestCountLines(t *testing.T) { + tmp := t.TempDir() + p := filepath.Join(tmp, "f.txt") + if err := os.WriteFile(p, []byte("a\n\nb\n"), 0o600); err != nil { + t.Fatalf("WriteFile: %v", err) + } + if got := countLines(p); got != 3 { + t.Fatalf("countLines=%d, want 3", got) + } +} + +func TestCheckUncommittedChanges_Warns(t *testing.T) { + _, cleanup := setupGitRepo(t) + defer cleanup() + + if err := os.WriteFile("issues.jsonl", []byte("{\"id\":\"test-1\"}\n"), 0o600); err != nil { + t.Fatalf("WriteFile: %v", err) + } + _ = execCmd(t, "git", "add", "issues.jsonl") + _ = execCmd(t, "git", "commit", "-m", "add issues") + + // Modify without committing. 
+ if err := os.WriteFile("issues.jsonl", []byte("{\"id\":\"test-1\"}\n{\"id\":\"test-2\"}\n"), 0o600); err != nil { + t.Fatalf("WriteFile: %v", err) + } + + warn := captureStderr(t, func() { + checkUncommittedChanges("issues.jsonl", &ImportResult{}) + }) + if !strings.Contains(warn, "uncommitted changes") { + t.Fatalf("expected warning, got: %q", warn) + } + + noWarn := captureStderr(t, func() { + checkUncommittedChanges("issues.jsonl", &ImportResult{Created: 1}) + }) + if noWarn != "" { + t.Fatalf("expected no warning, got: %q", noWarn) + } +} + +func execCmd(t *testing.T, name string, args ...string) string { + t.Helper() + out, err := exec.Command(name, args...).CombinedOutput() + if err != nil { + t.Fatalf("%s %v failed: %v\n%s", name, args, err, out) + } + return string(out) +} diff --git a/cmd/bd/info.go b/cmd/bd/info.go index ff8c1094..ca5b3923 100644 --- a/cmd/bd/info.go +++ b/cmd/bd/info.go @@ -292,6 +292,7 @@ var versionChanges = []VersionChange{ Version: "0.37.0", Date: "2025-12-26", Changes: []string{ + "BREAKING: Ephemeral API rename (bd-o18s) - Wisp→Ephemeral: JSON 'wisp'→'ephemeral', bd wisp→bd ephemeral", "NEW: bd gate create/show/list/close/wait (bd-udsi) - Async coordination primitives for agent workflows", "NEW: bd gate eval (gt-twjr5.2) - Evaluate timer gates and GitHub gates (gh:run, gh:pr, mail)", "NEW: bd gate approve (gt-twjr5.4) - Human gate approval command", diff --git a/cmd/bd/list_helpers_test.go b/cmd/bd/list_helpers_test.go new file mode 100644 index 00000000..d2725391 --- /dev/null +++ b/cmd/bd/list_helpers_test.go @@ -0,0 +1,116 @@ +package main + +import ( + "strings" + "testing" + "time" + + "github.com/steveyegge/beads/internal/types" +) + +func TestListParseTimeFlag(t *testing.T) { + cases := []string{ + "2025-12-26", + "2025-12-26T12:34:56", + "2025-12-26 12:34:56", + time.DateOnly, + time.RFC3339, + } + + for _, c := range cases { + // Just make sure we accept the expected formats. 
+ var s string + switch c { + case time.DateOnly: + s = "2025-12-26" + case time.RFC3339: + s = "2025-12-26T12:34:56Z" + default: + s = c + } + got, err := parseTimeFlag(s) + if err != nil { + t.Fatalf("parseTimeFlag(%q) error: %v", s, err) + } + if got.Year() != 2025 { + t.Fatalf("parseTimeFlag(%q) year=%d, want 2025", s, got.Year()) + } + } + + if _, err := parseTimeFlag("not-a-date"); err == nil { + t.Fatalf("expected error") + } +} + +func TestListPinIndicator(t *testing.T) { + if pinIndicator(&types.Issue{Pinned: true}) == "" { + t.Fatalf("expected pin indicator") + } + if pinIndicator(&types.Issue{Pinned: false}) != "" { + t.Fatalf("expected empty pin indicator") + } +} + +func TestListFormatPrettyIssue_BadgesAndDefaults(t *testing.T) { + iss := &types.Issue{ID: "bd-1", Title: "Hello", Status: "wat", Priority: 99, IssueType: "bug"} + out := formatPrettyIssue(iss) + if !strings.Contains(out, "bd-1") || !strings.Contains(out, "Hello") { + t.Fatalf("unexpected output: %q", out) + } + if !strings.Contains(out, "[BUG]") { + t.Fatalf("expected BUG badge: %q", out) + } +} + +func TestListBuildIssueTree_ParentChildByDotID(t *testing.T) { + parent := &types.Issue{ID: "bd-1", Title: "Parent", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask} + child := &types.Issue{ID: "bd-1.1", Title: "Child", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask} + orphan := &types.Issue{ID: "bd-2.1", Title: "Orphan", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask} + + roots, children := buildIssueTree([]*types.Issue{child, parent, orphan}) + if len(children["bd-1"]) != 1 || children["bd-1"][0].ID != "bd-1.1" { + t.Fatalf("expected bd-1 to have bd-1.1 child: %+v", children) + } + if len(roots) != 2 { + t.Fatalf("expected 2 roots (parent + orphan), got %d", len(roots)) + } +} + +func TestListSortIssues_ClosedNilLast(t *testing.T) { + t1 := time.Now().Add(-2 * time.Hour) + t2 := time.Now().Add(-1 * time.Hour) + + closedOld := 
&types.Issue{ID: "bd-1", ClosedAt: &t1} + closedNew := &types.Issue{ID: "bd-2", ClosedAt: &t2} + open := &types.Issue{ID: "bd-3", ClosedAt: nil} + + issues := []*types.Issue{open, closedOld, closedNew} + sortIssues(issues, "closed", false) + if issues[0].ID != "bd-2" || issues[1].ID != "bd-1" || issues[2].ID != "bd-3" { + t.Fatalf("unexpected order: %s, %s, %s", issues[0].ID, issues[1].ID, issues[2].ID) + } +} + +func TestListDisplayPrettyList(t *testing.T) { + out := captureStdout(t, func() error { + displayPrettyList(nil, false) + return nil + }) + if !strings.Contains(out, "No issues found") { + t.Fatalf("unexpected output: %q", out) + } + + issues := []*types.Issue{ + {ID: "bd-1", Title: "A", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask}, + {ID: "bd-2", Title: "B", Status: types.StatusInProgress, Priority: 1, IssueType: types.TypeFeature}, + {ID: "bd-1.1", Title: "C", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask}, + } + + out = captureStdout(t, func() error { + displayPrettyList(issues, false) + return nil + }) + if !strings.Contains(out, "bd-1") || !strings.Contains(out, "bd-1.1") || !strings.Contains(out, "Total:") { + t.Fatalf("unexpected output: %q", out) + } +} diff --git a/cmd/bd/main.go b/cmd/bd/main.go index 0b5b52be..c054080c 100644 --- a/cmd/bd/main.go +++ b/cmd/bd/main.go @@ -2,6 +2,7 @@ package main import ( "context" + "encoding/json" "fmt" "os" "os/signal" @@ -312,6 +313,9 @@ var rootCmd = &cobra.Command{ // Set up signal-aware context for graceful cancellation rootCtx, rootCancel = signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM) + // Signal Gas Town daemon about bd activity (best-effort, for exponential backoff) + defer signalGasTownActivity() + // Apply verbosity flags early (before any output) debug.SetVerbose(verboseFlag) debug.SetQuiet(quietFlag) @@ -623,6 +627,13 @@ var rootCmd = &cobra.Command{ FallbackReason: FallbackNone, } + // Doctor should always run in direct mode. 
It's specifically used to diagnose and + // repair daemon/DB issues, so attempting to connect to (or auto-start) a daemon + // can add noise and timeouts. + if cmd.Name() == "doctor" { + noDaemon = true + } + // Try to connect to daemon first (unless --no-daemon flag is set or worktree safety check fails) if noDaemon { daemonStatus.FallbackReason = FallbackFlagNoDaemon @@ -920,8 +931,14 @@ var rootCmd = &cobra.Command{ if store != nil { _ = store.Close() } - if profileFile != nil { pprof.StopCPUProfile(); _ = profileFile.Close() } - if traceFile != nil { trace.Stop(); _ = traceFile.Close() } + if profileFile != nil { + pprof.StopCPUProfile() + _ = profileFile.Close() + } + if traceFile != nil { + trace.Stop() + _ = traceFile.Close() + } // Cancel the signal context to clean up resources if rootCancel != nil { @@ -934,6 +951,80 @@ var rootCmd = &cobra.Command{ // Configurable via config file or BEADS_FLUSH_DEBOUNCE env var (e.g., "500ms", "10s") // Defaults to 5 seconds if not set or invalid +// signalGasTownActivity writes an activity signal for Gas Town daemon. +// This enables exponential backoff based on bd usage detection (gt-ws8ol). +// Best-effort: silent on any failure, never affects bd operation. 
+func signalGasTownActivity() {
+	// Determine town root
+	// Priority: GT_ROOT env > detect from cwd path > skip
+	townRoot := os.Getenv("GT_ROOT")
+	if townRoot == "" {
+		// Try to detect from cwd - if under ~/gt/, use that as town root
+		home, err := os.UserHomeDir()
+		if err != nil {
+			return
+		}
+		gtRoot := filepath.Join(home, "gt")
+		cwd, err := os.Getwd()
+		if err != nil {
+			return
+		}
+		if strings.HasPrefix(cwd, gtRoot+string(os.PathSeparator)) {
+			townRoot = gtRoot
+		}
+	}
+
+	if townRoot == "" {
+		return // Not in Gas Town, skip
+	}
+
+	// Ensure daemon directory exists
+	daemonDir := filepath.Join(townRoot, "daemon")
+	if err := os.MkdirAll(daemonDir, 0755); err != nil {
+		return
+	}
+
+	// Build command line from os.Args
+	cmdLine := strings.Join(os.Args, " ")
+
+	// Determine actor (use package-level var if set, else fall back to env)
+	actorName := actor
+	if actorName == "" {
+		if bdActor := os.Getenv("BD_ACTOR"); bdActor != "" {
+			actorName = bdActor
+		} else if user := os.Getenv("USER"); user != "" {
+			actorName = user
+		} else {
+			actorName = "unknown"
+		}
+	}
+
+	// Build activity signal
+	activity := struct {
+		LastCommand string `json:"last_command"`
+		Actor       string `json:"actor"`
+		Timestamp   string `json:"timestamp"`
+	}{
+		LastCommand: cmdLine,
+		Actor:       actorName,
+		Timestamp:   time.Now().UTC().Format(time.RFC3339),
+	}
+
+	data, err := json.Marshal(activity)
+	if err != nil {
+		return
+	}
+
+	// Write atomically (write to temp, rename)
+	activityPath := filepath.Join(daemonDir, "activity.json")
+	tmpPath := activityPath + ".tmp"
+	// nolint:gosec // G306: 0644 is appropriate for a status file
+	if err := os.WriteFile(tmpPath, data, 0644); err != nil {
+		return
+	}
+	_ = os.Rename(tmpPath, activityPath)
+}
+
 func main() {
 	if err := rootCmd.Execute(); err != nil {
 		os.Exit(1)
diff --git a/cmd/bd/mol.go b/cmd/bd/mol.go
index 5cbc673a..18abf012 100644
--- a/cmd/bd/mol.go
+++ b/cmd/bd/mol.go
@@ -20,8 +20,8 @@ import (
 // Usage:
 //	bd mol catalog	#
List available protos // bd mol show # Show proto/molecule structure -// bd pour --var key=value # Instantiate proto → persistent mol -// bd wisp create --var key=value # Instantiate proto → ephemeral wisp +// bd mol pour --var key=value # Instantiate proto → persistent mol +// bd mol wisp --var key=value # Instantiate proto → ephemeral wisp // MoleculeLabel is the label used to identify molecules (templates) // Molecules use the same label as templates - they ARE templates with workflow semantics @@ -48,14 +48,14 @@ The molecule metaphor: - Distilling extracts a proto from an ad-hoc epic Commands: - catalog List available protos - show Show proto/molecule structure and variables - bond Polymorphic combine: proto+proto, proto+mol, mol+mol - distill Extract proto from ad-hoc epic - -See also: - bd pour # Instantiate as persistent mol (liquid phase) - bd wisp create # Instantiate as ephemeral wisp (vapor phase)`, + catalog List available protos + show Show proto/molecule structure and variables + pour Instantiate proto as persistent mol (liquid phase) + wisp Instantiate proto as ephemeral wisp (vapor phase) + bond Polymorphic combine: proto+proto, proto+mol, mol+mol + squash Condense molecule to digest + burn Discard wisp + distill Extract proto from ad-hoc epic`, } // ============================================================================= @@ -72,7 +72,7 @@ func spawnMolecule(ctx context.Context, s storage.Storage, subgraph *MoleculeSub Vars: vars, Assignee: assignee, Actor: actorName, - Wisp: ephemeral, + Ephemeral: ephemeral, Prefix: prefix, } return cloneSubgraph(ctx, s, subgraph, opts) diff --git a/cmd/bd/mol_bond.go b/cmd/bd/mol_bond.go index 00776f06..e5d2c1bd 100644 --- a/cmd/bd/mol_bond.go +++ b/cmd/bd/mol_bond.go @@ -40,12 +40,12 @@ Bond types: Phase control: By default, spawned protos follow the target's phase: - - Attaching to mol (Wisp=false) → spawns as persistent (Wisp=false) - - Attaching to wisp (Wisp=true) → spawns as ephemeral (Wisp=true) + - 
Attaching to mol (Ephemeral=false) → spawns as persistent (Ephemeral=false) + - Attaching to ephemeral issue (Ephemeral=true) → spawns as ephemeral (Ephemeral=true) Override with: - --pour Force spawn as liquid (persistent, Wisp=false) - --wisp Force spawn as vapor (ephemeral, Wisp=true, excluded from JSONL export) + --pour Force spawn as liquid (persistent, Ephemeral=false) + --ephemeral Force spawn as vapor (ephemeral, Ephemeral=true, excluded from JSONL export) Dynamic bonding (Christmas Ornament pattern): Use --ref to specify a custom child reference with variable substitution. @@ -57,7 +57,7 @@ Dynamic bonding (Christmas Ornament pattern): Use cases: - Found important bug during patrol? Use --pour to persist it - - Need ephemeral diagnostic on persistent feature? Use --wisp + - Need ephemeral diagnostic on persistent feature? Use --ephemeral - Spawning per-worker arms on a patrol? Use --ref for readable IDs Examples: @@ -66,7 +66,7 @@ Examples: bd mol bond mol-feature bd-abc123 # Attach proto to molecule bd mol bond bd-abc123 bd-def456 # Join two molecules bd mol bond mol-critical-bug wisp-patrol --pour # Persist found bug - bd mol bond mol-temp-check bd-feature --wisp # Ephemeral diagnostic + bd mol bond mol-temp-check bd-feature --ephemeral # Ephemeral diagnostic bd mol bond mol-arm bd-patrol --ref arm-{{name}} --var name=ace # Dynamic child ID`, Args: cobra.ExactArgs(2), Run: runMolBond, @@ -102,20 +102,20 @@ func runMolBond(cmd *cobra.Command, args []string) { customTitle, _ := cmd.Flags().GetString("as") dryRun, _ := cmd.Flags().GetBool("dry-run") varFlags, _ := cmd.Flags().GetStringSlice("var") - wisp, _ := cmd.Flags().GetBool("wisp") + ephemeral, _ := cmd.Flags().GetBool("ephemeral") pour, _ := cmd.Flags().GetBool("pour") childRef, _ := cmd.Flags().GetString("ref") // Validate phase flags are not both set - if wisp && pour { - fmt.Fprintf(os.Stderr, "Error: cannot use both --wisp and --pour\n") + if ephemeral && pour { + fmt.Fprintf(os.Stderr, "Error: 
cannot use both --ephemeral and --pour\n") os.Exit(1) } - // All issues go in the main store; wisp vs pour determines the Wisp flag - // --wisp: create with Wisp=true (ephemeral, excluded from JSONL export) - // --pour: create with Wisp=false (persistent, exported to JSONL) - // Default: follow target's phase (wisp if target is wisp, otherwise persistent) + // All issues go in the main store; ephemeral vs pour determines the Wisp flag + // --ephemeral: create with Ephemeral=true (ephemeral, excluded from JSONL export) + // --pour: create with Ephemeral=false (persistent, exported to JSONL) + // Default: follow target's phase (ephemeral if target is ephemeral, otherwise persistent) // Validate bond type if bondType != types.BondTypeSequential && bondType != types.BondTypeParallel && bondType != types.BondTypeConditional { @@ -181,8 +181,8 @@ func runMolBond(cmd *cobra.Command, args []string) { fmt.Printf(" B: %s (%s)\n", issueB.Title, operandType(bIsProto)) } fmt.Printf(" Bond type: %s\n", bondType) - if wisp { - fmt.Printf(" Phase override: vapor (--wisp)\n") + if ephemeral { + fmt.Printf(" Phase override: vapor (--ephemeral)\n") } else if pour { fmt.Printf(" Phase override: liquid (--pour)\n") } @@ -240,16 +240,16 @@ func runMolBond(cmd *cobra.Command, args []string) { case aIsProto && !bIsProto: // Pass subgraph directly if cooked from formula if cookedA { - result, err = bondProtoMolWithSubgraph(ctx, store, subgraphA, issueA, issueB, bondType, vars, childRef, actor, wisp, pour) + result, err = bondProtoMolWithSubgraph(ctx, store, subgraphA, issueA, issueB, bondType, vars, childRef, actor, ephemeral, pour) } else { - result, err = bondProtoMol(ctx, store, issueA, issueB, bondType, vars, childRef, actor, wisp, pour) + result, err = bondProtoMol(ctx, store, issueA, issueB, bondType, vars, childRef, actor, ephemeral, pour) } case !aIsProto && bIsProto: // Pass subgraph directly if cooked from formula if cookedB { - result, err = bondProtoMolWithSubgraph(ctx, store, 
subgraphB, issueB, issueA, bondType, vars, childRef, actor, wisp, pour) + result, err = bondProtoMolWithSubgraph(ctx, store, subgraphB, issueB, issueA, bondType, vars, childRef, actor, ephemeral, pour) } else { - result, err = bondMolProto(ctx, store, issueA, issueB, bondType, vars, childRef, actor, wisp, pour) + result, err = bondMolProto(ctx, store, issueA, issueB, bondType, vars, childRef, actor, ephemeral, pour) } default: result, err = bondMolMol(ctx, store, issueA, issueB, bondType, actor) @@ -273,10 +273,10 @@ func runMolBond(cmd *cobra.Command, args []string) { if result.Spawned > 0 { fmt.Printf(" Spawned: %d issues\n", result.Spawned) } - if wisp { - fmt.Printf(" Phase: vapor (ephemeral, Wisp=true)\n") + if ephemeral { + fmt.Printf(" Phase: vapor (ephemeral, Ephemeral=true)\n") } else if pour { - fmt.Printf(" Phase: liquid (persistent, Wisp=false)\n") + fmt.Printf(" Phase: liquid (persistent, Ephemeral=false)\n") } } @@ -386,12 +386,12 @@ func bondProtoProto(ctx context.Context, s storage.Storage, protoA, protoB *type // bondProtoMol bonds a proto to an existing molecule by spawning the proto. // If childRef is provided, generates custom IDs like "parent.childref" (dynamic bonding). // protoSubgraph can be nil if proto is from DB (will be loaded), or pre-loaded for formulas. 
-func bondProtoMol(ctx context.Context, s storage.Storage, proto, mol *types.Issue, bondType string, vars map[string]string, childRef string, actorName string, wispFlag, pourFlag bool) (*BondResult, error) { - return bondProtoMolWithSubgraph(ctx, s, nil, proto, mol, bondType, vars, childRef, actorName, wispFlag, pourFlag) +func bondProtoMol(ctx context.Context, s storage.Storage, proto, mol *types.Issue, bondType string, vars map[string]string, childRef string, actorName string, ephemeralFlag, pourFlag bool) (*BondResult, error) { + return bondProtoMolWithSubgraph(ctx, s, nil, proto, mol, bondType, vars, childRef, actorName, ephemeralFlag, pourFlag) } // bondProtoMolWithSubgraph is the internal implementation that accepts a pre-loaded subgraph. -func bondProtoMolWithSubgraph(ctx context.Context, s storage.Storage, protoSubgraph *TemplateSubgraph, proto, mol *types.Issue, bondType string, vars map[string]string, childRef string, actorName string, wispFlag, pourFlag bool) (*BondResult, error) { +func bondProtoMolWithSubgraph(ctx context.Context, s storage.Storage, protoSubgraph *TemplateSubgraph, proto, mol *types.Issue, bondType string, vars map[string]string, childRef string, actorName string, ephemeralFlag, pourFlag bool) (*BondResult, error) { // Use provided subgraph or load from DB subgraph := protoSubgraph if subgraph == nil { @@ -414,20 +414,20 @@ func bondProtoMolWithSubgraph(ctx context.Context, s storage.Storage, protoSubgr return nil, fmt.Errorf("missing required variables: %s (use --var)", strings.Join(missingVars, ", ")) } - // Determine wisp flag based on explicit flags or target's phase - // --wisp: force wisp=true, --pour: force wisp=false, neither: follow target - makeWisp := mol.Wisp // Default: follow target's phase - if wispFlag { - makeWisp = true + // Determine ephemeral flag based on explicit flags or target's phase + // --ephemeral: force ephemeral=true, --pour: force ephemeral=false, neither: follow target + makeEphemeral := mol.Ephemeral // 
Default: follow target's phase + if ephemeralFlag { + makeEphemeral = true } else if pourFlag { - makeWisp = false + makeEphemeral = false } // Build CloneOptions for spawning opts := CloneOptions{ Vars: vars, Actor: actorName, - Wisp: makeWisp, + Ephemeral: makeEphemeral, } // Dynamic bonding: use custom IDs if childRef is provided @@ -482,9 +482,9 @@ func bondProtoMolWithSubgraph(ctx context.Context, s storage.Storage, protoSubgr } // bondMolProto bonds a molecule to a proto (symmetric with bondProtoMol) -func bondMolProto(ctx context.Context, s storage.Storage, mol, proto *types.Issue, bondType string, vars map[string]string, childRef string, actorName string, wispFlag, pourFlag bool) (*BondResult, error) { +func bondMolProto(ctx context.Context, s storage.Storage, mol, proto *types.Issue, bondType string, vars map[string]string, childRef string, actorName string, ephemeralFlag, pourFlag bool) (*BondResult, error) { // Same as bondProtoMol but with arguments swapped - return bondProtoMol(ctx, s, proto, mol, bondType, vars, childRef, actorName, wispFlag, pourFlag) + return bondProtoMol(ctx, s, proto, mol, bondType, vars, childRef, actorName, ephemeralFlag, pourFlag) } // bondMolMol bonds two molecules together @@ -630,8 +630,8 @@ func init() { molBondCmd.Flags().String("as", "", "Custom title for compound proto (proto+proto only)") molBondCmd.Flags().Bool("dry-run", false, "Preview what would be created") molBondCmd.Flags().StringSlice("var", []string{}, "Variable substitution for spawned protos (key=value)") - molBondCmd.Flags().Bool("wisp", false, "Force spawn as vapor (ephemeral, Wisp=true)") - molBondCmd.Flags().Bool("pour", false, "Force spawn as liquid (persistent, Wisp=false)") + molBondCmd.Flags().Bool("ephemeral", false, "Force spawn as vapor (ephemeral, Ephemeral=true)") + molBondCmd.Flags().Bool("pour", false, "Force spawn as liquid (persistent, Ephemeral=false)") molBondCmd.Flags().String("ref", "", "Custom child reference with {{var}} substitution 
(e.g., arm-{{polecat_name}})") molCmd.AddCommand(molBondCmd) diff --git a/cmd/bd/mol_burn.go b/cmd/bd/mol_burn.go index 04da097b..6d053389 100644 --- a/cmd/bd/mol_burn.go +++ b/cmd/bd/mol_burn.go @@ -23,8 +23,8 @@ completely removes the wisp with no trace. Use this for: - Test/debug wisps you don't want to preserve The burn operation: - 1. Verifies the molecule has Wisp=true (is ephemeral) - 2. Deletes the molecule and all its wisp children + 1. Verifies the molecule has Ephemeral=true (is ephemeral) + 2. Deletes the molecule and all its ephemeral children 3. No digest is created (use 'bd mol squash' if you want a digest) CAUTION: This is a destructive operation. The wisp's data will be @@ -81,8 +81,8 @@ func runMolBurn(cmd *cobra.Command, args []string) { } // Verify it's a wisp - if !rootIssue.Wisp { - fmt.Fprintf(os.Stderr, "Error: molecule %s is not a wisp (Wisp=false)\n", resolvedID) + if !rootIssue.Ephemeral { + fmt.Fprintf(os.Stderr, "Error: molecule %s is not a wisp (Ephemeral=false)\n", resolvedID) fmt.Fprintf(os.Stderr, "Hint: mol burn only works with wisp molecules\n") fmt.Fprintf(os.Stderr, " Use 'bd delete' to remove non-wisp issues\n") os.Exit(1) @@ -98,7 +98,7 @@ func runMolBurn(cmd *cobra.Command, args []string) { // Collect wisp issue IDs to delete (only delete wisps, not regular children) var wispIDs []string for _, issue := range subgraph.Issues { - if issue.Wisp { + if issue.Ephemeral { wispIDs = append(wispIDs, issue.ID) } } @@ -120,7 +120,7 @@ func runMolBurn(cmd *cobra.Command, args []string) { fmt.Printf("Root: %s\n", subgraph.Root.Title) fmt.Printf("\nWisp issues to delete (%d total):\n", len(wispIDs)) for _, issue := range subgraph.Issues { - if !issue.Wisp { + if !issue.Ephemeral { continue } status := string(issue.Status) @@ -166,7 +166,7 @@ func runMolBurn(cmd *cobra.Command, args []string) { } fmt.Printf("%s Burned wisp: %d issues deleted\n", ui.RenderPass("✓"), result.DeletedCount) - fmt.Printf(" Wisp: %s\n", resolvedID) + 
fmt.Printf(" Ephemeral: %s\n", resolvedID) fmt.Printf(" No digest created.\n") } diff --git a/cmd/bd/mol_catalog.go b/cmd/bd/mol_catalog.go index e5e1bf14..1b5f4577 100644 --- a/cmd/bd/mol_catalog.go +++ b/cmd/bd/mol_catalog.go @@ -23,7 +23,7 @@ var molCatalogCmd = &cobra.Command{ Use: "catalog", Aliases: []string{"list", "ls"}, Short: "List available molecule formulas", - Long: `List formulas available for bd pour / bd wisp create. + Long: `List formulas available for bd mol pour / bd mol wisp. Formulas are ephemeral proto definitions stored as .formula.json files. They are cooked inline when pouring, never stored as database beads. @@ -92,12 +92,12 @@ Search paths (in priority order): fmt.Println("\nOr distill from existing work:") fmt.Println(" bd mol distill my-workflow") fmt.Println("\nTo instantiate from formula:") - fmt.Println(" bd pour --var key=value # persistent mol") - fmt.Println(" bd wisp create --var key=value # ephemeral wisp") + fmt.Println(" bd mol pour --var key=value # persistent mol") + fmt.Println(" bd mol wisp --var key=value # ephemeral wisp") return } - fmt.Printf("%s\n\n", ui.RenderPass("Formulas (for bd pour / bd wisp create):")) + fmt.Printf("%s\n\n", ui.RenderPass("Formulas (for bd mol pour / bd mol wisp):")) // Group by type for display byType := make(map[string][]CatalogEntry) diff --git a/cmd/bd/mol_current.go b/cmd/bd/mol_current.go index 95a6651f..75e2f07b 100644 --- a/cmd/bd/mol_current.go +++ b/cmd/bd/mol_current.go @@ -100,7 +100,7 @@ The output shows all steps with status indicators: } fmt.Println(".") fmt.Println("\nTo start work on a molecule:") - fmt.Println(" bd pour # Instantiate a molecule from template") + fmt.Println(" bd mol pour # Instantiate a molecule from template") fmt.Println(" bd update --status in_progress # Claim a step") return } diff --git a/cmd/bd/mol_distill.go b/cmd/bd/mol_distill.go index f101397e..2acdcbc2 100644 --- a/cmd/bd/mol_distill.go +++ b/cmd/bd/mol_distill.go @@ -225,7 +225,7 @@ func 
runMolDistill(cmd *cobra.Command, args []string) { fmt.Printf(" Variables: %s\n", strings.Join(result.Variables, ", ")) } fmt.Printf("\nTo instantiate:\n") - fmt.Printf(" bd pour %s", result.FormulaName) + fmt.Printf(" bd mol pour %s", result.FormulaName) for _, v := range result.Variables { fmt.Printf(" --var %s=", v) } diff --git a/cmd/bd/mol_squash.go b/cmd/bd/mol_squash.go index 6b922a5d..b4236b24 100644 --- a/cmd/bd/mol_squash.go +++ b/cmd/bd/mol_squash.go @@ -18,17 +18,17 @@ import ( var molSquashCmd = &cobra.Command{ Use: "squash ", Short: "Compress molecule execution into a digest", - Long: `Squash a molecule's wisp children into a single digest issue. + Long: `Squash a molecule's ephemeral children into a single digest issue. -This command collects all wisp child issues of a molecule (Wisp=true), +This command collects all ephemeral child issues of a molecule (Ephemeral=true), generates a summary digest, and promotes the wisps to persistent by clearing their Wisp flag (or optionally deletes them). The squash operation: 1. Loads the molecule and all its children - 2. Filters to only wisps (ephemeral issues with Wisp=true) + 2. Filters to only wisps (ephemeral issues with Ephemeral=true) 3. Generates a digest (summary of work done) - 4. Creates a permanent digest issue (Wisp=false) + 4. Creates a permanent digest issue (Ephemeral=false) 5. 
Clears Wisp flag on children (promotes to persistent) OR deletes them with --delete-children @@ -95,13 +95,13 @@ func runMolSquash(cmd *cobra.Command, args []string) { os.Exit(1) } - // Filter to only wisp children (exclude root) + // Filter to only ephemeral children (exclude root) var wispChildren []*types.Issue for _, issue := range subgraph.Issues { if issue.ID == subgraph.Root.ID { continue // Skip root } - if issue.Wisp { + if issue.Ephemeral { wispChildren = append(wispChildren, issue) } } @@ -113,13 +113,13 @@ func runMolSquash(cmd *cobra.Command, args []string) { SquashedCount: 0, }) } else { - fmt.Printf("No wisp children found for molecule %s\n", moleculeID) + fmt.Printf("No ephemeral children found for molecule %s\n", moleculeID) } return } if dryRun { - fmt.Printf("\nDry run: would squash %d wisp children of %s\n\n", len(wispChildren), moleculeID) + fmt.Printf("\nDry run: would squash %d ephemeral children of %s\n\n", len(wispChildren), moleculeID) fmt.Printf("Root: %s\n", subgraph.Root.Title) fmt.Printf("\nWisp children to squash:\n") for _, issue := range wispChildren { @@ -247,7 +247,7 @@ func squashMolecule(ctx context.Context, s storage.Storage, root *types.Issue, c CloseReason: fmt.Sprintf("Squashed from %d wisps", len(children)), Priority: root.Priority, IssueType: types.TypeTask, - Wisp: false, // Digest is permanent, not a wisp + Ephemeral: false, // Digest is permanent, not a wisp ClosedAt: &now, } @@ -283,7 +283,7 @@ func squashMolecule(ctx context.Context, s storage.Storage, root *types.Issue, c return nil, err } - // Delete wisp children (outside transaction for better error handling) + // Delete ephemeral children (outside transaction for better error handling) if !keepChildren { deleted, err := deleteWispChildren(ctx, s, childIDs) if err != nil { @@ -319,7 +319,7 @@ func deleteWispChildren(ctx context.Context, s storage.Storage, ids []string) (i func init() { molSquashCmd.Flags().Bool("dry-run", false, "Preview what would be squashed") - 
molSquashCmd.Flags().Bool("keep-children", false, "Don't delete wisp children after squash") + molSquashCmd.Flags().Bool("keep-children", false, "Don't delete ephemeral children after squash") molSquashCmd.Flags().String("summary", "", "Agent-provided summary (bypasses auto-generation)") molCmd.AddCommand(molSquashCmd) diff --git a/cmd/bd/mol_test.go b/cmd/bd/mol_test.go index cf57a18b..a8d18728 100644 --- a/cmd/bd/mol_test.go +++ b/cmd/bd/mol_test.go @@ -489,7 +489,7 @@ func TestSquashMolecule(t *testing.T) { Status: types.StatusClosed, Priority: 2, IssueType: types.TypeTask, - Wisp: true, + Ephemeral: true, CloseReason: "Completed design", } child2 := &types.Issue{ @@ -498,7 +498,7 @@ func TestSquashMolecule(t *testing.T) { Status: types.StatusClosed, Priority: 2, IssueType: types.TypeTask, - Wisp: true, + Ephemeral: true, CloseReason: "Code merged", } @@ -547,7 +547,7 @@ func TestSquashMolecule(t *testing.T) { if err != nil { t.Fatalf("Failed to get digest: %v", err) } - if digest.Wisp { + if digest.Ephemeral { t.Error("Digest should NOT be ephemeral") } if digest.Status != types.StatusClosed { @@ -595,7 +595,7 @@ func TestSquashMoleculeWithDelete(t *testing.T) { Status: types.StatusClosed, Priority: 2, IssueType: types.TypeTask, - Wisp: true, + Ephemeral: true, } if err := s.CreateIssue(ctx, child, "test"); err != nil { t.Fatalf("Failed to create child: %v", err) @@ -705,7 +705,7 @@ func TestSquashMoleculeWithAgentSummary(t *testing.T) { Status: types.StatusClosed, Priority: 2, IssueType: types.TypeTask, - Wisp: true, + Ephemeral: true, CloseReason: "Done", } if err := s.CreateIssue(ctx, child, "test"); err != nil { @@ -1304,14 +1304,14 @@ func TestWispFilteringFromExport(t *testing.T) { Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask, - Wisp: false, + Ephemeral: false, } wispIssue := &types.Issue{ Title: "Wisp Issue", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask, - Wisp: true, + Ephemeral: true, } if err := 
s.CreateIssue(ctx, normalIssue, "test"); err != nil { @@ -1333,7 +1333,7 @@ func TestWispFilteringFromExport(t *testing.T) { // Filter wisp issues (simulating export behavior) exportableIssues := make([]*types.Issue, 0) for _, issue := range allIssues { - if !issue.Wisp { + if !issue.Ephemeral { exportableIssues = append(exportableIssues, issue) } } diff --git a/cmd/bd/nodb.go b/cmd/bd/nodb.go index 5bbcfd15..012ac5d4 100644 --- a/cmd/bd/nodb.go +++ b/cmd/bd/nodb.go @@ -72,8 +72,11 @@ func initializeNoDbMode() error { debug.Logf("using prefix '%s'", prefix) - // Set global store + // Set global store and mark as active (fixes bd comment --no-db) + storeMutex.Lock() store = memStore + storeActive = true + storeMutex.Unlock() return nil } @@ -218,7 +221,7 @@ func writeIssuesToJSONL(memStore *memory.MemoryStorage, beadsDir string) error { // Wisps exist only in SQLite and are shared via .beads/redirect, not JSONL. filtered := make([]*types.Issue, 0, len(issues)) for _, issue := range issues { - if !issue.Wisp { + if !issue.Ephemeral { filtered = append(filtered, issue) } } diff --git a/cmd/bd/nodb_test.go b/cmd/bd/nodb_test.go index 56be64e9..0063c58e 100644 --- a/cmd/bd/nodb_test.go +++ b/cmd/bd/nodb_test.go @@ -158,6 +158,90 @@ func TestDetectPrefix(t *testing.T) { }) } +func TestInitializeNoDbMode_SetsStoreActive(t *testing.T) { + // This test verifies the fix for bd comment --no-db not working. + // The bug was that initializeNoDbMode() set `store` but not `storeActive`, + // so ensureStoreActive() would try to find a SQLite database. 
+ + tempDir := t.TempDir() + beadsDir := filepath.Join(tempDir, ".beads") + if err := os.MkdirAll(beadsDir, 0o755); err != nil { + t.Fatalf("Failed to create .beads dir: %v", err) + } + + // Create a minimal JSONL file with one issue + jsonlPath := filepath.Join(beadsDir, "issues.jsonl") + content := `{"id":"bd-1","title":"Test Issue","status":"open"} +` + if err := os.WriteFile(jsonlPath, []byte(content), 0o600); err != nil { + t.Fatalf("Failed to write JSONL: %v", err) + } + + // Save and restore global state + oldStore := store + oldStoreActive := storeActive + oldCwd, _ := os.Getwd() + defer func() { + storeMutex.Lock() + store = oldStore + storeActive = oldStoreActive + storeMutex.Unlock() + _ = os.Chdir(oldCwd) + }() + + // Change to temp dir so initializeNoDbMode finds .beads + if err := os.Chdir(tempDir); err != nil { + t.Fatalf("Failed to chdir: %v", err) + } + + // Reset global state + storeMutex.Lock() + store = nil + storeActive = false + storeMutex.Unlock() + + // Initialize no-db mode + if err := initializeNoDbMode(); err != nil { + t.Fatalf("initializeNoDbMode failed: %v", err) + } + + // Verify storeActive is now true + storeMutex.Lock() + active := storeActive + s := store + storeMutex.Unlock() + + if !active { + t.Error("storeActive should be true after initializeNoDbMode") + } + if s == nil { + t.Fatal("store should not be nil after initializeNoDbMode") + } + + // ensureStoreActive should now return immediately without error + if err := ensureStoreActive(); err != nil { + t.Errorf("ensureStoreActive should succeed after initializeNoDbMode: %v", err) + } + + // Verify comments work (this was the failing case) + ctx := rootCtx + comment, err := s.AddIssueComment(ctx, "bd-1", "testuser", "Test comment") + if err != nil { + t.Fatalf("AddIssueComment failed: %v", err) + } + if comment.Text != "Test comment" { + t.Errorf("Expected 'Test comment', got %s", comment.Text) + } + + comments, err := s.GetIssueComments(ctx, "bd-1") + if err != nil { + 
t.Fatalf("GetIssueComments failed: %v", err) + } + if len(comments) != 1 { + t.Errorf("Expected 1 comment, got %d", len(comments)) + } +} + func TestWriteIssuesToJSONL(t *testing.T) { tempDir := t.TempDir() beadsDir := filepath.Join(tempDir, ".beads") diff --git a/cmd/bd/pour.go b/cmd/bd/pour.go index bcd05968..06b88305 100644 --- a/cmd/bd/pour.go +++ b/cmd/bd/pour.go @@ -32,9 +32,9 @@ Use pour for: - Anything you might need to reference later Examples: - bd pour mol-feature --var name=auth # Create persistent mol from proto - bd pour mol-release --var version=1.0 # Release workflow - bd pour mol-review --var pr=123 # Code review workflow`, + bd mol pour mol-feature --var name=auth # Create persistent mol from proto + bd mol pour mol-release --var version=1.0 # Release workflow + bd mol pour mol-review --var pr=123 # Code review workflow`, Args: cobra.ExactArgs(1), Run: runPour, } @@ -260,5 +260,5 @@ func init() { pourCmd.Flags().StringSlice("attach", []string{}, "Proto to attach after spawning (repeatable)") pourCmd.Flags().String("attach-type", types.BondTypeSequential, "Bond type for attachments: sequential, parallel, or conditional") - rootCmd.AddCommand(pourCmd) + molCmd.AddCommand(pourCmd) } diff --git a/cmd/bd/reinit_test.go b/cmd/bd/reinit_test.go index f536414d..5f2bdb7d 100644 --- a/cmd/bd/reinit_test.go +++ b/cmd/bd/reinit_test.go @@ -91,8 +95,10 @@ func testFreshCloneAutoImport(t *testing.T) { // Test checkGitForIssues detects issues.jsonl t.Chdir(dir) + git.ResetCaches() // Reset git caches after changing directory + + count, path, gitRef := checkGitForIssues() if count != 1 { t.Errorf("Expected 1 issue in git, got %d", count) @@ -171,8 +175,10 @@ func testDatabaseRemovalScenario(t *testing.T) { // Change to test directory t.Chdir(dir) + git.ResetCaches() // Reset git caches after changing directory + + // Test checkGitForIssues finds issues.jsonl (canonical name) count, path, gitRef := checkGitForIssues() if
count != 2 { @@ -250,8 +258,10 @@ func testLegacyFilenameSupport(t *testing.T) { // Change to test directory t.Chdir(dir) + git.ResetCaches() // Reset git caches after changing directory + + // Test checkGitForIssues finds issues.jsonl count, path, gitRef := checkGitForIssues() if count != 1 { @@ -327,8 +339,10 @@ func testPrecedenceTest(t *testing.T) { // Change to test directory t.Chdir(dir) + git.ResetCaches() // Reset git caches after changing directory + + // Test checkGitForIssues prefers issues.jsonl count, path, _ := checkGitForIssues() if count != 2 { @@ -374,8 +390,10 @@ func testInitSafetyCheck(t *testing.T) { // Change to test directory t.Chdir(dir) + git.ResetCaches() // Reset git caches after changing directory + + // Create empty database (simulating failed import) dbPath := filepath.Join(beadsDir, "test.db") store, err := sqlite.New(context.Background(), dbPath) diff --git a/cmd/bd/show.go b/cmd/bd/show.go index 300d2e4a..d80cb9a5 100644 --- a/cmd/bd/show.go +++ b/cmd/bd/show.go @@ -111,6 +111,7 @@ var showCmd = &cobra.Command{ Labels []string `json:"labels,omitempty"` Dependencies []*types.IssueWithDependencyMetadata `json:"dependencies,omitempty"` Dependents []*types.IssueWithDependencyMetadata `json:"dependents,omitempty"` + Comments []*types.Comment `json:"comments,omitempty"` } details := &IssueDetails{Issue: issue} details.Labels, _ = issueStore.GetLabels(ctx, issue.ID) @@ -118,6 +119,7 @@ var showCmd = &cobra.Command{ details.Dependencies, _ = sqliteStore.GetDependenciesWithMetadata(ctx, issue.ID) details.Dependents, _ = sqliteStore.GetDependentsWithMetadata(ctx, issue.ID) } + details.Comments, _ = issueStore.GetIssueComments(ctx, issue.ID) allDetails = append(allDetails, details) } else { if displayIdx > 0 { @@ -151,6 +153,7 @@ var showCmd = &cobra.Command{ Labels []string `json:"labels,omitempty"` Dependencies []*types.IssueWithDependencyMetadata `json:"dependencies,omitempty"`
Dependents []*types.IssueWithDependencyMetadata `json:"dependents,omitempty"` + Comments []*types.Comment `json:"comments,omitempty"` } var details IssueDetails if err := json.Unmarshal(resp.Data, &details); err == nil { @@ -173,6 +176,7 @@ var showCmd = &cobra.Command{ Labels []string `json:"labels,omitempty"` Dependencies []*types.IssueWithDependencyMetadata `json:"dependencies,omitempty"` Dependents []*types.IssueWithDependencyMetadata `json:"dependents,omitempty"` + Comments []*types.Comment `json:"comments,omitempty"` } var details IssueDetails if err := json.Unmarshal(resp.Data, &details); err != nil { @@ -303,6 +307,17 @@ var showCmd = &cobra.Command{ } } + if len(details.Comments) > 0 { + fmt.Printf("\nComments (%d):\n", len(details.Comments)) + for _, comment := range details.Comments { + fmt.Printf(" [%s] %s\n", comment.Author, comment.CreatedAt.Format("2006-01-02 15:04")) + commentLines := strings.Split(comment.Text, "\n") + for _, line := range commentLines { + fmt.Printf(" %s\n", line) + } + } + } + fmt.Println() } } @@ -748,8 +763,8 @@ var updateCmd = &cobra.Command{ fmt.Fprintf(os.Stderr, "Error getting %s: %v\n", id, err) continue } - if issue != nil && issue.IsTemplate { - fmt.Fprintf(os.Stderr, "Error: cannot update template %s: templates are read-only; use 'bd molecule instantiate' to create a work item\n", id) + if err := validateIssueUpdatable(id, issue); err != nil { + fmt.Fprintf(os.Stderr, "%s\n", err) continue } @@ -768,48 +783,21 @@ var updateCmd = &cobra.Command{ } // Handle label operations - // Set labels (replaces all existing labels) - if setLabels, ok := updates["set_labels"].([]string); ok && len(setLabels) > 0 { - // Get current labels - currentLabels, err := store.GetLabels(ctx, id) - if err != nil { - fmt.Fprintf(os.Stderr, "Error getting labels for %s: %v\n", id, err) + var setLabels, addLabels, removeLabels []string + if v, ok := updates["set_labels"].([]string); ok { + setLabels = v + } + if v, ok := 
updates["add_labels"].([]string); ok { + addLabels = v + } + if v, ok := updates["remove_labels"].([]string); ok { + removeLabels = v + } + if len(setLabels) > 0 || len(addLabels) > 0 || len(removeLabels) > 0 { + if err := applyLabelUpdates(ctx, store, id, actor, setLabels, addLabels, removeLabels); err != nil { + fmt.Fprintf(os.Stderr, "Error updating labels for %s: %v\n", id, err) continue } - // Remove all current labels - for _, label := range currentLabels { - if err := store.RemoveLabel(ctx, id, label, actor); err != nil { - fmt.Fprintf(os.Stderr, "Error removing label %s from %s: %v\n", label, id, err) - continue - } - } - // Add new labels - for _, label := range setLabels { - if err := store.AddLabel(ctx, id, label, actor); err != nil { - fmt.Fprintf(os.Stderr, "Error setting label %s on %s: %v\n", label, id, err) - continue - } - } - } - - // Add labels - if addLabels, ok := updates["add_labels"].([]string); ok { - for _, label := range addLabels { - if err := store.AddLabel(ctx, id, label, actor); err != nil { - fmt.Fprintf(os.Stderr, "Error adding label %s to %s: %v\n", label, id, err) - continue - } - } - } - - // Remove labels - if removeLabels, ok := updates["remove_labels"].([]string); ok { - for _, label := range removeLabels { - if err := store.RemoveLabel(ctx, id, label, actor); err != nil { - fmt.Fprintf(os.Stderr, "Error removing label %s from %s: %v\n", label, id, err) - continue - } - } } // Run update hook (bd-kwro.8) @@ -1031,6 +1019,10 @@ var closeCmd = &cobra.Command{ Run: func(cmd *cobra.Command, args []string) { CheckReadonly("close") reason, _ := cmd.Flags().GetString("reason") + if reason == "" { + // Check --resolution alias (Jira CLI convention) + reason, _ = cmd.Flags().GetString("resolution") + } if reason == "" { reason = "Closed" } @@ -1084,14 +1076,8 @@ var closeCmd = &cobra.Command{ if showErr == nil { var issue types.Issue if json.Unmarshal(showResp.Data, &issue) == nil { - // Check if issue is a template (beads-1ra): 
templates are read-only - if issue.IsTemplate { - fmt.Fprintf(os.Stderr, "Error: cannot close template %s: templates are read-only\n", id) - continue - } - // Check if issue is pinned (bd-6v2) - if !force && issue.Status == types.StatusPinned { - fmt.Fprintf(os.Stderr, "Error: cannot close pinned issue %s (use --force to override)\n", id) + if err := validateIssueClosable(id, &issue, force); err != nil { + fmt.Fprintf(os.Stderr, "%s\n", err) continue } } @@ -1169,20 +1155,11 @@ var closeCmd = &cobra.Command{ // Get issue for checks issue, _ := store.GetIssue(ctx, id) - // Check if issue is a template (beads-1ra): templates are read-only - if issue != nil && issue.IsTemplate { - fmt.Fprintf(os.Stderr, "Error: cannot close template %s: templates are read-only\n", id) + if err := validateIssueClosable(id, issue, force); err != nil { + fmt.Fprintf(os.Stderr, "%s\n", err) continue } - // Check if issue is pinned (bd-6v2) - if !force { - if issue != nil && issue.Status == types.StatusPinned { - fmt.Fprintf(os.Stderr, "Error: cannot close pinned issue %s (use --force to override)\n", id) - continue - } - } - if err := store.CloseIssue(ctx, id, reason, actor); err != nil { fmt.Fprintf(os.Stderr, "Error closing %s: %v\n", id, err) continue @@ -1427,15 +1404,13 @@ func findRepliesTo(ctx context.Context, issueID string, daemonClient *rpc.Client return "" } // Direct mode - query storage - if sqliteStore, ok := store.(*sqlite.SQLiteStorage); ok { - deps, err := sqliteStore.GetDependenciesWithMetadata(ctx, issueID) - if err != nil { - return "" - } - for _, dep := range deps { - if dep.DependencyType == types.DepRepliesTo { - return dep.ID - } + deps, err := store.GetDependencyRecords(ctx, issueID) + if err != nil { + return "" + } + for _, dep := range deps { + if dep.Type == types.DepRepliesTo { + return dep.DependsOnID } } return "" @@ -1484,7 +1459,25 @@ func findReplies(ctx context.Context, issueID string, daemonClient *rpc.Client, } return replies } - return nil + + 
allDeps, err := store.GetAllDependencyRecords(ctx) + if err != nil { + return nil + } + + var replies []*types.Issue + for childID, deps := range allDeps { + for _, dep := range deps { + if dep.Type == types.DepRepliesTo && dep.DependsOnID == issueID { + issue, _ := store.GetIssue(ctx, childID) + if issue != nil { + replies = append(replies, issue) + } + } + } + } + + return replies } func init() { @@ -1513,6 +1506,8 @@ func init() { rootCmd.AddCommand(editCmd) closeCmd.Flags().StringP("reason", "r", "", "Reason for closing") + closeCmd.Flags().String("resolution", "", "Alias for --reason (Jira CLI convention)") + _ = closeCmd.Flags().MarkHidden("resolution") // Hidden alias for agent/CLI ergonomics closeCmd.Flags().BoolP("force", "f", false, "Force close pinned issues") closeCmd.Flags().Bool("continue", false, "Auto-advance to next step in molecule") closeCmd.Flags().Bool("no-auto", false, "With --continue, show next step but don't claim it") diff --git a/cmd/bd/show_unit_helpers.go b/cmd/bd/show_unit_helpers.go new file mode 100644 index 00000000..23c80a1e --- /dev/null +++ b/cmd/bd/show_unit_helpers.go @@ -0,0 +1,68 @@ +package main + +import ( + "context" + "fmt" + + "github.com/steveyegge/beads/internal/storage" + "github.com/steveyegge/beads/internal/types" +) + +func validateIssueUpdatable(id string, issue *types.Issue) error { + if issue == nil { + return nil + } + if issue.IsTemplate { + return fmt.Errorf("Error: cannot update template %s: templates are read-only; use 'bd molecule instantiate' to create a work item", id) + } + return nil +} + +func validateIssueClosable(id string, issue *types.Issue, force bool) error { + if issue == nil { + return nil + } + if issue.IsTemplate { + return fmt.Errorf("Error: cannot close template %s: templates are read-only", id) + } + if !force && issue.Status == types.StatusPinned { + return fmt.Errorf("Error: cannot close pinned issue %s (use --force to override)", id) + } + return nil +} + +func applyLabelUpdates(ctx 
context.Context, st storage.Storage, issueID, actor string, setLabels, addLabels, removeLabels []string) error { + // Set labels (replaces all existing labels) + if len(setLabels) > 0 { + currentLabels, err := st.GetLabels(ctx, issueID) + if err != nil { + return err + } + for _, label := range currentLabels { + if err := st.RemoveLabel(ctx, issueID, label, actor); err != nil { + return err + } + } + for _, label := range setLabels { + if err := st.AddLabel(ctx, issueID, label, actor); err != nil { + return err + } + } + } + + // Add labels + for _, label := range addLabels { + if err := st.AddLabel(ctx, issueID, label, actor); err != nil { + return err + } + } + + // Remove labels + for _, label := range removeLabels { + if err := st.RemoveLabel(ctx, issueID, label, actor); err != nil { + return err + } + } + + return nil +} diff --git a/cmd/bd/show_unit_helpers_test.go b/cmd/bd/show_unit_helpers_test.go new file mode 100644 index 00000000..e1bef58d --- /dev/null +++ b/cmd/bd/show_unit_helpers_test.go @@ -0,0 +1,139 @@ +package main + +import ( + "context" + "testing" + + "github.com/steveyegge/beads/internal/storage/memory" + "github.com/steveyegge/beads/internal/types" +) + +func TestValidateIssueUpdatable(t *testing.T) { + if err := validateIssueUpdatable("x", nil); err != nil { + t.Fatalf("expected nil error, got %v", err) + } + if err := validateIssueUpdatable("x", &types.Issue{IsTemplate: false}); err != nil { + t.Fatalf("expected nil error, got %v", err) + } + if err := validateIssueUpdatable("bd-1", &types.Issue{IsTemplate: true}); err == nil { + t.Fatalf("expected error") + } +} + +func TestValidateIssueClosable(t *testing.T) { + if err := validateIssueClosable("x", nil, false); err != nil { + t.Fatalf("expected nil error, got %v", err) + } + if err := validateIssueClosable("bd-1", &types.Issue{IsTemplate: true}, false); err == nil { + t.Fatalf("expected template close error") + } + if err := validateIssueClosable("bd-2", &types.Issue{Status: 
types.StatusPinned}, false); err == nil { + t.Fatalf("expected pinned close error") + } + if err := validateIssueClosable("bd-2", &types.Issue{Status: types.StatusPinned}, true); err != nil { + t.Fatalf("expected pinned close to succeed with force, got %v", err) + } +} + +func TestApplyLabelUpdates_SetAddRemove(t *testing.T) { + ctx := context.Background() + st := memory.New("") + if err := st.SetConfig(ctx, "issue_prefix", "test"); err != nil { + t.Fatalf("SetConfig: %v", err) + } + + issue := &types.Issue{Title: "x", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask} + if err := st.CreateIssue(ctx, issue, "tester"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + + _ = st.AddLabel(ctx, issue.ID, "old1", "tester") + _ = st.AddLabel(ctx, issue.ID, "old2", "tester") + + if err := applyLabelUpdates(ctx, st, issue.ID, "tester", []string{"a", "b"}, []string{"b", "c"}, []string{"a"}); err != nil { + t.Fatalf("applyLabelUpdates: %v", err) + } + labels, _ := st.GetLabels(ctx, issue.ID) + if len(labels) != 2 { + t.Fatalf("expected 2 labels, got %v", labels) + } + // Order is not guaranteed. 
+ foundB := false + foundC := false + for _, l := range labels { + if l == "b" { + foundB = true + } + if l == "c" { + foundC = true + } + if l == "old1" || l == "old2" || l == "a" { + t.Fatalf("unexpected label %q in %v", l, labels) + } + } + if !foundB || !foundC { + t.Fatalf("expected labels b and c, got %v", labels) + } +} + +func TestApplyLabelUpdates_AddRemoveOnly(t *testing.T) { + ctx := context.Background() + st := memory.New("") + issue := &types.Issue{Title: "x", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask} + if err := st.CreateIssue(ctx, issue, "tester"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + + _ = st.AddLabel(ctx, issue.ID, "a", "tester") + if err := applyLabelUpdates(ctx, st, issue.ID, "tester", nil, []string{"b"}, []string{"a"}); err != nil { + t.Fatalf("applyLabelUpdates: %v", err) + } + labels, _ := st.GetLabels(ctx, issue.ID) + if len(labels) != 1 || labels[0] != "b" { + t.Fatalf("expected [b], got %v", labels) + } +} + +func TestFindRepliesToAndReplies_WorksWithMemoryStorage(t *testing.T) { + ctx := context.Background() + st := memory.New("") + if err := st.SetConfig(ctx, "issue_prefix", "test"); err != nil { + t.Fatalf("SetConfig: %v", err) + } + + root := &types.Issue{Title: "root", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeMessage, Sender: "a", Assignee: "b"} + reply1 := &types.Issue{Title: "r1", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeMessage, Sender: "b", Assignee: "a"} + reply2 := &types.Issue{Title: "r2", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeMessage, Sender: "a", Assignee: "b"} + if err := st.CreateIssue(ctx, root, "tester"); err != nil { + t.Fatalf("CreateIssue(root): %v", err) + } + if err := st.CreateIssue(ctx, reply1, "tester"); err != nil { + t.Fatalf("CreateIssue(reply1): %v", err) + } + if err := st.CreateIssue(ctx, reply2, "tester"); err != nil { + t.Fatalf("CreateIssue(reply2): %v", err) + } + + if err := st.AddDependency(ctx, 
&types.Dependency{IssueID: reply1.ID, DependsOnID: root.ID, Type: types.DepRepliesTo}, "tester"); err != nil { + t.Fatalf("AddDependency(reply1->root): %v", err) + } + if err := st.AddDependency(ctx, &types.Dependency{IssueID: reply2.ID, DependsOnID: reply1.ID, Type: types.DepRepliesTo}, "tester"); err != nil { + t.Fatalf("AddDependency(reply2->reply1): %v", err) + } + + if got := findRepliesTo(ctx, root.ID, nil, st); got != "" { + t.Fatalf("expected root replies-to to be empty, got %q", got) + } + if got := findRepliesTo(ctx, reply2.ID, nil, st); got != reply1.ID { + t.Fatalf("expected reply2 parent %q, got %q", reply1.ID, got) + } + + rootReplies := findReplies(ctx, root.ID, nil, st) + if len(rootReplies) != 1 || rootReplies[0].ID != reply1.ID { + t.Fatalf("expected root replies [%s], got %+v", reply1.ID, rootReplies) + } + r1Replies := findReplies(ctx, reply1.ID, nil, st) + if len(r1Replies) != 1 || r1Replies[0].ID != reply2.ID { + t.Fatalf("expected reply1 replies [%s], got %+v", reply2.ID, r1Replies) + } +} diff --git a/cmd/bd/sync_export.go b/cmd/bd/sync_export.go index 606ebe6d..ea73b203 100644 --- a/cmd/bd/sync_export.go +++ b/cmd/bd/sync_export.go @@ -65,7 +65,7 @@ func exportToJSONL(ctx context.Context, jsonlPath string) error { // This prevents "zombie" issues that resurrect after mol squash deletes them. 
filteredIssues := make([]*types.Issue, 0, len(issues)) for _, issue := range issues { - if issue.Wisp { + if issue.Ephemeral { continue } filteredIssues = append(filteredIssues, issue) diff --git a/cmd/bd/sync_helpers_more_test.go b/cmd/bd/sync_helpers_more_test.go new file mode 100644 index 00000000..ae8479e4 --- /dev/null +++ b/cmd/bd/sync_helpers_more_test.go @@ -0,0 +1,71 @@ +package main + +import ( + "context" + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + + "github.com/steveyegge/beads/internal/config" +) + +func TestBuildGitCommitArgs_ConfigOptions(t *testing.T) { + if err := config.Initialize(); err != nil { + t.Fatalf("config.Initialize: %v", err) + } + config.Set("git.author", "Test User ") + config.Set("git.no-gpg-sign", true) + + args := buildGitCommitArgs("/repo", "hello", "--", ".beads") + joined := strings.Join(args, " ") + if !strings.Contains(joined, "--author") { + t.Fatalf("expected --author in args: %v", args) + } + if !strings.Contains(joined, "--no-gpg-sign") { + t.Fatalf("expected --no-gpg-sign in args: %v", args) + } + if !strings.Contains(joined, "-m hello") { + t.Fatalf("expected message in args: %v", args) + } +} + +func TestGitCommitBeadsDir_PathspecDoesNotCommitOtherStagedFiles(t *testing.T) { + _, cleanup := setupGitRepo(t) + defer cleanup() + + if err := config.Initialize(); err != nil { + t.Fatalf("config.Initialize: %v", err) + } + + if err := os.MkdirAll(".beads", 0o755); err != nil { + t.Fatalf("MkdirAll: %v", err) + } + + // Stage an unrelated file before running gitCommitBeadsDir. + if err := os.WriteFile("other.txt", []byte("x\n"), 0o600); err != nil { + t.Fatalf("WriteFile other: %v", err) + } + _ = exec.Command("git", "add", "other.txt").Run() + + // Create a beads sync file to commit. 
+ issuesPath := filepath.Join(".beads", "issues.jsonl") + if err := os.WriteFile(issuesPath, []byte("{\"id\":\"test-1\"}\n"), 0o600); err != nil { + t.Fatalf("WriteFile issues: %v", err) + } + + ctx := context.Background() + if err := gitCommitBeadsDir(ctx, "beads commit"); err != nil { + t.Fatalf("gitCommitBeadsDir: %v", err) + } + + // other.txt should still be staged after the beads-only commit. + out, err := exec.Command("git", "diff", "--cached", "--name-only").CombinedOutput() + if err != nil { + t.Fatalf("git diff --cached: %v\n%s", err, out) + } + if strings.TrimSpace(string(out)) != "other.txt" { + t.Fatalf("expected other.txt still staged, got: %q", out) + } +} diff --git a/cmd/bd/template.go b/cmd/bd/template.go index 24e7c2f9..8d4e6e04 100644 --- a/cmd/bd/template.go +++ b/cmd/bd/template.go @@ -42,10 +42,10 @@ type InstantiateResult struct { // CloneOptions controls how the subgraph is cloned during spawn/bond type CloneOptions struct { - Vars map[string]string // Variable substitutions for {{key}} placeholders - Assignee string // Assign the root epic to this agent/user - Actor string // Actor performing the operation - Wisp bool // If true, spawned issues are marked for bulk deletion + Vars map[string]string // Variable substitutions for {{key}} placeholders + Assignee string // Assign the root epic to this agent/user + Actor string // Actor performing the operation + Ephemeral bool // If true, spawned issues are marked for bulk deletion Prefix string // Override prefix for ID generation (bd-hobo: distinct prefixes) // Dynamic bonding fields (for Christmas Ornament pattern) @@ -327,7 +327,7 @@ Example: Vars: vars, Assignee: assignee, Actor: actor, - Wisp: false, + Ephemeral: false, } var result *InstantiateResult if daemonClient != nil { @@ -713,7 +713,7 @@ func cloneSubgraphViaDaemon(client *rpc.Client, subgraph *TemplateSubgraph, opts AcceptanceCriteria: substituteVariables(oldIssue.AcceptanceCriteria, opts.Vars), Assignee: issueAssignee, 
EstimatedMinutes: oldIssue.EstimatedMinutes, - Wisp: opts.Wisp, + Ephemeral: opts.Ephemeral, IDPrefix: opts.Prefix, // bd-hobo: distinct prefixes for mols/wisps } @@ -960,7 +960,7 @@ func cloneSubgraph(ctx context.Context, s storage.Storage, subgraph *TemplateSub IssueType: oldIssue.IssueType, Assignee: issueAssignee, EstimatedMinutes: oldIssue.EstimatedMinutes, - Wisp: opts.Wisp, // bd-2vh3: mark for cleanup when closed + Ephemeral: opts.Ephemeral, // bd-2vh3: mark for cleanup when closed IDPrefix: opts.Prefix, // bd-hobo: distinct prefixes for mols/wisps CreatedAt: time.Now(), UpdatedAt: time.Now(), diff --git a/cmd/bd/test_repo_beads_guard_test.go b/cmd/bd/test_repo_beads_guard_test.go new file mode 100644 index 00000000..9f7adddf --- /dev/null +++ b/cmd/bd/test_repo_beads_guard_test.go @@ -0,0 +1,118 @@ +package main + +import ( + "fmt" + "os" + "path/filepath" + "testing" + "time" +) + +// Guardrail: ensure the cmd/bd test suite does not touch the real repo .beads state. +// Disable with BEADS_TEST_GUARD_DISABLE=1 (useful when running tests while actively using beads). 
+func TestMain(m *testing.M) { + if os.Getenv("BEADS_TEST_GUARD_DISABLE") != "" { + os.Exit(m.Run()) + } + + repoRoot := findRepoRoot() + if repoRoot == "" { + os.Exit(m.Run()) + } + + repoBeadsDir := filepath.Join(repoRoot, ".beads") + if _, err := os.Stat(repoBeadsDir); err != nil { + os.Exit(m.Run()) + } + + watch := []string{ + "beads.db", + "beads.db-wal", + "beads.db-shm", + "beads.db-journal", + "issues.jsonl", + "beads.jsonl", + "metadata.json", + "interactions.jsonl", + "deletions.jsonl", + "molecules.jsonl", + "daemon.lock", + "daemon.pid", + "bd.sock", + } + + before := snapshotFiles(repoBeadsDir, watch) + code := m.Run() + after := snapshotFiles(repoBeadsDir, watch) + + if diff := diffSnapshots(before, after); diff != "" { + fmt.Fprintf(os.Stderr, "ERROR: test suite modified repo .beads state:\n%s\n", diff) + if code == 0 { + code = 1 + } + } + + os.Exit(code) +} + +type fileSnap struct { + exists bool + size int64 + modUnix int64 +} + +func snapshotFiles(dir string, names []string) map[string]fileSnap { + out := make(map[string]fileSnap, len(names)) + for _, name := range names { + p := filepath.Join(dir, name) + info, err := os.Stat(p) + if err != nil { + out[name] = fileSnap{exists: false} + continue + } + out[name] = fileSnap{exists: true, size: info.Size(), modUnix: info.ModTime().UnixNano()} + } + return out +} + +func diffSnapshots(before, after map[string]fileSnap) string { + var out string + for name, b := range before { + a := after[name] + if b.exists != a.exists { + out += fmt.Sprintf("- %s: exists %v → %v\n", name, b.exists, a.exists) + continue + } + if !b.exists { + continue + } + if b.size != a.size || b.modUnix != a.modUnix { + out += fmt.Sprintf("- %s: size %d → %d, mtime %s → %s\n", + name, + b.size, + a.size, + time.Unix(0, b.modUnix).UTC().Format(time.RFC3339Nano), + time.Unix(0, a.modUnix).UTC().Format(time.RFC3339Nano), + ) + } + } + return out +} + +func findRepoRoot() string { + wd, err := os.Getwd() + if err != nil { + return 
"" + } + for i := 0; i < 25; i++ { + if _, err := os.Stat(filepath.Join(wd, "go.mod")); err == nil { + return wd + } + parent := filepath.Dir(wd) + if parent == wd { + break + } + wd = parent + } + return "" +} diff --git a/cmd/bd/test_wait_helper.go b/cmd/bd/test_wait_helper.go index 94efa88a..ee1870e9 100644 --- a/cmd/bd/test_wait_helper.go +++ b/cmd/bd/test_wait_helper.go @@ -47,6 +47,7 @@ func setupGitRepo(t *testing.T) (repoPath string, cleanup func()) { _ = os.Chdir(originalWd) t.Fatalf("failed to init git repo: %v", err) } + git.ResetCaches() // Configure git _ = exec.Command("git", "config", "user.email", "test@test.com").Run() @@ -94,6 +95,7 @@ func setupGitRepoWithBranch(t *testing.T, branch string) (repoPath string, clean _ = os.Chdir(originalWd) t.Fatalf("failed to init git repo: %v", err) } + git.ResetCaches() // Configure git _ = exec.Command("git", "config", "user.email", "test@test.com").Run() diff --git a/cmd/bd/thread_test.go b/cmd/bd/thread_test.go index a529796a..9b2881fc 100644 --- a/cmd/bd/thread_test.go +++ b/cmd/bd/thread_test.go @@ -27,7 +27,7 @@ func TestThreadTraversal(t *testing.T) { IssueType: types.TypeMessage, Assignee: "worker", Sender: "manager", - Wisp: true, + Ephemeral: true, CreatedAt: now, UpdatedAt: now, } @@ -43,7 +43,7 @@ func TestThreadTraversal(t *testing.T) { IssueType: types.TypeMessage, Assignee: "manager", Sender: "worker", - Wisp: true, + Ephemeral: true, CreatedAt: now.Add(time.Minute), UpdatedAt: now.Add(time.Minute), } @@ -59,7 +59,7 @@ func TestThreadTraversal(t *testing.T) { IssueType: types.TypeMessage, Assignee: "worker", Sender: "manager", - Wisp: true, + Ephemeral: true, CreatedAt: now.Add(2 * time.Minute), UpdatedAt: now.Add(2 * time.Minute), } @@ -190,7 +190,7 @@ func TestThreadTraversalEmptyThread(t *testing.T) { IssueType: types.TypeMessage, Assignee: "user", Sender: "sender", - Wisp: true, + Ephemeral: true, CreatedAt: now, UpdatedAt: now, } @@ -228,7 +228,7 @@ func TestThreadTraversalBranching(t 
*testing.T) { IssueType: types.TypeMessage, Assignee: "user", Sender: "sender", - Wisp: true, + Ephemeral: true, CreatedAt: now, UpdatedAt: now, } @@ -245,7 +245,7 @@ func TestThreadTraversalBranching(t *testing.T) { IssueType: types.TypeMessage, Assignee: "sender", Sender: "user", - Wisp: true, + Ephemeral: true, CreatedAt: now.Add(time.Minute), UpdatedAt: now.Add(time.Minute), } @@ -261,7 +261,7 @@ func TestThreadTraversalBranching(t *testing.T) { IssueType: types.TypeMessage, Assignee: "sender", Sender: "another-user", - Wisp: true, + Ephemeral: true, CreatedAt: now.Add(2 * time.Minute), UpdatedAt: now.Add(2 * time.Minute), } @@ -364,7 +364,7 @@ func TestThreadTraversalOnlyRepliesTo(t *testing.T) { IssueType: types.TypeMessage, Assignee: "user", Sender: "sender", - Wisp: true, + Ephemeral: true, CreatedAt: now, UpdatedAt: now, } @@ -380,7 +380,7 @@ func TestThreadTraversalOnlyRepliesTo(t *testing.T) { IssueType: types.TypeMessage, Assignee: "user", Sender: "sender", - Wisp: true, + Ephemeral: true, CreatedAt: now.Add(time.Minute), UpdatedAt: now.Add(time.Minute), } diff --git a/cmd/bd/wisp.go b/cmd/bd/wisp.go index 2ce28e14..b914924a 100644 --- a/cmd/bd/wisp.go +++ b/cmd/bd/wisp.go @@ -18,33 +18,43 @@ import ( // Wisp commands - manage ephemeral molecules // -// Wisps are ephemeral issues with Wisp=true in the main database. +// Wisps are ephemeral issues with Ephemeral=true in the main database. // They're used for patrol cycles and operational loops that shouldn't // be exported to JSONL (and thus not synced via git). // // Commands: -// bd wisp list - List all wisps in current context -// bd wisp gc - Garbage collect orphaned wisps +// bd mol wisp list - List all wisps in current context +// bd mol wisp gc - Garbage collect orphaned wisps var wispCmd = &cobra.Command{ - Use: "wisp", - Short: "Manage ephemeral molecules (wisps)", - Long: `Manage wisps - ephemeral molecules for operational workflows. 
+ Use: "wisp [proto-id]", + Short: "Create or manage wisps (ephemeral molecules)", + Long: `Create or manage wisps - ephemeral molecules for operational workflows. -Wisps are issues with Wisp=true in the main database. They're stored +When called with a proto-id argument, creates a wisp from that proto. +When called with a subcommand (list, gc), manages existing wisps. + +Wisps are issues with Ephemeral=true in the main database. They're stored locally but NOT exported to JSONL (and thus not synced via git). They're used for patrol cycles, operational loops, and other workflows that shouldn't accumulate in the shared issue database. The wisp lifecycle: - 1. Create: bd wisp create or bd create --wisp - 2. Execute: Normal bd operations work on wisps - 3. Squash: bd mol squash (clears Wisp flag, promotes to persistent) - 4. Or burn: bd mol burn (deletes wisp without creating digest) + 1. Create: bd mol wisp or bd create --ephemeral + 2. Execute: Normal bd operations work on wisp issues + 3. Squash: bd mol squash (clears Ephemeral flag, promotes to persistent) + 4. 
Or burn: bd mol burn (deletes without creating digest) -Commands: +Examples: + bd mol wisp mol-patrol # Create wisp from proto + bd mol wisp list # List all wisps + bd mol wisp gc # Garbage collect old wisps + +Subcommands: list List all wisps in current context gc Garbage collect orphaned wisps`, + Args: cobra.MaximumNArgs(1), + Run: runWisp, } // WispListItem represents a wisp in list output @@ -68,32 +78,44 @@ type WispListResult struct { // OldThreshold is how old a wisp must be to be flagged as old (time-based, for ephemeral cleanup) const OldThreshold = 24 * time.Hour -// wispCreateCmd instantiates a proto as an ephemeral wisp +// runWisp handles the wisp command when called directly with a proto-id +// It delegates to runWispCreate for the actual work +func runWisp(cmd *cobra.Command, args []string) { + if len(args) == 0 { + // No proto-id provided, show help + cmd.Help() + return + } + // Delegate to the create logic + runWispCreate(cmd, args) +} + +// wispCreateCmd instantiates a proto as an ephemeral wisp (kept for backwards compat) var wispCreateCmd = &cobra.Command{ Use: "create <proto-id>", - Short: "Instantiate a proto as an ephemeral wisp (solid -> vapor)", + Short: "Instantiate a proto as a wisp (solid -> vapor)", Long: `Create a wisp from a proto - sublimation from solid to vapor. This is the chemistry-inspired command for creating ephemeral work from templates. -The resulting wisp is stored in the main database with Wisp=true and NOT exported to JSONL. +The resulting wisp is stored in the main database with Ephemeral=true and NOT exported to JSONL.
Phase transition: Proto (solid) -> Wisp (vapor) -Use wisp create for: +Use wisp for: - Patrol cycles (deacon, witness) - Health checks and monitoring - One-shot orchestration runs - Routine operations with no audit value The wisp will: - - Be stored in main database with Wisp=true flag + - Be stored in main database with Ephemeral=true flag - NOT be exported to JSONL (and thus not synced via git) - Either evaporate (burn) or condense to digest (squash) Examples: - bd wisp create mol-patrol # Ephemeral patrol cycle - bd wisp create mol-health-check # One-time health check - bd wisp create mol-diagnostics --var target=db # Diagnostic run`, + bd mol wisp create mol-patrol # Ephemeral patrol cycle + bd mol wisp create mol-health-check # One-time health check + bd mol wisp create mol-diagnostics --var target=db # Diagnostic run`, Args: cobra.ExactArgs(1), Run: runWispCreate, } @@ -107,7 +129,7 @@ func runWispCreate(cmd *cobra.Command, args []string) { if store == nil { if daemonClient != nil { fmt.Fprintf(os.Stderr, "Error: wisp create requires direct database access\n") - fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon wisp create %s ...\n", args[0]) + fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon mol wisp %s ...\n", args[0]) } else { fmt.Fprintf(os.Stderr, "Error: no database connection\n") } @@ -215,7 +237,7 @@ func runWispCreate(cmd *cobra.Command, args []string) { if dryRun { fmt.Printf("\nDry run: would create wisp with %d issues from proto %s\n\n", len(subgraph.Issues), protoID) - fmt.Printf("Storage: main database (wisp=true, not exported to JSONL)\n\n") + fmt.Printf("Storage: main database (ephemeral=true, not exported to JSONL)\n\n") for _, issue := range subgraph.Issues { newTitle := substituteVariables(issue.Title, vars) fmt.Printf(" - %s (from %s)\n", newTitle, issue.ID) @@ -223,15 +245,15 @@ func runWispCreate(cmd *cobra.Command, args []string) { return } - // Spawn as wisp in main database (ephemeral=true sets Wisp 
flag, skips JSONL export) - // bd-hobo: Use "wisp" prefix for distinct visual recognition - result, err := spawnMolecule(ctx, store, subgraph, vars, "", actor, true, "wisp") + // Spawn as ephemeral in main database (Ephemeral=true, skips JSONL export) + // bd-hobo: Use "eph" prefix for distinct visual recognition + result, err := spawnMolecule(ctx, store, subgraph, vars, "", actor, true, "eph") if err != nil { fmt.Fprintf(os.Stderr, "Error creating wisp: %v\n", err) os.Exit(1) } - // Wisps are in main db but don't trigger JSONL export (Wisp flag excludes them) + // Wisp issues are in main db but don't trigger JSONL export (Ephemeral flag excludes them) if jsonOutput { type wispCreateResult struct { @@ -286,9 +308,9 @@ func resolvePartialIDDirect(ctx context.Context, partial string) (string, error) var wispListCmd = &cobra.Command{ Use: "list", Short: "List all wisps in current context", - Long: `List all ephemeral molecules (wisps) in the current context. + Long: `List all wisps (ephemeral molecules) in the current context. -Wisps are issues with Wisp=true in the main database. They are stored +Wisps are issues with Ephemeral=true in the main database. They are stored locally but not exported to JSONL (and thus not synced via git). 
The list shows: @@ -300,12 +322,12 @@ The list shows: Old wisp detection: - Old wisps haven't been updated in 24+ hours - - Use 'bd wisp gc' to clean up old/abandoned wisps + - Use 'bd mol wisp gc' to clean up old/abandoned wisps Examples: - bd wisp list # List all wisps - bd wisp list --json # JSON output for programmatic use - bd wisp list --all # Include closed wisps`, + bd mol wisp list # List all wisps + bd mol wisp list --json # JSON output for programmatic use + bd mol wisp list --all # Include closed wisps`, Run: runWispList, } @@ -327,15 +349,15 @@ func runWispList(cmd *cobra.Command, args []string) { return } - // Query wisps from main database using Wisp filter - wispFlag := true + // Query wisps from main database using Ephemeral filter + ephemeralFlag := true var issues []*types.Issue var err error if daemonClient != nil { // Use daemon RPC resp, rpcErr := daemonClient.List(&rpc.ListArgs{ - Wisp: &wispFlag, + Ephemeral: &ephemeralFlag, }) if rpcErr != nil { err = rpcErr @@ -347,7 +369,7 @@ func runWispList(cmd *cobra.Command, args []string) { } else { // Direct database access filter := types.IssueFilter{ - Wisp: &wispFlag, + Ephemeral: &ephemeralFlag, } issues, err = store.SearchIssues(ctx, "", filter) } @@ -444,7 +466,7 @@ func runWispList(cmd *cobra.Command, args []string) { if oldCount > 0 { fmt.Printf("\n%s %d old wisp(s) (not updated in 24+ hours)\n", ui.RenderWarn("⚠"), oldCount) - fmt.Println(" Hint: Use 'bd wisp gc' to clean up old wisps") + fmt.Println(" Hint: Use 'bd mol wisp gc' to clean up old wisps") } } @@ -493,10 +515,10 @@ Note: This uses time-based cleanup, appropriate for ephemeral wisps. For graph-pressure staleness detection (blocking other work), see 'bd mol stale'. 
Examples: - bd wisp gc # Clean abandoned wisps (default: 1h threshold) - bd wisp gc --dry-run # Preview what would be cleaned - bd wisp gc --age 24h # Custom age threshold - bd wisp gc --all # Also clean closed wisps older than threshold`, + bd mol wisp gc # Clean abandoned wisps (default: 1h threshold) + bd mol wisp gc --dry-run # Preview what would be cleaned + bd mol wisp gc --age 24h # Custom age threshold + bd mol wisp gc --all # Also clean closed wisps older than threshold`, Run: runWispGC, } @@ -532,17 +554,17 @@ func runWispGC(cmd *cobra.Command, args []string) { if store == nil { if daemonClient != nil { fmt.Fprintf(os.Stderr, "Error: wisp gc requires direct database access\n") - fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon wisp gc\n") + fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon mol wisp gc\n") } else { fmt.Fprintf(os.Stderr, "Error: no database connection\n") } os.Exit(1) } - // Query wisps from main database using Wisp filter - wispFlag := true + // Query wisps from main database using Ephemeral filter + ephemeralFlag := true filter := types.IssueFilter{ - Wisp: &wispFlag, + Ephemeral: &ephemeralFlag, } issues, err := store.SearchIssues(ctx, "", filter) if err != nil { @@ -634,7 +656,11 @@ func runWispGC(cmd *cobra.Command, args []string) { } func init() { - // Wisp create command flags + // Wisp command flags (for direct create: bd mol wisp ) + wispCmd.Flags().StringSlice("var", []string{}, "Variable substitution (key=value)") + wispCmd.Flags().Bool("dry-run", false, "Preview what would be created") + + // Wisp create command flags (kept for backwards compat: bd mol wisp create ) wispCreateCmd.Flags().StringSlice("var", []string{}, "Variable substitution (key=value)") wispCreateCmd.Flags().Bool("dry-run", false, "Preview what would be created") @@ -647,5 +673,5 @@ func init() { wispCmd.AddCommand(wispCreateCmd) wispCmd.AddCommand(wispListCmd) wispCmd.AddCommand(wispGCCmd) - rootCmd.AddCommand(wispCmd) + 
molCmd.AddCommand(wispCmd) } diff --git a/cmd/bd/worktree_daemon_test.go b/cmd/bd/worktree_daemon_test.go index 5d38b3d7..5887e0d3 100644 --- a/cmd/bd/worktree_daemon_test.go +++ b/cmd/bd/worktree_daemon_test.go @@ -81,6 +81,7 @@ func TestShouldDisableDaemonForWorktree(t *testing.T) { if err := os.Chdir(worktreeDir); err != nil { t.Fatalf("Failed to change to worktree dir: %v", err) } + git.ResetCaches() // Reset git caches after changing directory (required for IsWorktree to re-detect) git.ResetCaches() @@ -120,6 +121,7 @@ func TestShouldDisableDaemonForWorktree(t *testing.T) { if err := os.Chdir(worktreeDir); err != nil { t.Fatalf("Failed to change to worktree dir: %v", err) } + git.ResetCaches() // Reset git caches after changing directory git.ResetCaches() @@ -155,6 +157,7 @@ func TestShouldDisableDaemonForWorktree(t *testing.T) { if err := os.Chdir(worktreeDir); err != nil { t.Fatalf("Failed to change to worktree dir: %v", err) } + git.ResetCaches() // Reset git caches after changing directory git.ResetCaches() @@ -209,6 +212,7 @@ func TestShouldAutoStartDaemonWorktreeIntegration(t *testing.T) { if err := os.Chdir(worktreeDir); err != nil { t.Fatalf("Failed to change to worktree dir: %v", err) } + git.ResetCaches() // Reset git caches after changing directory git.ResetCaches() @@ -246,6 +250,7 @@ func TestShouldAutoStartDaemonWorktreeIntegration(t *testing.T) { if err := os.Chdir(worktreeDir); err != nil { t.Fatalf("Failed to change to worktree dir: %v", err) } + git.ResetCaches() // Reset git caches after changing directory git.ResetCaches() @@ -283,6 +288,7 @@ func TestShouldAutoStartDaemonWorktreeIntegration(t *testing.T) { if err := os.Chdir(worktreeDir); err != nil { t.Fatalf("Failed to change to worktree dir: %v", err) } + git.ResetCaches() // Reset git caches after changing directory git.ResetCaches() diff --git a/commands/comments.md b/commands/comments.md index 0cffe659..3b0e186c 100644 --- a/commands/comments.md +++ b/commands/comments.md @@ -5,6 
+5,8 @@ argument-hint: [issue-id] View or add comments to a beads issue. +Comments are separate from issue properties (title, description, etc.) because they serve a different purpose: they're a **discussion thread** rather than **singular editable fields**. Use `bd comments` for threaded conversations and `bd edit` for core issue metadata. + ## View Comments To view all comments on an issue: diff --git a/commands/update.md b/commands/update.md index 535e1704..9e239ffb 100644 --- a/commands/update.md +++ b/commands/update.md @@ -16,6 +16,8 @@ If arguments are missing, ask the user for: Use the beads MCP `update` tool to apply the changes. Show the updated issue to confirm the change. +**Note:** Comments are managed separately with `bd comments add`. The `update` command is for singular, versioned properties (title, status, priority, etc.), while comments form a discussion thread that's appended to, not updated. + Common workflows: - Start work: Update status to `in_progress` - Mark blocked: Update status to `blocked` diff --git a/docs/ARCHITECTURE.md b/docs/ARCHITECTURE.md index 7547b967..ce2e4ff8 100644 --- a/docs/ARCHITECTURE.md +++ b/docs/ARCHITECTURE.md @@ -275,7 +275,7 @@ open ──▶ in_progress ──▶ closed ``` ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ bd wisp create │───▶│ Wisp Issues │───▶│ bd mol squash │ +│ bd mol wisp │───▶│ Wisp Issues │───▶│ bd mol squash │ │ (from template) │ │ (local-only) │ │ (→ digest) │ └─────────────────┘ └─────────────────┘ └─────────────────┘ ``` diff --git a/docs/CLI_REFERENCE.md b/docs/CLI_REFERENCE.md index 960cd74a..f5b7513c 100644 --- a/docs/CLI_REFERENCE.md +++ b/docs/CLI_REFERENCE.md @@ -350,8 +350,8 @@ Beads uses a chemistry metaphor for template-based workflows. 
See [MOLECULES.md] | Phase | State | Storage | Command | |-------|-------|---------|---------| | Solid | Proto | `.beads/` | `bd mol catalog` | -| Liquid | Mol | `.beads/` | `bd pour` | -| Vapor | Wisp | `.beads/` (Wisp=true, not exported) | `bd wisp create` | +| Liquid | Mol | `.beads/` | `bd mol pour` | +| Vapor | Wisp | `.beads/` (Ephemeral=true, not exported) | `bd mol wisp` | ### Proto/Template Commands @@ -370,32 +370,32 @@ bd mol distill --json ```bash # Instantiate proto as persistent mol (solid → liquid) -bd pour --var key=value --json +bd mol pour --var key=value --json # Preview what would be created -bd pour --var key=value --dry-run +bd mol pour --var key=value --dry-run # Assign root issue -bd pour --var key=value --assignee alice --json +bd mol pour --var key=value --assignee alice --json # Attach additional protos during pour -bd pour --attach --json +bd mol pour --attach --json ``` ### Wisp Commands ```bash # Instantiate proto as ephemeral wisp (solid → vapor) -bd wisp create --var key=value --json +bd mol wisp --var key=value --json # List all wisps -bd wisp list --json -bd wisp list --all --json # Include closed +bd mol wisp list --json +bd mol wisp list --all --json # Include closed # Garbage collect orphaned wisps -bd wisp gc --json -bd wisp gc --age 24h --json # Custom age threshold -bd wisp gc --dry-run # Preview what would be cleaned +bd mol wisp gc --json +bd mol wisp gc --age 24h --json # Custom age threshold +bd mol wisp gc --dry-run # Preview what would be cleaned ``` ### Bonding (Combining Work) @@ -424,29 +424,29 @@ bd mol bond --dry-run ```bash # Compress wisp to permanent digest -bd mol squash --json +bd mol squash --json # With agent-provided summary -bd mol squash --summary "Work completed" --json +bd mol squash --summary "Work completed" --json # Preview -bd mol squash --dry-run +bd mol squash --dry-run # Keep wisp children after squash -bd mol squash --keep-children --json +bd mol squash --keep-children --json ``` ### Burn 
(Discard Wisp) ```bash # Delete wisp without digest (destructive) -bd mol burn --json +bd mol burn --json # Preview -bd mol burn --dry-run +bd mol burn --dry-run # Skip confirmation -bd mol burn --force --json +bd mol burn --force --json ``` **Note:** Most mol commands require `--no-daemon` flag when daemon is running. diff --git a/docs/DELETIONS.md b/docs/DELETIONS.md index 2d067e93..4fd55d97 100644 --- a/docs/DELETIONS.md +++ b/docs/DELETIONS.md @@ -202,7 +202,7 @@ The 1-hour grace period ensures tombstones propagate even with minor clock drift ## Wisps: Intentional Tombstone Bypass -**Wisps** (ephemeral issues created by `bd wisp create`) are intentionally excluded from tombstone tracking. +**Wisps** (ephemeral issues created by `bd mol wisp`) are intentionally excluded from tombstone tracking. ### Why Wisps Don't Need Tombstones diff --git a/docs/MOLECULES.md b/docs/MOLECULES.md index 4dba4cec..172cb178 100644 --- a/docs/MOLECULES.md +++ b/docs/MOLECULES.md @@ -128,8 +128,8 @@ For reusable workflows, beads uses a chemistry metaphor: ### Phase Commands ```bash -bd pour # Proto → Mol (persistent instance) -bd wisp create # Proto → Wisp (ephemeral instance) +bd mol pour # Proto → Mol (persistent instance) +bd mol wisp # Proto → Wisp (ephemeral instance) bd mol squash # Mol/Wisp → Digest (permanent record) bd mol burn # Wisp → nothing (discard) ``` @@ -227,10 +227,10 @@ bd close --reason "Done" Wisps accumulate if not squashed/burned: ```bash -bd wisp list # Check for orphans -bd mol squash # Create digest -bd mol burn # Or discard -bd wisp gc # Garbage collect old wisps +bd mol wisp list # Check for orphans +bd mol squash # Create digest +bd mol burn # Or discard +bd mol wisp gc # Garbage collect old wisps ``` ## Layer Cake Architecture @@ -272,8 +272,8 @@ bd dep tree # Show dependency tree ### Molecules ```bash -bd pour --var k=v # Template → persistent mol -bd wisp create # Template → ephemeral wisp +bd mol pour --var k=v # Template → persistent mol +bd mol wisp 
# Template → ephemeral wisp bd mol bond A B # Connect work graphs bd mol squash # Compress to digest bd mol burn # Discard without record diff --git a/docs/pr-752-chaos-testing-review.md b/docs/pr-752-chaos-testing-review.md new file mode 100644 index 00000000..b6315182 --- /dev/null +++ b/docs/pr-752-chaos-testing-review.md @@ -0,0 +1,204 @@ +# PR #752 Chaos Testing Review + +**PR**: https://github.com/steveyegge/beads/pull/752 +**Author**: jordanhubbard +**Bead**: bd-kx1j +**Status**: Under Review + +## Summary + +Jordan proposes adding chaos testing and E2E test coverage to beads. The PR: +- Adds 4849 lines, removes 511 lines +- Introduces chaos testing framework (random corruption, disk space exhaustion, NFS-like failures) +- Creates side databases for testing recovery scenarios +- Adds E2E tests tracking documented user scenarios +- Brings code coverage to ~48% + +## Key Question from Jordan + +> "Is this level of testing something you actually want with the current pace of progress? 
+> It comes with an implied obligation to update and add to the tests as well as follow +> the CICD feedback in github (very spammy if your tests don't pass!)" + +## Files Changed (Major Categories) + +### Chaos/Doctor Infrastructure +- `cmd/bd/doctor_repair_chaos_test.go` (378 lines) - Core chaos testing +- `cmd/bd/doctor/fix/database_integrity.go` (116 lines) - DB integrity fixes +- `cmd/bd/doctor/fix/jsonl_integrity.go` (87 lines) - JSONL integrity fixes +- `cmd/bd/doctor/fix/fs.go` (57 lines) - Filesystem fault injection +- `cmd/bd/doctor/fix/sqlite_open.go` (52 lines) - SQLite open handling +- `cmd/bd/doctor/jsonl_integrity.go` (123 lines) - JSONL checks +- `cmd/bd/doctor/git.go` (168 additions) - Git hygiene checks + +### Test Coverage Additions +- `internal/storage/memory/memory_more_coverage_test.go` (921 lines) - Memory storage tests +- `cmd/bd/cli_coverage_show_test.go` (426 lines) - CLI show command tests +- `cmd/bd/daemon_autostart_unit_test.go` (331 lines) - Daemon autostart tests +- `internal/rpc/client_gate_shutdown_test.go` (107 lines) - RPC client tests +- Various other test files + +### Bug Fixes Discovered During Testing +- `internal/storage/sqlite/migrations/021_migrate_edge_fields.go` - Major migration fix +- `internal/storage/sqlite/migrations/022_drop_edge_columns.go` - Column cleanup +- `internal/storage/sqlite/migrations_template_pinned_regression_test.go` - Regression test + +## Tradeoffs + +### Costs +1. **Maintenance burden**: Must keep coverage above 48% (or whatever threshold is set) +2. **CI noise**: Failed tests = spam until fixed +3. **Velocity tax**: Every change needs test updates +4. **Complexity**: Chaos testing framework itself needs maintenance + +### Benefits +1. **Robustness validation**: Proves beads can recover from corruption +2. **Bug discovery**: Already found migration bugs (021, 022) +3. **Confidence**: If chaos tests pass, beads is more robust than feared +4. 
**Documentation**: E2E tests document expected user scenarios +5. **Regression prevention**: Future changes caught before release + +## Initial Assessment + +**Implementation Quality: HIGH** + +The chaos testing code is well-structured. Key observations: + +### What the Chaos Tests Actually Cover + +From `doctor_repair_chaos_test.go`: + +1. **Complete DB corruption** - Writes "not a database" garbage, verifies recovery from JSONL +2. **Truncated DB without JSONL** - Tests graceful failure when no recovery source exists +3. **Sidecar file backup** - Ensures -wal, -shm, -journal files are preserved during repair +4. **Repair with running daemon** - Tests recovery while daemon holds locks +5. **JSONL integrity** - Malformed lines, re-export from DB + +Each test: +- Uses isolated temp directories +- Builds a fresh `bd` binary for testing +- Uses "side databases" (separate from real data) +- Has proper cleanup + +### Bug Fixes Already Discovered + +The PR includes fixes for bugs found during testing: +- Migration 021/022: `pinned` and `is_template` columns were being clobbered +- Regression test added to prevent recurrence + +### Test Coverage Structure + +Tests are organized by build tags: +- `//go:build chaos` - Chaos/corruption tests (run separately) +- `//go:build e2e` - End-to-end CLI tests +- Regular unit tests - No build tag required + +This means chaos tests only run when explicitly requested, not on every `go test`. + +--- + +## Deep Analysis (Ultrathink) + +### The Core Question + +Is the testing worth the ongoing maintenance cost? + +### Argument FOR Merging + +1. **Beads is more robust than feared**. If Jordan got these tests passing, it means: + - `bd doctor` actually recovers from corruption + - JSONL/DB sync is working correctly + - Migration edge cases are handled + + This validates the core design: SQLite + JSONL + git backstop. + +2. **Bugs already found**. 
The migration 021/022 bugs are exactly the kind of subtle + issues that would cause data loss in production. Finding them now is worth something. + +3. **Build tag isolation**. Chaos tests won't slow down regular development: + ```bash + go test ./... # Normal tests only + go test -tags=chaos ./... # Include chaos tests + go test -tags=e2e ./... # Include E2E tests + ``` + +4. **48% coverage is a floor, not a target**. The PR doesn't enforce maintaining 48%. + Jordan is asking: "Is this level worth it?" We can always add more later, or let + coverage drift if priorities change. + +5. **Documentation value**. E2E tests document expected user scenarios. When an AI agent + asks "what should happen when X?", the tests provide executable answers. + +### Argument AGAINST Merging + +1. **Velocity tax is real**. Every behavior change needs test updates. This is especially + painful during rapid iteration phases. + +2. **CI noise**. Failed tests block merges. With multiple agents working, flaky tests + become coordination bottlenecks. + +3. **Framework maintenance**. The chaos testing framework itself (side databases, build + tags, test helpers) becomes another thing to maintain. + +4. **False confidence**. Tests passing doesn't mean beads is production-ready. It means + tested scenarios work. Edge cases not covered still fail silently. + +### The Real Question: What Phase Are We In? + +**If beads is still in "rapid prototype" phase**: The testing overhead is premature. +Focus on features, fix crashes as they happen, lean on git backstop. + +**If beads is approaching "reliable tool" phase**: Testing is essential. Multi-agent +workflows amplify bugs. Corruption during a 10-agent batch is expensive. + +**Current reality**: Beads is being dogfooded seriously. Multiple agents, real work, +real data loss when things break. We're closer to "reliable tool" than "prototype." 
+ +### ROI Calculation + +**Cost of NOT testing**: When corruption happens: +- Agent loses context (30-60 min recovery) +- Human has to debug (variable, often 15-60 min) +- Trust erosion (hard to quantify) + +**Cost of testing**: +- Review this PR (1-2 hours, one time) +- Update tests when behavior changes (5-15 min per change) +- Fix flaky tests when they appear (variable) + +If corruption happens once a month, testing ROI is marginal. +If corruption happens weekly (or with each new feature), testing pays for itself. + +--- + +## Recommendation + +**MERGE WITH MODIFICATIONS** + +### Why Merge + +1. The implementation quality is high +2. Bugs already found justify the effort +3. Build tag isolation minimizes velocity impact +4. Beads is past the prototype phase + +### Suggested Modifications + +1. **No hard coverage threshold in CI**. Let coverage drift naturally. The value is in + the chaos tests catching corruption, not in hitting a percentage. + +2. **Chaos tests optional in CI**. Run chaos tests on release branches, not every PR. + This reduces CI noise during active development. + +3. **Clear ownership**. Jordan should document how to add new chaos scenarios. Future + contributors need to know when to add vs skip tests. + +### Decision Framework for User + +If you answer YES to 2+ of these, merge: +- [ ] Are you dogfooding beads for real work? +- [ ] Has corruption caused you to lose time in the last month? +- [ ] Do you expect multiple agents using beads concurrently? +- [ ] Is beads approaching a "v1.0" milestone? + +If you answer NO to all, defer the PR until beads stabilizes. 
diff --git a/internal/beads/beads_hash_multiclone_test.go b/internal/beads/beads_hash_multiclone_test.go index 89ee842a..ea5a4383 100644 --- a/internal/beads/beads_hash_multiclone_test.go +++ b/internal/beads/beads_hash_multiclone_test.go @@ -48,10 +48,10 @@ func TestMain(m *testing.M) { fmt.Fprintf(os.Stderr, "Failed to build bd binary: %v\n%s\n", err, out) os.Exit(1) } - + // Optimize git for tests os.Setenv("GIT_CONFIG_NOSYSTEM", "1") - + os.Exit(m.Run()) } @@ -85,35 +85,35 @@ func TestHashIDs_MultiCloneConverge(t *testing.T) { } t.Parallel() tmpDir := testutil.TempDirInMemory(t) - + bdPath := getBDPath() if _, err := os.Stat(bdPath); err != nil { t.Fatalf("bd binary not found at %s", bdPath) } - + // Setup remote and 3 clones remoteDir := setupBareRepo(t, tmpDir) cloneA := setupClone(t, tmpDir, remoteDir, "A", bdPath) cloneB := setupClone(t, tmpDir, remoteDir, "B", bdPath) cloneC := setupClone(t, tmpDir, remoteDir, "C", bdPath) - + // Each clone creates unique issue (different content = different hash ID) createIssueInClone(t, cloneA, "Issue from clone A") createIssueInClone(t, cloneB, "Issue from clone B") createIssueInClone(t, cloneC, "Issue from clone C") - + // Sync all clones once (hash IDs prevent collisions, don't need multiple rounds) for _, clone := range []string{cloneA, cloneB, cloneC} { runCmdOutputWithEnvAllowError(t, clone, map[string]string{"BEADS_NO_DAEMON": "1"}, true, bdPath, "sync") } - + // Verify all clones have all 3 issues expectedTitles := map[string]bool{ "Issue from clone A": true, "Issue from clone B": true, "Issue from clone C": true, } - + allConverged := true for name, dir := range map[string]string{"A": cloneA, "B": cloneB, "C": cloneC} { titles := getTitlesFromClone(t, dir) @@ -122,7 +122,7 @@ func TestHashIDs_MultiCloneConverge(t *testing.T) { allConverged = false } } - + if allConverged { t.Log("✓ All 3 clones converged with hash-based IDs") } else { @@ -138,26 +138,26 @@ func TestHashIDs_IdenticalContentDedup(t *testing.T) { } 
t.Parallel() tmpDir := testutil.TempDirInMemory(t) - + bdPath := getBDPath() if _, err := os.Stat(bdPath); err != nil { t.Fatalf("bd binary not found at %s", bdPath) } - + // Setup remote and 2 clones remoteDir := setupBareRepo(t, tmpDir) cloneA := setupClone(t, tmpDir, remoteDir, "A", bdPath) cloneB := setupClone(t, tmpDir, remoteDir, "B", bdPath) - + // Both clones create identical issue (same content = same hash ID) createIssueInClone(t, cloneA, "Identical issue") createIssueInClone(t, cloneB, "Identical issue") - + // Sync both clones once (hash IDs handle dedup automatically) for _, clone := range []string{cloneA, cloneB} { runCmdOutputWithEnvAllowError(t, clone, map[string]string{"BEADS_NO_DAEMON": "1"}, true, bdPath, "sync") } - + // Verify both clones have exactly 1 issue (deduplication worked) for name, dir := range map[string]string{"A": cloneA, "B": cloneB} { titles := getTitlesFromClone(t, dir) @@ -168,7 +168,7 @@ func TestHashIDs_IdenticalContentDedup(t *testing.T) { t.Errorf("Clone %s missing expected issue: %v", name, sortedKeys(titles)) } } - + t.Log("✓ Identical content deduplicated correctly with hash-based IDs") } @@ -177,36 +177,36 @@ func TestHashIDs_IdenticalContentDedup(t *testing.T) { func setupBareRepo(t *testing.T, tmpDir string) string { t.Helper() remoteDir := filepath.Join(tmpDir, "remote.git") - runCmd(t, tmpDir, "git", "init", "--bare", remoteDir) - + runCmd(t, tmpDir, "git", "init", "--bare", "-b", "master", remoteDir) + tempClone := filepath.Join(tmpDir, "temp-init") runCmd(t, tmpDir, "git", "clone", remoteDir, tempClone) runCmd(t, tempClone, "git", "commit", "--allow-empty", "-m", "Initial commit") runCmd(t, tempClone, "git", "push", "origin", "master") - + return remoteDir } func setupClone(t *testing.T, tmpDir, remoteDir, name, bdPath string) string { t.Helper() cloneDir := filepath.Join(tmpDir, "clone-"+strings.ToLower(name)) - + // Use shallow, shared clones for speed runCmd(t, tmpDir, "git", "clone", "--shared", "--depth=1", 
"--no-tags", remoteDir, cloneDir) - + // Disable hooks to avoid overhead emptyHooks := filepath.Join(cloneDir, ".empty-hooks") os.MkdirAll(emptyHooks, 0755) runCmd(t, cloneDir, "git", "config", "core.hooksPath", emptyHooks) - + // Speed configs runCmd(t, cloneDir, "git", "config", "gc.auto", "0") runCmd(t, cloneDir, "git", "config", "core.fsync", "false") runCmd(t, cloneDir, "git", "config", "commit.gpgSign", "false") - + bdCmd := getBDCommand() copyFile(t, bdPath, filepath.Join(cloneDir, filepath.Base(bdCmd))) - + if name == "A" { runCmd(t, cloneDir, bdCmd, "init", "--quiet", "--prefix", "test") runCmd(t, cloneDir, "git", "add", ".beads") @@ -216,7 +216,7 @@ func setupClone(t *testing.T, tmpDir, remoteDir, name, bdPath string) string { runCmd(t, cloneDir, "git", "pull", "origin", "master") runCmd(t, cloneDir, bdCmd, "init", "--quiet", "--prefix", "test") } - + return cloneDir } @@ -231,13 +231,13 @@ func getTitlesFromClone(t *testing.T, cloneDir string) map[string]bool { "BEADS_NO_DAEMON": "1", "BD_NO_AUTO_IMPORT": "1", }, getBDCommand(), "list", "--json") - + jsonStart := strings.Index(listJSON, "[") if jsonStart == -1 { return make(map[string]bool) } listJSON = listJSON[jsonStart:] - + var issues []struct { Title string `json:"title"` } @@ -245,7 +245,7 @@ func getTitlesFromClone(t *testing.T, cloneDir string) map[string]bool { t.Logf("Failed to parse JSON: %v", err) return make(map[string]bool) } - + titles := make(map[string]bool) for _, issue := range issues { titles[issue.Title] = true @@ -280,7 +280,7 @@ func installGitHooks(t *testing.T, repoDir string) { hooksDir := filepath.Join(repoDir, ".git", "hooks") // Ensure POSIX-style path for sh scripts (even on Windows) bdCmd := strings.ReplaceAll(getBDCommand(), "\\", "/") - + preCommit := fmt.Sprintf(`#!/bin/sh %s --no-daemon export -o .beads/issues.jsonl >/dev/null 2>&1 || true git add .beads/issues.jsonl >/dev/null 2>&1 || true diff --git a/internal/hooks/hooks_test.go b/internal/hooks/hooks_test.go index 
db4204e5..4e73ab3f 100644 --- a/internal/hooks/hooks_test.go +++ b/internal/hooks/hooks_test.go @@ -336,8 +336,8 @@ func TestRun_Async(t *testing.T) { outputFile := filepath.Join(tmpDir, "async_output.txt") // Create a hook that writes to a file - hookScript := `#!/bin/sh -echo "async" > ` + outputFile + hookScript := "#!/bin/sh\n" + + "echo \"async\" > \"" + outputFile + "\"\n" if err := os.WriteFile(hookPath, []byte(hookScript), 0755); err != nil { t.Fatalf("Failed to create hook file: %v", err) } @@ -348,15 +348,17 @@ echo "async" > ` + outputFile // Run should return immediately runner.Run(EventClose, issue) - // Wait for the async hook to complete with retries + // Wait for the async hook to complete with retries. + // Under high test load the goroutine scheduling + exec can be delayed. var output []byte var err error - for i := 0; i < 10; i++ { - time.Sleep(100 * time.Millisecond) + deadline := time.Now().Add(3 * time.Second) + for time.Now().Before(deadline) { output, err = os.ReadFile(outputFile) if err == nil { break } + time.Sleep(50 * time.Millisecond) } if err != nil { diff --git a/internal/routing/routes.go b/internal/routing/routes.go index ae823498..d22967c8 100644 --- a/internal/routing/routes.go +++ b/internal/routing/routes.go @@ -67,6 +67,49 @@ func ExtractPrefix(id string) string { return id[:idx+1] // Include the hyphen } +// ExtractProjectFromPath extracts the project name from a route path. +// For "beads/mayor/rig", returns "beads". +// For "gastown/crew/max", returns "gastown". +func ExtractProjectFromPath(path string) string { + // Get the first component of the path + parts := strings.Split(path, "/") + if len(parts) > 0 && parts[0] != "" { + return parts[0] + } + return "" +} + +// ResolveToExternalRef attempts to convert a foreign issue ID to an external reference +// using routes.jsonl for prefix-based routing. +// +// If the ID's prefix matches a route, returns "external:<project>:<id>". +// Otherwise, returns empty string (no route found).
+// +// Example: If routes.jsonl has {"prefix": "bd-", "path": "beads/mayor/rig"} +// then ResolveToExternalRef("bd-abc", beadsDir) returns "external:beads:bd-abc" +func ResolveToExternalRef(id, beadsDir string) string { + routes, err := LoadRoutes(beadsDir) + if err != nil || len(routes) == 0 { + return "" + } + + prefix := ExtractPrefix(id) + if prefix == "" { + return "" + } + + for _, route := range routes { + if route.Prefix == prefix { + project := ExtractProjectFromPath(route.Path) + if project != "" { + return fmt.Sprintf("external:%s:%s", project, id) + } + } + } + + return "" +} + // ResolveBeadsDirForID determines which beads directory contains the given issue ID. // It first checks the local beads directory, then consults routes.jsonl for prefix-based routing. // diff --git a/internal/routing/routing_test.go b/internal/routing/routing_test.go index 97170e88..13e19906 100644 --- a/internal/routing/routing_test.go +++ b/internal/routing/routing_test.go @@ -88,3 +88,57 @@ func TestDetectUserRole_Fallback(t *testing.T) { t.Errorf("DetectUserRole() = %v, want %v (fallback)", role, Contributor) } } + +func TestExtractPrefix(t *testing.T) { + tests := []struct { + id string + want string + }{ + {"gt-abc123", "gt-"}, + {"bd-xyz", "bd-"}, + {"hq-1234", "hq-"}, + {"abc123", ""}, // No hyphen + {"", ""}, // Empty string + {"-abc", "-"}, // Starts with hyphen + } + + for _, tt := range tests { + t.Run(tt.id, func(t *testing.T) { + got := ExtractPrefix(tt.id) + if got != tt.want { + t.Errorf("ExtractPrefix(%q) = %q, want %q", tt.id, got, tt.want) + } + }) + } +} + +func TestExtractProjectFromPath(t *testing.T) { + tests := []struct { + path string + want string + }{ + {"beads/mayor/rig", "beads"}, + {"gastown/crew/max", "gastown"}, + {"simple", "simple"}, + {"", ""}, + {"/absolute/path", ""}, // Starts with /, first component is empty + } + + for _, tt := range tests { + t.Run(tt.path, func(t *testing.T) { + got := ExtractProjectFromPath(tt.path) + if got != tt.want 
{ + t.Errorf("ExtractProjectFromPath(%q) = %q, want %q", tt.path, got, tt.want) + } + }) + } +} + +func TestResolveToExternalRef(t *testing.T) { + // This test is limited since it requires a routes.jsonl file + // Just test that it returns empty string for nonexistent directory + got := ResolveToExternalRef("bd-abc", "/nonexistent/path") + if got != "" { + t.Errorf("ResolveToExternalRef() = %q, want empty string for nonexistent path", got) + } +} diff --git a/internal/rpc/client_gate_shutdown_test.go b/internal/rpc/client_gate_shutdown_test.go new file mode 100644 index 00000000..66cacfe5 --- /dev/null +++ b/internal/rpc/client_gate_shutdown_test.go @@ -0,0 +1,107 @@ +package rpc + +import ( + "encoding/json" + "testing" + "time" + + "github.com/steveyegge/beads/internal/types" +) + +func TestClient_GateLifecycleAndShutdown(t *testing.T) { + _, client, cleanup := setupTestServer(t) + defer cleanup() + + createResp, err := client.GateCreate(&GateCreateArgs{ + Title: "Test Gate", + AwaitType: "human", + AwaitID: "", + Timeout: 5 * time.Minute, + Waiters: []string{"mayor/"}, + }) + if err != nil { + t.Fatalf("GateCreate: %v", err) + } + + var created GateCreateResult + if err := json.Unmarshal(createResp.Data, &created); err != nil { + t.Fatalf("unmarshal GateCreateResult: %v", err) + } + if created.ID == "" { + t.Fatalf("expected created gate ID") + } + + listResp, err := client.GateList(&GateListArgs{All: false}) + if err != nil { + t.Fatalf("GateList: %v", err) + } + var openGates []*types.Issue + if err := json.Unmarshal(listResp.Data, &openGates); err != nil { + t.Fatalf("unmarshal GateList: %v", err) + } + if len(openGates) != 1 || openGates[0].ID != created.ID { + t.Fatalf("unexpected open gates: %+v", openGates) + } + + showResp, err := client.GateShow(&GateShowArgs{ID: created.ID}) + if err != nil { + t.Fatalf("GateShow: %v", err) + } + var gate types.Issue + if err := json.Unmarshal(showResp.Data, &gate); err != nil { + t.Fatalf("unmarshal GateShow: %v", 
err) + } + if gate.ID != created.ID || gate.IssueType != types.TypeGate { + t.Fatalf("unexpected gate: %+v", gate) + } + + waitResp, err := client.GateWait(&GateWaitArgs{ID: created.ID, Waiters: []string{"deacon/"}}) + if err != nil { + t.Fatalf("GateWait: %v", err) + } + var waitResult GateWaitResult + if err := json.Unmarshal(waitResp.Data, &waitResult); err != nil { + t.Fatalf("unmarshal GateWaitResult: %v", err) + } + if waitResult.AddedCount != 1 { + t.Fatalf("expected 1 waiter added, got %d", waitResult.AddedCount) + } + + closeResp, err := client.GateClose(&GateCloseArgs{ID: created.ID, Reason: "done"}) + if err != nil { + t.Fatalf("GateClose: %v", err) + } + var closedGate types.Issue + if err := json.Unmarshal(closeResp.Data, &closedGate); err != nil { + t.Fatalf("unmarshal GateClose: %v", err) + } + if closedGate.Status != types.StatusClosed { + t.Fatalf("expected closed status, got %q", closedGate.Status) + } + + listResp, err = client.GateList(&GateListArgs{All: false}) + if err != nil { + t.Fatalf("GateList open: %v", err) + } + if err := json.Unmarshal(listResp.Data, &openGates); err != nil { + t.Fatalf("unmarshal GateList open: %v", err) + } + if len(openGates) != 0 { + t.Fatalf("expected no open gates, got %+v", openGates) + } + + listResp, err = client.GateList(&GateListArgs{All: true}) + if err != nil { + t.Fatalf("GateList all: %v", err) + } + if err := json.Unmarshal(listResp.Data, &openGates); err != nil { + t.Fatalf("unmarshal GateList all: %v", err) + } + if len(openGates) != 1 || openGates[0].ID != created.ID { + t.Fatalf("expected 1 total gate, got %+v", openGates) + } + + if err := client.Shutdown(); err != nil { + t.Fatalf("Shutdown: %v", err) + } +} diff --git a/internal/rpc/protocol.go b/internal/rpc/protocol.go index 00d12907..e6099c94 100644 --- a/internal/rpc/protocol.go +++ b/internal/rpc/protocol.go @@ -89,11 +89,12 @@ type CreateArgs struct { WaitsFor string `json:"waits_for,omitempty"` // Spawner issue ID to wait for WaitsForGate 
string `json:"waits_for_gate,omitempty"` // Gate type: all-children or any-children // Messaging fields (bd-kwro) - Sender string `json:"sender,omitempty"` // Who sent this (for messages) - Wisp bool `json:"wisp,omitempty"` // Wisp = ephemeral vapor from the Steam Engine; bulk-deleted when closed + Sender string `json:"sender,omitempty"` // Who sent this (for messages) + Ephemeral bool `json:"ephemeral,omitempty"` // If true, not exported to JSONL; bulk-deleted when closed RepliesTo string `json:"replies_to,omitempty"` // Issue ID for conversation threading // ID generation (bd-hobo) - IDPrefix string `json:"id_prefix,omitempty"` // Override prefix for ID generation (mol, wisp, etc.) + IDPrefix string `json:"id_prefix,omitempty"` // Override prefix for ID generation (mol, eph, etc.) + CreatedBy string `json:"created_by,omitempty"` // Who created the issue } // UpdateArgs represents arguments for the update operation @@ -114,8 +115,8 @@ type UpdateArgs struct { RemoveLabels []string `json:"remove_labels,omitempty"` SetLabels []string `json:"set_labels,omitempty"` // Messaging fields (bd-kwro) - Sender *string `json:"sender,omitempty"` // Who sent this (for messages) - Wisp *bool `json:"wisp,omitempty"` // Wisp = ephemeral vapor from the Steam Engine; bulk-deleted when closed + Sender *string `json:"sender,omitempty"` // Who sent this (for messages) + Ephemeral *bool `json:"ephemeral,omitempty"` // If true, not exported to JSONL; bulk-deleted when closed RepliesTo *string `json:"replies_to,omitempty"` // Issue ID for conversation threading // Graph link fields (bd-fu83) RelatesTo *string `json:"relates_to,omitempty"` // JSON array of related issue IDs @@ -192,8 +193,8 @@ type ListArgs struct { // Parent filtering (bd-yqhh) ParentID string `json:"parent_id,omitempty"` - // Wisp filtering (bd-bkul) - Wisp *bool `json:"wisp,omitempty"` + // Ephemeral filtering (bd-bkul) + Ephemeral *bool `json:"ephemeral,omitempty"` } // CountArgs represents arguments for the count 
operation diff --git a/internal/rpc/server_issues_epics.go b/internal/rpc/server_issues_epics.go index 918d19ed..6998f4d6 100644 --- a/internal/rpc/server_issues_epics.go +++ b/internal/rpc/server_issues_epics.go @@ -81,8 +81,8 @@ func updatesFromArgs(a UpdateArgs) map[string]interface{} { if a.Sender != nil { u["sender"] = *a.Sender } - if a.Wisp != nil { - u["wisp"] = *a.Wisp + if a.Ephemeral != nil { + u["ephemeral"] = *a.Ephemeral } if a.RepliesTo != nil { u["replies_to"] = *a.RepliesTo @@ -176,11 +176,12 @@ func (s *Server) handleCreate(req *Request) Response { EstimatedMinutes: createArgs.EstimatedMinutes, Status: types.StatusOpen, // Messaging fields (bd-kwro) - Sender: createArgs.Sender, - Wisp: createArgs.Wisp, + Sender: createArgs.Sender, + Ephemeral: createArgs.Ephemeral, // NOTE: RepliesTo now handled via replies-to dependency (Decision 004) // ID generation (bd-hobo) - IDPrefix: createArgs.IDPrefix, + IDPrefix: createArgs.IDPrefix, + CreatedBy: createArgs.CreatedBy, } // Check if any dependencies are discovered-from type @@ -843,8 +844,8 @@ func (s *Server) handleList(req *Request) Response { filter.ParentID = &listArgs.ParentID } - // Wisp filtering (bd-bkul) - filter.Wisp = listArgs.Wisp + // Ephemeral filtering (bd-bkul) + filter.Ephemeral = listArgs.Ephemeral // Guard against excessive ID lists to avoid SQLite parameter limits const maxIDs = 1000 @@ -1221,12 +1222,16 @@ func (s *Server) handleShow(req *Request) Response { } } + // Fetch comments + comments, _ := store.GetIssueComments(ctx, issue.ID) + // Create detailed response with related data type IssueDetails struct { *types.Issue Labels []string `json:"labels,omitempty"` Dependencies []*types.IssueWithDependencyMetadata `json:"dependencies,omitempty"` Dependents []*types.IssueWithDependencyMetadata `json:"dependents,omitempty"` + Comments []*types.Comment `json:"comments,omitempty"` } details := &IssueDetails{ @@ -1234,6 +1239,7 @@ func (s *Server) handleShow(req *Request) Response { Labels: 
labels, Dependencies: deps, Dependents: dependents, + Comments: comments, } data, _ := json.Marshal(details) @@ -1474,7 +1480,7 @@ func (s *Server) handleGateCreate(req *Request) Response { Status: types.StatusOpen, Priority: 1, // Gates are typically high priority Assignee: "deacon/", - Wisp: true, // Gates are wisps (ephemeral) + Ephemeral: true, // Gates are wisps (ephemeral) AwaitType: args.AwaitType, AwaitID: args.AwaitID, Timeout: args.Timeout, diff --git a/internal/rpc/server_mutations_test.go b/internal/rpc/server_mutations_test.go index 83f5d2b4..61631389 100644 --- a/internal/rpc/server_mutations_test.go +++ b/internal/rpc/server_mutations_test.go @@ -1,6 +1,7 @@ package rpc import ( + "context" "encoding/json" "testing" "time" @@ -9,6 +10,49 @@ import ( "github.com/steveyegge/beads/internal/types" ) +// TestHandleCreate_SetsCreatedBy verifies that CreatedBy is passed through RPC and stored (GH#748) +func TestHandleCreate_SetsCreatedBy(t *testing.T) { + store := memory.New("/tmp/test.jsonl") + server := NewServer("/tmp/test.sock", store, "/tmp", "/tmp/test.db") + + createArgs := CreateArgs{ + Title: "Test CreatedBy Field", + IssueType: "task", + Priority: 2, + CreatedBy: "test-actor", + } + createJSON, _ := json.Marshal(createArgs) + createReq := &Request{ + Operation: OpCreate, + Args: createJSON, + Actor: "test-actor", + } + + resp := server.handleCreate(createReq) + if !resp.Success { + t.Fatalf("create failed: %s", resp.Error) + } + + var createdIssue types.Issue + if err := json.Unmarshal(resp.Data, &createdIssue); err != nil { + t.Fatalf("failed to parse response: %v", err) + } + + // Verify CreatedBy was set in the response + if createdIssue.CreatedBy != "test-actor" { + t.Errorf("expected CreatedBy 'test-actor' in response, got %q", createdIssue.CreatedBy) + } + + // Verify CreatedBy was persisted to storage + storedIssue, err := store.GetIssue(context.Background(), createdIssue.ID) + if err != nil { + t.Fatalf("failed to get issue from storage: 
%v", err) + } + if storedIssue.CreatedBy != "test-actor" { + t.Errorf("expected CreatedBy 'test-actor' in storage, got %q", storedIssue.CreatedBy) + } +} + func TestEmitMutation(t *testing.T) { store := memory.New("/tmp/test.jsonl") server := NewServer("/tmp/test.sock", store, "/tmp", "/tmp/test.db") diff --git a/internal/storage/memory/memory_more_coverage_test.go b/internal/storage/memory/memory_more_coverage_test.go new file mode 100644 index 00000000..82651647 --- /dev/null +++ b/internal/storage/memory/memory_more_coverage_test.go @@ -0,0 +1,921 @@ +package memory + +import ( + "context" + "testing" + "time" + + "github.com/steveyegge/beads/internal/storage" + "github.com/steveyegge/beads/internal/types" +) + +func TestMemoryStorage_LoadFromIssues_IndexesAndCounters(t *testing.T) { + store := New("/tmp/example.jsonl") + defer store.Close() + + extRef := "ext-1" + issues := []*types.Issue{ + nil, + { + ID: "bd-10", + Title: "Ten", + Status: types.StatusOpen, + Priority: 1, + IssueType: types.TypeTask, + ExternalRef: &extRef, + Dependencies: []*types.Dependency{{ + IssueID: "bd-10", + DependsOnID: "bd-2", + Type: types.DepBlocks, + }}, + Labels: []string{"l1"}, + Comments: []*types.Comment{{ID: 1, IssueID: "bd-10", Author: "a", Text: "c"}}, + }, + {ID: "bd-2", Title: "Two", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask}, + {ID: "bd-a3f8e9", Title: "Parent", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask}, + {ID: "bd-a3f8e9.3", Title: "Child", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask}, + } + + if err := store.LoadFromIssues(issues); err != nil { + t.Fatalf("LoadFromIssues: %v", err) + } + + ctx := context.Background() + + got, err := store.GetIssueByExternalRef(ctx, "ext-1") + if err != nil { + t.Fatalf("GetIssueByExternalRef: %v", err) + } + if got == nil || got.ID != "bd-10" { + t.Fatalf("GetIssueByExternalRef got=%v", got) + } + if len(got.Dependencies) != 1 || got.Dependencies[0].DependsOnID != 
"bd-2" { + t.Fatalf("expected deps attached") + } + if len(got.Labels) != 1 || got.Labels[0] != "l1" { + t.Fatalf("expected labels attached") + } + + // Exercise CreateIssue ID generation based on the loaded counter (bd-10 => next should be bd-11). + if err := store.SetConfig(ctx, "issue_prefix", "bd"); err != nil { + t.Fatalf("SetConfig: %v", err) + } + newIssue := &types.Issue{Title: "New", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, newIssue, "actor"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + if newIssue.ID != "bd-11" { + t.Fatalf("expected generated id bd-11, got %q", newIssue.ID) + } + + // Hierarchical counter for parent extracted from bd-a3f8e9.3. + childID, err := store.GetNextChildID(ctx, "bd-a3f8e9") + if err != nil { + t.Fatalf("GetNextChildID: %v", err) + } + if childID != "bd-a3f8e9.4" { + t.Fatalf("expected bd-a3f8e9.4, got %q", childID) + } +} + +func TestMemoryStorage_GetAllIssues_SortsAndCopies(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + // Create out-of-order IDs. + a := &types.Issue{ID: "bd-2", Title: "A", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + b := &types.Issue{ID: "bd-1", Title: "B", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, a, "actor"); err != nil { + t.Fatalf("CreateIssue a: %v", err) + } + if err := store.CreateIssue(ctx, b, "actor"); err != nil { + t.Fatalf("CreateIssue b: %v", err) + } + + if err := store.AddLabel(ctx, a.ID, "l1", "actor"); err != nil { + t.Fatalf("AddLabel: %v", err) + } + + all := store.GetAllIssues() + if len(all) != 2 { + t.Fatalf("expected 2 issues, got %d", len(all)) + } + if all[0].ID != "bd-1" || all[1].ID != "bd-2" { + t.Fatalf("expected sorted by ID, got %q then %q", all[0].ID, all[1].ID) + } + + // Returned issues must be copies (mutating should not affect stored issue struct). 
+ all[1].Title = "mutated" + got, err := store.GetIssue(ctx, "bd-2") + if err != nil { + t.Fatalf("GetIssue: %v", err) + } + if got.Title != "A" { + t.Fatalf("expected stored title unchanged, got %q", got.Title) + } +} + +func TestMemoryStorage_CreateIssues_DefaultPrefix_DuplicateExisting_ExternalRef(t *testing.T) { + store := New("") + defer store.Close() + ctx := context.Background() + + // Default prefix should be "bd" when unset. + issues := []*types.Issue{{Title: "A", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask}} + if err := store.CreateIssues(ctx, issues, "actor"); err != nil { + t.Fatalf("CreateIssues: %v", err) + } + if issues[0].ID != "bd-1" { + t.Fatalf("expected bd-1, got %q", issues[0].ID) + } + + ext := "ext" + batch := []*types.Issue{{ID: "bd-x", Title: "B", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask, ExternalRef: &ext}} + if err := store.CreateIssues(ctx, batch, "actor"); err != nil { + t.Fatalf("CreateIssues: %v", err) + } + if got, _ := store.GetIssueByExternalRef(ctx, "ext"); got == nil || got.ID != "bd-x" { + t.Fatalf("expected external ref indexed") + } + + // Duplicate existing issue ID branch. 
+ dup := []*types.Issue{{ID: "bd-x", Title: "Dup", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask}} + if err := store.CreateIssues(ctx, dup, "actor"); err == nil { + t.Fatalf("expected duplicate existing issue error") + } +} + +func TestMemoryStorage_GetIssueByExternalRef_IndexPointsToMissingIssue(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + store.mu.Lock() + store.externalRefToID["dangling"] = "bd-nope" + store.mu.Unlock() + + got, err := store.GetIssueByExternalRef(ctx, "dangling") + if err != nil { + t.Fatalf("GetIssueByExternalRef: %v", err) + } + if got != nil { + t.Fatalf("expected nil for dangling ref") + } +} + +func TestMemoryStorage_DependencyCounts_Records_Tree_Cycles(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + a := &types.Issue{ID: "bd-1", Title: "A", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + b := &types.Issue{ID: "bd-2", Title: "B", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + c := &types.Issue{ID: "bd-3", Title: "C", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + d := &types.Issue{ID: "bd-4", Title: "D", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + for _, iss := range []*types.Issue{a, b, c, d} { + if err := store.CreateIssue(ctx, iss, "actor"); err != nil { + t.Fatalf("CreateIssue %s: %v", iss.ID, err) + } + } + + if err := store.AddDependency(ctx, &types.Dependency{IssueID: a.ID, DependsOnID: b.ID, Type: types.DepBlocks}, "actor"); err != nil { + t.Fatalf("AddDependency: %v", err) + } + if err := store.AddDependency(ctx, &types.Dependency{IssueID: a.ID, DependsOnID: c.ID, Type: types.DepBlocks}, "actor"); err != nil { + t.Fatalf("AddDependency: %v", err) + } + if err := store.AddDependency(ctx, &types.Dependency{IssueID: d.ID, DependsOnID: b.ID, Type: types.DepBlocks}, "actor"); err != nil { + t.Fatalf("AddDependency: %v", err) 
+ } + + counts, err := store.GetDependencyCounts(ctx, []string{a.ID, b.ID, "bd-missing"}) + if err != nil { + t.Fatalf("GetDependencyCounts: %v", err) + } + if counts[a.ID].DependencyCount != 2 || counts[a.ID].DependentCount != 0 { + t.Fatalf("unexpected counts for A: %+v", counts[a.ID]) + } + if counts[b.ID].DependencyCount != 0 || counts[b.ID].DependentCount != 2 { + t.Fatalf("unexpected counts for B: %+v", counts[b.ID]) + } + if counts["bd-missing"].DependencyCount != 0 || counts["bd-missing"].DependentCount != 0 { + t.Fatalf("unexpected counts for missing: %+v", counts["bd-missing"]) + } + + deps, err := store.GetDependencyRecords(ctx, a.ID) + if err != nil { + t.Fatalf("GetDependencyRecords: %v", err) + } + if len(deps) != 2 { + t.Fatalf("expected 2 deps, got %d", len(deps)) + } + + allDeps, err := store.GetAllDependencyRecords(ctx) + if err != nil { + t.Fatalf("GetAllDependencyRecords: %v", err) + } + if len(allDeps[a.ID]) != 2 { + t.Fatalf("expected all deps for A") + } + + nodes, err := store.GetDependencyTree(ctx, a.ID, 3, false, false) + if err != nil { + t.Fatalf("GetDependencyTree: %v", err) + } + if len(nodes) != 2 || nodes[0].Depth != 1 { + t.Fatalf("unexpected tree: %+v", nodes) + } + + cycles, err := store.DetectCycles(ctx) + if err != nil { + t.Fatalf("DetectCycles: %v", err) + } + if cycles != nil { + t.Fatalf("expected nil cycles, got %+v", cycles) + } +} + +func TestMemoryStorage_HashTracking_NoOps(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + if hash, err := store.GetDirtyIssueHash(ctx, "bd-1"); err != nil || hash != "" { + t.Fatalf("GetDirtyIssueHash: hash=%q err=%v", hash, err) + } + if hash, err := store.GetExportHash(ctx, "bd-1"); err != nil || hash != "" { + t.Fatalf("GetExportHash: hash=%q err=%v", hash, err) + } + if err := store.SetExportHash(ctx, "bd-1", "h"); err != nil { + t.Fatalf("SetExportHash: %v", err) + } + if err := store.ClearAllExportHashes(ctx); err != nil { + 
t.Fatalf("ClearAllExportHashes: %v", err) + } + if hash, err := store.GetJSONLFileHash(ctx); err != nil || hash != "" { + t.Fatalf("GetJSONLFileHash: hash=%q err=%v", hash, err) + } + if err := store.SetJSONLFileHash(ctx, "h"); err != nil { + t.Fatalf("SetJSONLFileHash: %v", err) + } +} + +func TestMemoryStorage_LabelsAndCommentsHelpers(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + a := &types.Issue{ID: "bd-1", Title: "A", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + b := &types.Issue{ID: "bd-2", Title: "B", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, a, "actor"); err != nil { + t.Fatalf("CreateIssue a: %v", err) + } + if err := store.CreateIssue(ctx, b, "actor"); err != nil { + t.Fatalf("CreateIssue b: %v", err) + } + + if err := store.AddLabel(ctx, a.ID, "l1", "actor"); err != nil { + t.Fatalf("AddLabel: %v", err) + } + if err := store.AddLabel(ctx, b.ID, "l2", "actor"); err != nil { + t.Fatalf("AddLabel: %v", err) + } + + labels, err := store.GetLabelsForIssues(ctx, []string{a.ID, b.ID, "bd-missing"}) + if err != nil { + t.Fatalf("GetLabelsForIssues: %v", err) + } + if len(labels) != 2 { + t.Fatalf("expected 2 entries, got %d", len(labels)) + } + if labels[a.ID][0] != "l1" { + t.Fatalf("unexpected labels for A: %+v", labels[a.ID]) + } + + issues, err := store.GetIssuesByLabel(ctx, "l1") + if err != nil { + t.Fatalf("GetIssuesByLabel: %v", err) + } + if len(issues) != 1 || issues[0].ID != a.ID { + t.Fatalf("unexpected issues: %+v", issues) + } + + if _, err := store.AddIssueComment(ctx, a.ID, "author", "text"); err != nil { + t.Fatalf("AddIssueComment: %v", err) + } + comments, err := store.GetCommentsForIssues(ctx, []string{a.ID, b.ID}) + if err != nil { + t.Fatalf("GetCommentsForIssues: %v", err) + } + if len(comments[a.ID]) != 1 { + t.Fatalf("expected comments for A") + } +} + +func 
TestMemoryStorage_StaleEventsCustomStatusAndLifecycleHelpers(t *testing.T) { + store := New("/tmp/x.jsonl") + defer store.Close() + ctx := context.Background() + + if store.Path() != "/tmp/x.jsonl" { + t.Fatalf("Path mismatch") + } + if store.UnderlyingDB() != nil { + t.Fatalf("expected nil UnderlyingDB") + } + if _, err := store.UnderlyingConn(ctx); err == nil { + t.Fatalf("expected UnderlyingConn error") + } + if err := store.RunInTransaction(ctx, func(tx storage.Transaction) error { return nil }); err == nil { + t.Fatalf("expected RunInTransaction error") + } + + if err := store.SetConfig(ctx, "issue_prefix", "bd"); err != nil { + t.Fatalf("SetConfig: %v", err) + } + a := &types.Issue{ID: "bd-1", Title: "A", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, a, "actor"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + + // Force updated_at into the past for stale detection. + store.mu.Lock() + a.UpdatedAt = time.Now().Add(-10 * 24 * time.Hour) + store.mu.Unlock() + + stale, err := store.GetStaleIssues(ctx, types.StaleFilter{Days: 7, Limit: 10}) + if err != nil { + t.Fatalf("GetStaleIssues: %v", err) + } + if len(stale) != 1 || stale[0].ID != a.ID { + t.Fatalf("unexpected stale: %+v", stale) + } + + if err := store.AddComment(ctx, a.ID, "actor", "c"); err != nil { + t.Fatalf("AddComment: %v", err) + } + if err := store.MarkIssueDirty(ctx, a.ID); err != nil { + t.Fatalf("MarkIssueDirty: %v", err) + } + + // Generate multiple events and ensure limiting returns the last N. 
+ if err := store.UpdateIssue(ctx, a.ID, map[string]interface{}{"title": "t1"}, "actor"); err != nil { + t.Fatalf("UpdateIssue: %v", err) + } + if err := store.UpdateIssue(ctx, a.ID, map[string]interface{}{"title": "t2"}, "actor"); err != nil { + t.Fatalf("UpdateIssue: %v", err) + } + evs, err := store.GetEvents(ctx, a.ID, 2) + if err != nil { + t.Fatalf("GetEvents: %v", err) + } + if len(evs) != 2 { + t.Fatalf("expected 2 events, got %d", len(evs)) + } + + if err := store.SetConfig(ctx, "status.custom", " triage, blocked , ,done "); err != nil { + t.Fatalf("SetConfig: %v", err) + } + statuses, err := store.GetCustomStatuses(ctx) + if err != nil { + t.Fatalf("GetCustomStatuses: %v", err) + } + if len(statuses) != 3 || statuses[0] != "triage" || statuses[1] != "blocked" || statuses[2] != "done" { + t.Fatalf("unexpected statuses: %+v", statuses) + } + if got := parseCustomStatuses(""); got != nil { + t.Fatalf("expected nil for empty parseCustomStatuses") + } + + // Empty custom statuses. + if err := store.DeleteConfig(ctx, "status.custom"); err != nil { + t.Fatalf("DeleteConfig: %v", err) + } + statuses, err = store.GetCustomStatuses(ctx) + if err != nil { + t.Fatalf("GetCustomStatuses(empty): %v", err) + } + if statuses != nil { + t.Fatalf("expected nil statuses when unset, got %+v", statuses) + } + + if _, err := store.GetEpicsEligibleForClosure(ctx); err != nil { + t.Fatalf("GetEpicsEligibleForClosure: %v", err) + } + + if err := store.UpdateIssueID(ctx, "old", "new", nil, "actor"); err == nil { + t.Fatalf("expected UpdateIssueID error") + } + if err := store.RenameDependencyPrefix(ctx, "old", "new"); err != nil { + t.Fatalf("RenameDependencyPrefix: %v", err) + } + if err := store.RenameCounterPrefix(ctx, "old", "new"); err != nil { + t.Fatalf("RenameCounterPrefix: %v", err) + } +} + +func TestMemoryStorage_AddLabelAndAddDependency_ErrorPaths(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + issue := 
&types.Issue{ID: "bd-1", Title: "A", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, issue, "actor"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + + if err := store.AddLabel(ctx, "bd-missing", "l", "actor"); err == nil { + t.Fatalf("expected AddLabel error for missing issue") + } + if err := store.AddLabel(ctx, issue.ID, "l", "actor"); err != nil { + t.Fatalf("AddLabel: %v", err) + } + // Duplicate label is a no-op. + if err := store.AddLabel(ctx, issue.ID, "l", "actor"); err != nil { + t.Fatalf("AddLabel duplicate: %v", err) + } + + // AddDependency error paths. + if err := store.AddDependency(ctx, &types.Dependency{IssueID: "bd-missing", DependsOnID: issue.ID, Type: types.DepBlocks}, "actor"); err == nil { + t.Fatalf("expected AddDependency error for missing IssueID") + } + if err := store.AddDependency(ctx, &types.Dependency{IssueID: issue.ID, DependsOnID: "bd-missing", Type: types.DepBlocks}, "actor"); err == nil { + t.Fatalf("expected AddDependency error for missing DependsOnID") + } +} + +func TestMemoryStorage_GetNextChildID_Errors(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + if _, err := store.GetNextChildID(ctx, "bd-missing"); err == nil { + t.Fatalf("expected error for missing parent") + } + + deep := &types.Issue{ID: "bd-1.1.1.1", Title: "Deep", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, deep, "actor"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + if _, err := store.GetNextChildID(ctx, deep.ID); err == nil { + t.Fatalf("expected max depth error") + } +} + +func TestMemoryStorage_GetAllIssues_AttachesDependenciesAndComments(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + a := &types.Issue{ID: "bd-1", Title: "A", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + b := &types.Issue{ID: "bd-2", Title: "B", 
Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, a, "actor"); err != nil { + t.Fatalf("CreateIssue a: %v", err) + } + if err := store.CreateIssue(ctx, b, "actor"); err != nil { + t.Fatalf("CreateIssue b: %v", err) + } + if err := store.AddDependency(ctx, &types.Dependency{IssueID: a.ID, DependsOnID: b.ID, Type: types.DepBlocks}, "actor"); err != nil { + t.Fatalf("AddDependency: %v", err) + } + if _, err := store.AddIssueComment(ctx, a.ID, "author", "text"); err != nil { + t.Fatalf("AddIssueComment: %v", err) + } + + all := store.GetAllIssues() + var gotA *types.Issue + for _, iss := range all { + if iss.ID == a.ID { + gotA = iss + break + } + } + if gotA == nil { + t.Fatalf("expected to find issue A") + } + if len(gotA.Dependencies) != 1 || gotA.Dependencies[0].DependsOnID != b.ID { + t.Fatalf("expected deps attached") + } + if len(gotA.Comments) != 1 || gotA.Comments[0].Text != "text" { + t.Fatalf("expected comments attached") + } +} + +func TestMemoryStorage_GetStaleIssues_FilteringAndLimit(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + old := &types.Issue{ID: "bd-1", Title: "Old", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + newer := &types.Issue{ID: "bd-2", Title: "Newer", Status: types.StatusInProgress, Priority: 1, IssueType: types.TypeTask} + closed := &types.Issue{ID: "bd-3", Title: "Closed", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + for _, iss := range []*types.Issue{old, newer, closed} { + if err := store.CreateIssue(ctx, iss, "actor"); err != nil { + t.Fatalf("CreateIssue %s: %v", iss.ID, err) + } + } + if err := store.CloseIssue(ctx, closed.ID, "done", "actor"); err != nil { + t.Fatalf("CloseIssue: %v", err) + } + + store.mu.Lock() + store.issues[old.ID].UpdatedAt = time.Now().Add(-20 * 24 * time.Hour) + store.issues[newer.ID].UpdatedAt = time.Now().Add(-10 * 24 * time.Hour) + 
store.issues[closed.ID].UpdatedAt = time.Now().Add(-30 * 24 * time.Hour) + store.mu.Unlock() + + stale, err := store.GetStaleIssues(ctx, types.StaleFilter{Days: 7, Status: "in_progress"}) + if err != nil { + t.Fatalf("GetStaleIssues: %v", err) + } + if len(stale) != 1 || stale[0].ID != newer.ID { + t.Fatalf("unexpected stale filtered: %+v", stale) + } + + stale, err = store.GetStaleIssues(ctx, types.StaleFilter{Days: 7, Limit: 1}) + if err != nil { + t.Fatalf("GetStaleIssues: %v", err) + } + if len(stale) != 1 || stale[0].ID != old.ID { + t.Fatalf("expected oldest stale first, got %+v", stale) + } +} + +func TestMemoryStorage_Statistics_EpicsEligibleForClosure_Counting(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + ep := &types.Issue{ID: "bd-1", Title: "Epic", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeEpic} + c1 := &types.Issue{ID: "bd-2", Title: "Child1", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + c2 := &types.Issue{ID: "bd-3", Title: "Child2", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + for _, iss := range []*types.Issue{ep, c1, c2} { + if err := store.CreateIssue(ctx, iss, "actor"); err != nil { + t.Fatalf("CreateIssue %s: %v", iss.ID, err) + } + } + if err := store.CloseIssue(ctx, c1.ID, "done", "actor"); err != nil { + t.Fatalf("CloseIssue c1: %v", err) + } + if err := store.CloseIssue(ctx, c2.ID, "done", "actor"); err != nil { + t.Fatalf("CloseIssue c2: %v", err) + } + // Parent-child deps: child -> epic. 
+ if err := store.AddDependency(ctx, &types.Dependency{IssueID: c1.ID, DependsOnID: ep.ID, Type: types.DepParentChild}, "actor"); err != nil { + t.Fatalf("AddDependency: %v", err) + } + if err := store.AddDependency(ctx, &types.Dependency{IssueID: c2.ID, DependsOnID: ep.ID, Type: types.DepParentChild}, "actor"); err != nil { + t.Fatalf("AddDependency: %v", err) + } + + stats, err := store.GetStatistics(ctx) + if err != nil { + t.Fatalf("GetStatistics: %v", err) + } + if stats.EpicsEligibleForClosure != 1 { + t.Fatalf("expected 1 epic eligible, got %d", stats.EpicsEligibleForClosure) + } +} + +func TestMemoryStorage_UpdateIssue_SearchIssues_ReadyWork_BlockedIssues(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + now := time.Now() + assignee := "alice" + + parent := &types.Issue{ID: "bd-1", Title: "Parent", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeEpic} + child := &types.Issue{ID: "bd-2", Title: "Child", Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask, Assignee: assignee} + blocker := &types.Issue{ID: "bd-3", Title: "Blocker", Status: types.StatusOpen, Priority: 3, IssueType: types.TypeTask} + pinned := &types.Issue{ID: "bd-4", Title: "Pinned", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask, Pinned: true} + workflow := &types.Issue{ID: "bd-5", Title: "Workflow", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeMergeRequest} + for _, iss := range []*types.Issue{parent, child, blocker, pinned, workflow} { + if err := store.CreateIssue(ctx, iss, "actor"); err != nil { + t.Fatalf("CreateIssue %s: %v", iss.ID, err) + } + } + + // Make created_at deterministic for sorting. 
+ store.mu.Lock() + store.issues[parent.ID].CreatedAt = now.Add(-100 * time.Hour) + store.issues[child.ID].CreatedAt = now.Add(-1 * time.Hour) + store.issues[blocker.ID].CreatedAt = now.Add(-2 * time.Hour) + store.issues[pinned.ID].CreatedAt = now.Add(-3 * time.Hour) + store.issues[workflow.ID].CreatedAt = now.Add(-4 * time.Hour) + store.mu.Unlock() + + // Dependencies: child is a child of parent; child is blocked by blocker. + if err := store.AddDependency(ctx, &types.Dependency{IssueID: child.ID, DependsOnID: parent.ID, Type: types.DepParentChild}, "actor"); err != nil { + t.Fatalf("AddDependency parent-child: %v", err) + } + if err := store.AddDependency(ctx, &types.Dependency{IssueID: child.ID, DependsOnID: blocker.ID, Type: types.DepBlocks}, "actor"); err != nil { + t.Fatalf("AddDependency blocks: %v", err) + } + + // AddDependency duplicate error path. + if err := store.AddDependency(ctx, &types.Dependency{IssueID: child.ID, DependsOnID: blocker.ID, Type: types.DepBlocks}, "actor"); err == nil { + t.Fatalf("expected duplicate dependency error") + } + + // UpdateIssue: exercise assignee nil, external_ref update+clear, and closed_at behavior. 
+ ext := "old-ext" + store.mu.Lock() + store.issues[child.ID].ExternalRef = &ext + store.externalRefToID[ext] = child.ID + store.mu.Unlock() + + if err := store.UpdateIssue(ctx, child.ID, map[string]interface{}{"assignee": nil, "external_ref": "new-ext"}, "actor"); err != nil { + t.Fatalf("UpdateIssue: %v", err) + } + if got, _ := store.GetIssueByExternalRef(ctx, "old-ext"); got != nil { + t.Fatalf("expected old-ext removed") + } + if got, _ := store.GetIssueByExternalRef(ctx, "new-ext"); got == nil || got.ID != child.ID { + t.Fatalf("expected new-ext mapping") + } + + if err := store.UpdateIssue(ctx, child.ID, map[string]interface{}{"status": string(types.StatusClosed)}, "actor"); err != nil { + t.Fatalf("UpdateIssue close: %v", err) + } + closed, _ := store.GetIssue(ctx, child.ID) + if closed.ClosedAt == nil { + t.Fatalf("expected ClosedAt set") + } + if err := store.UpdateIssue(ctx, child.ID, map[string]interface{}{"status": string(types.StatusOpen), "external_ref": nil}, "actor"); err != nil { + t.Fatalf("UpdateIssue reopen: %v", err) + } + reopened, _ := store.GetIssue(ctx, child.ID) + if reopened.ClosedAt != nil { + t.Fatalf("expected ClosedAt cleared") + } + if got, _ := store.GetIssueByExternalRef(ctx, "new-ext"); got != nil { + t.Fatalf("expected new-ext cleared") + } + + // SearchIssues: query, label AND/OR, IDs filter, ParentID filter, limit. 
+ if err := store.AddLabel(ctx, parent.ID, "l1", "actor"); err != nil { + t.Fatalf("AddLabel: %v", err) + } + if err := store.AddLabel(ctx, child.ID, "l1", "actor"); err != nil { + t.Fatalf("AddLabel: %v", err) + } + if err := store.AddLabel(ctx, child.ID, "l2", "actor"); err != nil { + t.Fatalf("AddLabel: %v", err) + } + + st := types.StatusOpen + res, err := store.SearchIssues(ctx, "parent", types.IssueFilter{Status: &st}) + if err != nil { + t.Fatalf("SearchIssues: %v", err) + } + if len(res) != 1 || res[0].ID != parent.ID { + t.Fatalf("unexpected SearchIssues results: %+v", res) + } + + res, err = store.SearchIssues(ctx, "", types.IssueFilter{Labels: []string{"l1", "l2"}}) + if err != nil { + t.Fatalf("SearchIssues labels AND: %v", err) + } + if len(res) != 1 || res[0].ID != child.ID { + t.Fatalf("unexpected labels AND results: %+v", res) + } + + res, err = store.SearchIssues(ctx, "", types.IssueFilter{IDs: []string{child.ID}}) + if err != nil { + t.Fatalf("SearchIssues IDs: %v", err) + } + if len(res) != 1 || res[0].ID != child.ID { + t.Fatalf("unexpected IDs results: %+v", res) + } + + res, err = store.SearchIssues(ctx, "", types.IssueFilter{ParentID: &parent.ID}) + if err != nil { + t.Fatalf("SearchIssues ParentID: %v", err) + } + if len(res) != 1 || res[0].ID != child.ID { + t.Fatalf("unexpected ParentID results: %+v", res) + } + + res, err = store.SearchIssues(ctx, "", types.IssueFilter{LabelsAny: []string{"l2", "missing"}, Limit: 1}) + if err != nil { + t.Fatalf("SearchIssues labels OR: %v", err) + } + if len(res) != 1 { + t.Fatalf("expected limit 1") + } + + // Ready work: child is blocked, pinned excluded, workflow excluded by default. + ready, err := store.GetReadyWork(ctx, types.WorkFilter{}) + if err != nil { + t.Fatalf("GetReadyWork: %v", err) + } + if len(ready) != 2 { // parent + blocker + t.Fatalf("expected 2 ready issues, got %d: %+v", len(ready), ready) + } + + // Filter by workflow type explicitly. 
+ ready, err = store.GetReadyWork(ctx, types.WorkFilter{Type: string(types.TypeMergeRequest)}) + if err != nil { + t.Fatalf("GetReadyWork type: %v", err) + } + if len(ready) != 1 || ready[0].ID != workflow.ID { + t.Fatalf("expected only workflow issue, got %+v", ready) + } + + // Status + priority filters. + prio := 3 + ready, err = store.GetReadyWork(ctx, types.WorkFilter{Status: types.StatusOpen, Priority: &prio}) + if err != nil { + t.Fatalf("GetReadyWork status+priority: %v", err) + } + if len(ready) != 1 || ready[0].ID != blocker.ID { + t.Fatalf("expected blocker only, got %+v", ready) + } + + // Label filters. + ready, err = store.GetReadyWork(ctx, types.WorkFilter{Labels: []string{"l1"}}) + if err != nil { + t.Fatalf("GetReadyWork labels AND: %v", err) + } + if len(ready) != 1 || ready[0].ID != parent.ID { + t.Fatalf("expected parent only, got %+v", ready) + } + ready, err = store.GetReadyWork(ctx, types.WorkFilter{LabelsAny: []string{"l2"}}) + if err != nil { + t.Fatalf("GetReadyWork labels OR: %v", err) + } + if len(ready) != 0 { + t.Fatalf("expected 0 because only l2 issue is blocked") + } + + // Assignee filter vs Unassigned precedence. + ready, err = store.GetReadyWork(ctx, types.WorkFilter{Assignee: &assignee}) + if err != nil { + t.Fatalf("GetReadyWork assignee: %v", err) + } + if len(ready) != 0 { + t.Fatalf("expected 0 due to child being blocked") + } + ready, err = store.GetReadyWork(ctx, types.WorkFilter{Unassigned: true}) + if err != nil { + t.Fatalf("GetReadyWork unassigned: %v", err) + } + for _, iss := range ready { + if iss.Assignee != "" { + t.Fatalf("expected unassigned only") + } + } + + // Sort policies + limit. 
+ ready, err = store.GetReadyWork(ctx, types.WorkFilter{SortPolicy: types.SortPolicyOldest, Limit: 1}) + if err != nil { + t.Fatalf("GetReadyWork oldest: %v", err) + } + if len(ready) != 1 || ready[0].ID != parent.ID { + t.Fatalf("expected oldest=parent, got %+v", ready) + } + ready, err = store.GetReadyWork(ctx, types.WorkFilter{SortPolicy: types.SortPolicyPriority}) + if err != nil { + t.Fatalf("GetReadyWork priority: %v", err) + } + if len(ready) < 2 || ready[0].Priority > ready[1].Priority { + t.Fatalf("expected priority sort") + } + // Hybrid: recent issues first. + ready, err = store.GetReadyWork(ctx, types.WorkFilter{SortPolicy: types.SortPolicyHybrid}) + if err != nil { + t.Fatalf("GetReadyWork hybrid: %v", err) + } + if len(ready) != 2 || ready[0].ID != blocker.ID { + t.Fatalf("expected recent (blocker) first in hybrid, got %+v", ready) + } + + // Blocked issues: child is blocked by an open blocker. + blocked, err := store.GetBlockedIssues(ctx, types.WorkFilter{}) + if err != nil { + t.Fatalf("GetBlockedIssues: %v", err) + } + if len(blocked) != 1 || blocked[0].ID != child.ID || blocked[0].BlockedByCount != 1 { + t.Fatalf("unexpected blocked issues: %+v", blocked) + } + + // Cover getOpenBlockers missing-blocker branch. + missing := &types.Issue{ID: "bd-6", Title: "Missing blocker dep", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, missing, "actor"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + // Bypass AddDependency validation to cover the missing-blocker branch in getOpenBlockers. 
+ store.mu.Lock() + store.dependencies[missing.ID] = append(store.dependencies[missing.ID], &types.Dependency{IssueID: missing.ID, DependsOnID: "bd-does-not-exist", Type: types.DepBlocks}) + store.mu.Unlock() + blocked, err = store.GetBlockedIssues(ctx, types.WorkFilter{}) + if err != nil { + t.Fatalf("GetBlockedIssues: %v", err) + } + if len(blocked) != 2 { + t.Fatalf("expected 2 blocked issues, got %d", len(blocked)) + } +} + +func TestMemoryStorage_UpdateIssue_CoversMoreFields(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + iss := &types.Issue{ID: "bd-1", Title: "A", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + if err := store.CreateIssue(ctx, iss, "actor"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + + if err := store.UpdateIssue(ctx, iss.ID, map[string]interface{}{ + "description": "d", + "design": "design", + "acceptance_criteria": "ac", + "notes": "n", + "priority": 2, + "issue_type": string(types.TypeBug), + "assignee": "bob", + "status": string(types.StatusInProgress), + }, "actor"); err != nil { + t.Fatalf("UpdateIssue: %v", err) + } + + got, _ := store.GetIssue(ctx, iss.ID) + if got.Description != "d" || got.Design != "design" || got.AcceptanceCriteria != "ac" || got.Notes != "n" { + t.Fatalf("expected text fields updated") + } + if got.Priority != 2 || got.IssueType != types.TypeBug || got.Assignee != "bob" || got.Status != types.StatusInProgress { + t.Fatalf("expected fields updated") + } + + // Status closed when already closed should not clear ClosedAt. 
+ if err := store.CloseIssue(ctx, iss.ID, "done", "actor"); err != nil { + t.Fatalf("CloseIssue: %v", err) + } + closedOnce, _ := store.GetIssue(ctx, iss.ID) + if closedOnce.ClosedAt == nil { + t.Fatalf("expected ClosedAt") + } + if err := store.UpdateIssue(ctx, iss.ID, map[string]interface{}{"status": string(types.StatusClosed)}, "actor"); err != nil { + t.Fatalf("UpdateIssue closed->closed: %v", err) + } + closedTwice, _ := store.GetIssue(ctx, iss.ID) + if closedTwice.ClosedAt == nil { + t.Fatalf("expected ClosedAt preserved") + } +} + +func TestMemoryStorage_CountEpicsEligibleForClosure_CoversBranches(t *testing.T) { + store := setupTestMemory(t) + defer store.Close() + ctx := context.Background() + + ep1 := &types.Issue{ID: "bd-1", Title: "Epic1", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeEpic} + epClosed := &types.Issue{ID: "bd-2", Title: "EpicClosed", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeEpic} + nonEpic := &types.Issue{ID: "bd-3", Title: "NotEpic", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + c := &types.Issue{ID: "bd-4", Title: "Child", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask} + for _, iss := range []*types.Issue{ep1, epClosed, nonEpic, c} { + if err := store.CreateIssue(ctx, iss, "actor"); err != nil { + t.Fatalf("CreateIssue %s: %v", iss.ID, err) + } + } + if err := store.CloseIssue(ctx, epClosed.ID, "done", "actor"); err != nil { + t.Fatalf("CloseIssue: %v", err) + } + // Child -> ep1 (eligible once child is closed). + if err := store.AddDependency(ctx, &types.Dependency{IssueID: c.ID, DependsOnID: ep1.ID, Type: types.DepParentChild}, "actor"); err != nil { + t.Fatalf("AddDependency: %v", err) + } + // Child -> nonEpic should not count. + if err := store.AddDependency(ctx, &types.Dependency{IssueID: c.ID, DependsOnID: nonEpic.ID, Type: types.DepParentChild}, "actor"); err != nil { + t.Fatalf("AddDependency: %v", err) + } + // Child -> missing epic should not count. 
+ store.mu.Lock() + store.dependencies[c.ID] = append(store.dependencies[c.ID], &types.Dependency{IssueID: c.ID, DependsOnID: "bd-missing", Type: types.DepParentChild}) + store.mu.Unlock() + + // Close child to make ep1 eligible. + if err := store.CloseIssue(ctx, c.ID, "done", "actor"); err != nil { + t.Fatalf("CloseIssue child: %v", err) + } + + stats, err := store.GetStatistics(ctx) + if err != nil { + t.Fatalf("GetStatistics: %v", err) + } + if stats.EpicsEligibleForClosure != 1 { + t.Fatalf("expected 1 eligible epic, got %d", stats.EpicsEligibleForClosure) + } +} + +func TestExtractParentAndChildNumber_CoversFailures(t *testing.T) { + if _, _, ok := extractParentAndChildNumber("no-dot"); ok { + t.Fatalf("expected ok=false") + } + if _, _, ok := extractParentAndChildNumber("parent.bad"); ok { + t.Fatalf("expected ok=false") + } +} diff --git a/internal/storage/sqlite/dependencies.go b/internal/storage/sqlite/dependencies.go index 282da900..11837cc9 100644 --- a/internal/storage/sqlite/dependencies.go +++ b/internal/storage/sqlite/dependencies.go @@ -885,7 +885,7 @@ func (s *SQLiteStorage) scanIssues(ctx context.Context, rows *sql.Rows) ([]*type issue.Sender = sender.String } if wisp.Valid && wisp.Int64 != 0 { - issue.Wisp = true + issue.Ephemeral = true } // Pinned field (bd-7h5) if pinned.Valid && pinned.Int64 != 0 { @@ -1006,7 +1006,7 @@ func (s *SQLiteStorage) scanIssuesWithDependencyType(ctx context.Context, rows * issue.Sender = sender.String } if wisp.Valid && wisp.Int64 != 0 { - issue.Wisp = true + issue.Ephemeral = true } // Pinned field (bd-7h5) if pinned.Valid && pinned.Int64 != 0 { diff --git a/internal/storage/sqlite/graph_links_test.go b/internal/storage/sqlite/graph_links_test.go index 72f85908..b81a58c1 100644 --- a/internal/storage/sqlite/graph_links_test.go +++ b/internal/storage/sqlite/graph_links_test.go @@ -295,7 +295,7 @@ func TestRepliesTo(t *testing.T) { IssueType: types.TypeMessage, Sender: "alice", Assignee: "bob", - Wisp: true, + 
Ephemeral: true, CreatedAt: time.Now(), UpdatedAt: time.Now(), } @@ -307,7 +307,7 @@ func TestRepliesTo(t *testing.T) { IssueType: types.TypeMessage, Sender: "bob", Assignee: "alice", - Wisp: true, + Ephemeral: true, CreatedAt: time.Now(), UpdatedAt: time.Now(), } @@ -363,7 +363,7 @@ func TestRepliesTo_Chain(t *testing.T) { IssueType: types.TypeMessage, Sender: "user", Assignee: "inbox", - Wisp: true, + Ephemeral: true, CreatedAt: time.Now(), UpdatedAt: time.Now(), } @@ -415,7 +415,7 @@ func TestWispField(t *testing.T) { Status: types.StatusOpen, Priority: 2, IssueType: types.TypeMessage, - Wisp: true, + Ephemeral: true, CreatedAt: time.Now(), UpdatedAt: time.Now(), } @@ -426,7 +426,7 @@ func TestWispField(t *testing.T) { Status: types.StatusOpen, Priority: 2, IssueType: types.TypeTask, - Wisp: false, + Ephemeral: false, CreatedAt: time.Now(), UpdatedAt: time.Now(), } @@ -443,7 +443,7 @@ func TestWispField(t *testing.T) { if err != nil { t.Fatalf("GetIssue failed: %v", err) } - if !savedWisp.Wisp { + if !savedWisp.Ephemeral { t.Error("Wisp issue should have Wisp=true") } @@ -451,7 +451,7 @@ func TestWispField(t *testing.T) { if err != nil { t.Fatalf("GetIssue failed: %v", err) } - if savedPermanent.Wisp { + if savedPermanent.Ephemeral { t.Error("Permanent issue should have Wisp=false") } } @@ -468,7 +468,7 @@ func TestWispFilter(t *testing.T) { Status: types.StatusClosed, // Closed for cleanup test Priority: 2, IssueType: types.TypeMessage, - Wisp: true, + Ephemeral: true, CreatedAt: time.Now(), UpdatedAt: time.Now(), } @@ -483,7 +483,7 @@ func TestWispFilter(t *testing.T) { Status: types.StatusClosed, Priority: 2, IssueType: types.TypeTask, - Wisp: false, + Ephemeral: false, CreatedAt: time.Now(), UpdatedAt: time.Now(), } @@ -497,7 +497,7 @@ func TestWispFilter(t *testing.T) { closedStatus := types.StatusClosed wispFilter := types.IssueFilter{ Status: &closedStatus, - Wisp: &wispTrue, + Ephemeral: &wispTrue, } wispIssues, err := store.SearchIssues(ctx, "", 
wispFilter) @@ -512,7 +512,7 @@ func TestWispFilter(t *testing.T) { wispFalse := false nonWispFilter := types.IssueFilter{ Status: &closedStatus, - Wisp: &wispFalse, + Ephemeral: &wispFalse, } permanentIssues, err := store.SearchIssues(ctx, "", nonWispFilter) diff --git a/internal/storage/sqlite/issues.go b/internal/storage/sqlite/issues.go index 41d221f3..7c566165 100644 --- a/internal/storage/sqlite/issues.go +++ b/internal/storage/sqlite/issues.go @@ -28,7 +28,7 @@ func insertIssue(ctx context.Context, conn *sql.Conn, issue *types.Issue) error } wisp := 0 - if issue.Wisp { + if issue.Ephemeral { wisp = 1 } pinned := 0 @@ -94,7 +94,7 @@ func insertIssues(ctx context.Context, conn *sql.Conn, issues []*types.Issue) er } wisp := 0 - if issue.Wisp { + if issue.Ephemeral { wisp = 1 } pinned := 0 diff --git a/internal/storage/sqlite/migrations/019_messaging_fields.go b/internal/storage/sqlite/migrations/019_messaging_fields.go index d5eddd65..6b44d237 100644 --- a/internal/storage/sqlite/migrations/019_messaging_fields.go +++ b/internal/storage/sqlite/migrations/019_messaging_fields.go @@ -20,10 +20,6 @@ func MigrateMessagingFields(db *sql.DB) error { }{ {"sender", "TEXT DEFAULT ''"}, {"ephemeral", "INTEGER DEFAULT 0"}, - {"replies_to", "TEXT DEFAULT ''"}, - {"relates_to", "TEXT DEFAULT ''"}, - {"duplicate_of", "TEXT DEFAULT ''"}, - {"superseded_by", "TEXT DEFAULT ''"}, } for _, col := range columns { @@ -59,11 +55,5 @@ func MigrateMessagingFields(db *sql.DB) error { return fmt.Errorf("failed to create sender index: %w", err) } - // Add index for replies_to (for efficient thread queries) - _, err = db.Exec(`CREATE INDEX IF NOT EXISTS idx_issues_replies_to ON issues(replies_to) WHERE replies_to != ''`) - if err != nil { - return fmt.Errorf("failed to create replies_to index: %w", err) - } - return nil } diff --git a/internal/storage/sqlite/migrations/021_migrate_edge_fields.go b/internal/storage/sqlite/migrations/021_migrate_edge_fields.go index 35f1b295..69013a54 
100644 --- a/internal/storage/sqlite/migrations/021_migrate_edge_fields.go +++ b/internal/storage/sqlite/migrations/021_migrate_edge_fields.go @@ -21,137 +21,176 @@ import ( func MigrateEdgeFields(db *sql.DB) error { now := time.Now() + hasColumn := func(name string) (bool, error) { + var exists bool + err := db.QueryRow(` + SELECT COUNT(*) > 0 + FROM pragma_table_info('issues') + WHERE name = ? + `, name).Scan(&exists) + return exists, err + } + + hasRepliesTo, err := hasColumn("replies_to") + if err != nil { + return fmt.Errorf("failed to check replies_to column: %w", err) + } + hasRelatesTo, err := hasColumn("relates_to") + if err != nil { + return fmt.Errorf("failed to check relates_to column: %w", err) + } + hasDuplicateOf, err := hasColumn("duplicate_of") + if err != nil { + return fmt.Errorf("failed to check duplicate_of column: %w", err) + } + hasSupersededBy, err := hasColumn("superseded_by") + if err != nil { + return fmt.Errorf("failed to check superseded_by column: %w", err) + } + + if !hasRepliesTo && !hasRelatesTo && !hasDuplicateOf && !hasSupersededBy { + return nil + } + // Migrate replies_to fields to replies-to edges // For thread_id, use the parent's ID as the thread root for first-level replies // (more sophisticated thread detection would require recursive queries) - rows, err := db.Query(` - SELECT id, replies_to - FROM issues - WHERE replies_to != '' AND replies_to IS NOT NULL - `) - if err != nil { - return fmt.Errorf("failed to query replies_to fields: %w", err) - } - defer rows.Close() - - for rows.Next() { - var issueID, repliesTo string - if err := rows.Scan(&issueID, &repliesTo); err != nil { - return fmt.Errorf("failed to scan replies_to row: %w", err) - } - - // Use repliesTo as thread_id (the root of the thread) - // This is a simplification - existing threads will have the parent as thread root - _, err := db.Exec(` - INSERT OR IGNORE INTO dependencies (issue_id, depends_on_id, type, created_at, created_by, metadata, thread_id) - 
VALUES (?, ?, 'replies-to', ?, 'migration', '{}', ?) - `, issueID, repliesTo, now, repliesTo) + if hasRepliesTo { + rows, err := db.Query(` + SELECT id, replies_to + FROM issues + WHERE replies_to != '' AND replies_to IS NOT NULL + `) if err != nil { - return fmt.Errorf("failed to create replies-to edge for %s: %w", issueID, err) + return fmt.Errorf("failed to query replies_to fields: %w", err) + } + defer rows.Close() + + for rows.Next() { + var issueID, repliesTo string + if err := rows.Scan(&issueID, &repliesTo); err != nil { + return fmt.Errorf("failed to scan replies_to row: %w", err) + } + + // Use repliesTo as thread_id (the root of the thread) + // This is a simplification - existing threads will have the parent as thread root + _, err := db.Exec(` + INSERT OR IGNORE INTO dependencies (issue_id, depends_on_id, type, created_at, created_by, metadata, thread_id) + VALUES (?, ?, 'replies-to', ?, 'migration', '{}', ?) + `, issueID, repliesTo, now, repliesTo) + if err != nil { + return fmt.Errorf("failed to create replies-to edge for %s: %w", issueID, err) + } + } + if err := rows.Err(); err != nil { + return fmt.Errorf("error iterating replies_to rows: %w", err) } - } - if err := rows.Err(); err != nil { - return fmt.Errorf("error iterating replies_to rows: %w", err) } // Migrate relates_to fields to relates-to edges // relates_to is stored as JSON array string - rows, err = db.Query(` - SELECT id, relates_to - FROM issues - WHERE relates_to != '' AND relates_to != '[]' AND relates_to IS NOT NULL - `) - if err != nil { - return fmt.Errorf("failed to query relates_to fields: %w", err) - } - defer rows.Close() - - for rows.Next() { - var issueID, relatesTo string - if err := rows.Scan(&issueID, &relatesTo); err != nil { - return fmt.Errorf("failed to scan relates_to row: %w", err) + if hasRelatesTo { + rows, err := db.Query(` + SELECT id, relates_to + FROM issues + WHERE relates_to != '' AND relates_to != '[]' AND relates_to IS NOT NULL + `) + if err != nil { + 
return fmt.Errorf("failed to query relates_to fields: %w", err) } + defer rows.Close() - // Parse JSON array - var relatedIDs []string - if err := json.Unmarshal([]byte(relatesTo), &relatedIDs); err != nil { - // Skip malformed JSON - continue - } + for rows.Next() { + var issueID, relatesTo string + if err := rows.Scan(&issueID, &relatesTo); err != nil { + return fmt.Errorf("failed to scan relates_to row: %w", err) + } - for _, relatedID := range relatedIDs { - if relatedID == "" { + // Parse JSON array + var relatedIDs []string + if err := json.Unmarshal([]byte(relatesTo), &relatedIDs); err != nil { + // Skip malformed JSON continue } - _, err := db.Exec(` - INSERT OR IGNORE INTO dependencies (issue_id, depends_on_id, type, created_at, created_by, metadata, thread_id) - VALUES (?, ?, 'relates-to', ?, 'migration', '{}', '') - `, issueID, relatedID, now) - if err != nil { - return fmt.Errorf("failed to create relates-to edge for %s -> %s: %w", issueID, relatedID, err) + + for _, relatedID := range relatedIDs { + if relatedID == "" { + continue + } + _, err := db.Exec(` + INSERT OR IGNORE INTO dependencies (issue_id, depends_on_id, type, created_at, created_by, metadata, thread_id) + VALUES (?, ?, 'relates-to', ?, 'migration', '{}', '') + `, issueID, relatedID, now) + if err != nil { + return fmt.Errorf("failed to create relates-to edge for %s -> %s: %w", issueID, relatedID, err) + } } } - } - if err := rows.Err(); err != nil { - return fmt.Errorf("error iterating relates_to rows: %w", err) + if err := rows.Err(); err != nil { + return fmt.Errorf("error iterating relates_to rows: %w", err) + } } // Migrate duplicate_of fields to duplicates edges - rows, err = db.Query(` - SELECT id, duplicate_of - FROM issues - WHERE duplicate_of != '' AND duplicate_of IS NOT NULL - `) - if err != nil { - return fmt.Errorf("failed to query duplicate_of fields: %w", err) - } - defer rows.Close() - - for rows.Next() { - var issueID, duplicateOf string - if err := rows.Scan(&issueID, 
&duplicateOf); err != nil { - return fmt.Errorf("failed to scan duplicate_of row: %w", err) - } - - _, err := db.Exec(` - INSERT OR IGNORE INTO dependencies (issue_id, depends_on_id, type, created_at, created_by, metadata, thread_id) - VALUES (?, ?, 'duplicates', ?, 'migration', '{}', '') - `, issueID, duplicateOf, now) + if hasDuplicateOf { + rows, err := db.Query(` + SELECT id, duplicate_of + FROM issues + WHERE duplicate_of != '' AND duplicate_of IS NOT NULL + `) if err != nil { - return fmt.Errorf("failed to create duplicates edge for %s: %w", issueID, err) + return fmt.Errorf("failed to query duplicate_of fields: %w", err) + } + defer rows.Close() + + for rows.Next() { + var issueID, duplicateOf string + if err := rows.Scan(&issueID, &duplicateOf); err != nil { + return fmt.Errorf("failed to scan duplicate_of row: %w", err) + } + + _, err := db.Exec(` + INSERT OR IGNORE INTO dependencies (issue_id, depends_on_id, type, created_at, created_by, metadata, thread_id) + VALUES (?, ?, 'duplicates', ?, 'migration', '{}', '') + `, issueID, duplicateOf, now) + if err != nil { + return fmt.Errorf("failed to create duplicates edge for %s: %w", issueID, err) + } + } + if err := rows.Err(); err != nil { + return fmt.Errorf("error iterating duplicate_of rows: %w", err) } - } - if err := rows.Err(); err != nil { - return fmt.Errorf("error iterating duplicate_of rows: %w", err) } // Migrate superseded_by fields to supersedes edges - rows, err = db.Query(` - SELECT id, superseded_by - FROM issues - WHERE superseded_by != '' AND superseded_by IS NOT NULL - `) - if err != nil { - return fmt.Errorf("failed to query superseded_by fields: %w", err) - } - defer rows.Close() - - for rows.Next() { - var issueID, supersededBy string - if err := rows.Scan(&issueID, &supersededBy); err != nil { - return fmt.Errorf("failed to scan superseded_by row: %w", err) - } - - _, err := db.Exec(` - INSERT OR IGNORE INTO dependencies (issue_id, depends_on_id, type, created_at, created_by, metadata, 
thread_id) - VALUES (?, ?, 'supersedes', ?, 'migration', '{}', '') - `, issueID, supersededBy, now) + if hasSupersededBy { + rows, err := db.Query(` + SELECT id, superseded_by + FROM issues + WHERE superseded_by != '' AND superseded_by IS NOT NULL + `) if err != nil { - return fmt.Errorf("failed to create supersedes edge for %s: %w", issueID, err) + return fmt.Errorf("failed to query superseded_by fields: %w", err) + } + defer rows.Close() + + for rows.Next() { + var issueID, supersededBy string + if err := rows.Scan(&issueID, &supersededBy); err != nil { + return fmt.Errorf("failed to scan superseded_by row: %w", err) + } + + _, err := db.Exec(` + INSERT OR IGNORE INTO dependencies (issue_id, depends_on_id, type, created_at, created_by, metadata, thread_id) + VALUES (?, ?, 'supersedes', ?, 'migration', '{}', '') + `, issueID, supersededBy, now) + if err != nil { + return fmt.Errorf("failed to create supersedes edge for %s: %w", issueID, err) + } + } + if err := rows.Err(); err != nil { + return fmt.Errorf("error iterating superseded_by rows: %w", err) } - } - if err := rows.Err(); err != nil { - return fmt.Errorf("error iterating superseded_by rows: %w", err) } return nil diff --git a/internal/storage/sqlite/migrations/022_drop_edge_columns.go b/internal/storage/sqlite/migrations/022_drop_edge_columns.go index 944bddb5..b2cee13e 100644 --- a/internal/storage/sqlite/migrations/022_drop_edge_columns.go +++ b/internal/storage/sqlite/migrations/022_drop_edge_columns.go @@ -57,6 +57,57 @@ func MigrateDropEdgeColumns(db *sql.DB) error { return nil } + // Preserve newer columns if they already exist (migration may run on partially-migrated DBs). 
+ hasPinned, err := checkCol("pinned") + if err != nil { + return fmt.Errorf("failed to check pinned column: %w", err) + } + hasIsTemplate, err := checkCol("is_template") + if err != nil { + return fmt.Errorf("failed to check is_template column: %w", err) + } + hasAwaitType, err := checkCol("await_type") + if err != nil { + return fmt.Errorf("failed to check await_type column: %w", err) + } + hasAwaitID, err := checkCol("await_id") + if err != nil { + return fmt.Errorf("failed to check await_id column: %w", err) + } + hasTimeoutNs, err := checkCol("timeout_ns") + if err != nil { + return fmt.Errorf("failed to check timeout_ns column: %w", err) + } + hasWaiters, err := checkCol("waiters") + if err != nil { + return fmt.Errorf("failed to check waiters column: %w", err) + } + + pinnedExpr := "0" + if hasPinned { + pinnedExpr = "pinned" + } + isTemplateExpr := "0" + if hasIsTemplate { + isTemplateExpr = "is_template" + } + awaitTypeExpr := "''" + if hasAwaitType { + awaitTypeExpr = "await_type" + } + awaitIDExpr := "''" + if hasAwaitID { + awaitIDExpr = "await_id" + } + timeoutNsExpr := "0" + if hasTimeoutNs { + timeoutNsExpr = "timeout_ns" + } + waitersExpr := "''" + if hasWaiters { + waitersExpr = "waiters" + } + // SQLite 3.35.0+ supports DROP COLUMN, but we use table recreation for compatibility // This is idempotent - we recreate the table without the deprecated columns @@ -117,6 +168,12 @@ func MigrateDropEdgeColumns(db *sql.DB) error { original_type TEXT DEFAULT '', sender TEXT DEFAULT '', ephemeral INTEGER DEFAULT 0, + pinned INTEGER DEFAULT 0, + is_template INTEGER DEFAULT 0, + await_type TEXT, + await_id TEXT, + timeout_ns INTEGER, + waiters TEXT, close_reason TEXT DEFAULT '', CHECK ((status = 'closed') = (closed_at IS NOT NULL)) ) @@ -132,7 +189,8 @@ func MigrateDropEdgeColumns(db *sql.DB) error { notes, status, priority, issue_type, assignee, estimated_minutes, created_at, updated_at, closed_at, external_ref, source_repo, compaction_level, compacted_at, 
compacted_at_commit, original_size, deleted_at, - deleted_by, delete_reason, original_type, sender, ephemeral, close_reason + deleted_by, delete_reason, original_type, sender, ephemeral, pinned, is_template, + await_type, await_id, timeout_ns, waiters, close_reason ) SELECT id, content_hash, title, description, design, acceptance_criteria, @@ -140,9 +198,11 @@ func MigrateDropEdgeColumns(db *sql.DB) error { created_at, updated_at, closed_at, external_ref, COALESCE(source_repo, ''), compaction_level, compacted_at, compacted_at_commit, original_size, deleted_at, deleted_by, delete_reason, original_type, sender, ephemeral, + %s, %s, + %s, %s, %s, %s, COALESCE(close_reason, '') FROM issues - `) + `, pinnedExpr, isTemplateExpr, awaitTypeExpr, awaitIDExpr, timeoutNsExpr, waitersExpr) if err != nil { return fmt.Errorf("failed to copy issues data: %w", err) } diff --git a/internal/storage/sqlite/migrations/023_pinned_column.go b/internal/storage/sqlite/migrations/023_pinned_column.go index 9854f8e0..73c238dc 100644 --- a/internal/storage/sqlite/migrations/023_pinned_column.go +++ b/internal/storage/sqlite/migrations/023_pinned_column.go @@ -20,6 +20,11 @@ func MigratePinnedColumn(db *sql.DB) error { } if columnExists { + // Column exists (e.g. created by new schema); ensure index exists. + _, err = db.Exec(`CREATE INDEX IF NOT EXISTS idx_issues_pinned ON issues(pinned) WHERE pinned = 1`) + if err != nil { + return fmt.Errorf("failed to create pinned index: %w", err) + } return nil } diff --git a/internal/storage/sqlite/migrations/024_is_template_column.go b/internal/storage/sqlite/migrations/024_is_template_column.go index 07f9462c..fee0316d 100644 --- a/internal/storage/sqlite/migrations/024_is_template_column.go +++ b/internal/storage/sqlite/migrations/024_is_template_column.go @@ -21,6 +21,11 @@ func MigrateIsTemplateColumn(db *sql.DB) error { } if columnExists { + // Column exists (e.g. created by new schema); ensure index exists. 
+ _, err = db.Exec(`CREATE INDEX IF NOT EXISTS idx_issues_is_template ON issues(is_template) WHERE is_template = 1`) + if err != nil { + return fmt.Errorf("failed to create is_template index: %w", err) + } return nil } diff --git a/internal/storage/sqlite/migrations/028_tombstone_closed_at.go b/internal/storage/sqlite/migrations/028_tombstone_closed_at.go index 2991bb8b..966eff72 100644 --- a/internal/storage/sqlite/migrations/028_tombstone_closed_at.go +++ b/internal/storage/sqlite/migrations/028_tombstone_closed_at.go @@ -3,6 +3,7 @@ package migrations import ( "database/sql" "fmt" + "strings" ) // MigrateTombstoneClosedAt updates the closed_at constraint to allow tombstones @@ -22,8 +23,20 @@ func MigrateTombstoneClosedAt(db *sql.DB) error { // SQLite doesn't support ALTER TABLE to modify CHECK constraints // We must recreate the table with the new constraint + // Idempotency check: see if the new CHECK constraint already exists + // The new constraint contains "status = 'tombstone'" which the old one didn't + var tableSql string + err := db.QueryRow(`SELECT sql FROM sqlite_master WHERE type='table' AND name='issues'`).Scan(&tableSql) + if err != nil { + return fmt.Errorf("failed to get issues table schema: %w", err) + } + // If the schema already has the tombstone clause, migration is already applied + if strings.Contains(tableSql, "status = 'tombstone'") || strings.Contains(tableSql, `status = "tombstone"`) { + return nil + } + // Step 0: Drop views that depend on the issues table - _, err := db.Exec(`DROP VIEW IF EXISTS ready_issues`) + _, err = db.Exec(`DROP VIEW IF EXISTS ready_issues`) if err != nil { return fmt.Errorf("failed to drop ready_issues view: %w", err) } @@ -48,6 +61,7 @@ func MigrateTombstoneClosedAt(db *sql.DB) error { assignee TEXT, estimated_minutes INTEGER, created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP, + created_by TEXT DEFAULT '', updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP, closed_at DATETIME, external_ref TEXT, @@ 
-81,10 +95,73 @@ func MigrateTombstoneClosedAt(db *sql.DB) error { } // Step 2: Copy data from old table to new table - _, err = db.Exec(` - INSERT INTO issues_new - SELECT * FROM issues - `) + // We need to check if created_by column exists in the old table + // If not, we insert a default empty string for it + var hasCreatedBy bool + rows, err := db.Query(`PRAGMA table_info(issues)`) + if err != nil { + return fmt.Errorf("failed to get table info: %w", err) + } + for rows.Next() { + var cid int + var name, ctype string + var notnull, pk int + var dflt interface{} + if err := rows.Scan(&cid, &name, &ctype, &notnull, &dflt, &pk); err != nil { + rows.Close() + return fmt.Errorf("failed to scan table info: %w", err) + } + if name == "created_by" { + hasCreatedBy = true + break + } + } + rows.Close() + + var insertSQL string + if hasCreatedBy { + // Old table has created_by, copy all columns directly + insertSQL = ` + INSERT INTO issues_new ( + id, content_hash, title, description, design, acceptance_criteria, notes, + status, priority, issue_type, assignee, estimated_minutes, created_at, + created_by, updated_at, closed_at, external_ref, source_repo, compaction_level, + compacted_at, compacted_at_commit, original_size, deleted_at, deleted_by, + delete_reason, original_type, sender, ephemeral, close_reason, pinned, + is_template, await_type, await_id, timeout_ns, waiters + ) + SELECT + id, content_hash, title, description, design, acceptance_criteria, notes, + status, priority, issue_type, assignee, estimated_minutes, created_at, + created_by, updated_at, closed_at, external_ref, source_repo, compaction_level, + compacted_at, compacted_at_commit, original_size, deleted_at, deleted_by, + delete_reason, original_type, sender, ephemeral, close_reason, pinned, + is_template, await_type, await_id, timeout_ns, waiters + FROM issues + ` + } else { + // Old table doesn't have created_by, use empty string default + insertSQL = ` + INSERT INTO issues_new ( + id, content_hash, 
title, description, design, acceptance_criteria, notes, + status, priority, issue_type, assignee, estimated_minutes, created_at, + created_by, updated_at, closed_at, external_ref, source_repo, compaction_level, + compacted_at, compacted_at_commit, original_size, deleted_at, deleted_by, + delete_reason, original_type, sender, ephemeral, close_reason, pinned, + is_template, await_type, await_id, timeout_ns, waiters + ) + SELECT + id, content_hash, title, description, design, acceptance_criteria, notes, + status, priority, issue_type, assignee, estimated_minutes, created_at, + '', updated_at, closed_at, external_ref, source_repo, compaction_level, + compacted_at, compacted_at_commit, original_size, deleted_at, deleted_by, + delete_reason, original_type, sender, ephemeral, close_reason, pinned, + is_template, await_type, await_id, timeout_ns, waiters + FROM issues + ` + } + + _, err = db.Exec(insertSQL) if err != nil { return fmt.Errorf("failed to copy issues data: %w", err) } diff --git a/internal/storage/sqlite/migrations_template_pinned_regression_test.go b/internal/storage/sqlite/migrations_template_pinned_regression_test.go new file mode 100644 index 00000000..818596bb --- /dev/null +++ b/internal/storage/sqlite/migrations_template_pinned_regression_test.go @@ -0,0 +1,59 @@ +package sqlite + +import ( + "context" + "path/filepath" + "testing" + + "github.com/steveyegge/beads/internal/types" +) + +func TestRunMigrations_DoesNotResetPinnedOrTemplate(t *testing.T) { + ctx := context.Background() + dir := t.TempDir() + dbPath := filepath.Join(dir, "beads.db") + + s, err := New(ctx, dbPath) + if err != nil { + t.Fatalf("New: %v", err) + } + t.Cleanup(func() { _ = s.Close() }) + + if err := s.SetConfig(ctx, "issue_prefix", "test"); err != nil { + t.Fatalf("SetConfig(issue_prefix): %v", err) + } + + issue := &types.Issue{ + Title: "Pinned template", + Status: types.StatusOpen, + Priority: 2, + IssueType: types.TypeTask, + Pinned: true, + IsTemplate: true, + } + if err := 
s.CreateIssue(ctx, issue, "test-user"); err != nil { + t.Fatalf("CreateIssue: %v", err) + } + + _ = s.Close() + + s2, err := New(ctx, dbPath) + if err != nil { + t.Fatalf("New(reopen): %v", err) + } + defer func() { _ = s2.Close() }() + + got, err := s2.GetIssue(ctx, issue.ID) + if err != nil { + t.Fatalf("GetIssue: %v", err) + } + if got == nil { + t.Fatalf("expected issue to exist") + } + if !got.Pinned { + t.Fatalf("expected issue to remain pinned") + } + if !got.IsTemplate { + t.Fatalf("expected issue to remain template") + } +} diff --git a/internal/storage/sqlite/multirepo.go b/internal/storage/sqlite/multirepo.go index 34f37fdb..c2509ff7 100644 --- a/internal/storage/sqlite/multirepo.go +++ b/internal/storage/sqlite/multirepo.go @@ -282,7 +282,7 @@ func (s *SQLiteStorage) upsertIssueInTx(ctx context.Context, tx *sql.Tx, issue * err := tx.QueryRowContext(ctx, `SELECT id FROM issues WHERE id = ?`, issue.ID).Scan(&existingID) wisp := 0 - if issue.Wisp { + if issue.Ephemeral { wisp = 1 } pinned := 0 diff --git a/internal/storage/sqlite/multirepo_export.go b/internal/storage/sqlite/multirepo_export.go index 48d1943e..0d79741f 100644 --- a/internal/storage/sqlite/multirepo_export.go +++ b/internal/storage/sqlite/multirepo_export.go @@ -54,7 +54,7 @@ func (s *SQLiteStorage) ExportToMultiRepo(ctx context.Context) (map[string]int, // Wisps exist only in SQLite and are shared via .beads/redirect, not JSONL. 
filtered := make([]*types.Issue, 0, len(allIssues)) for _, issue := range allIssues { - if !issue.Wisp { + if !issue.Ephemeral { filtered = append(filtered, issue) } } diff --git a/internal/storage/sqlite/multirepo_test.go b/internal/storage/sqlite/multirepo_test.go index 5741fd60..18d229b4 100644 --- a/internal/storage/sqlite/multirepo_test.go +++ b/internal/storage/sqlite/multirepo_test.go @@ -909,7 +909,7 @@ func TestUpsertPreservesGateFields(t *testing.T) { Status: types.StatusOpen, Priority: 1, IssueType: types.TypeGate, - Wisp: true, + Ephemeral: true, AwaitType: "gh:run", AwaitID: "123456789", Timeout: 30 * 60 * 1000000000, // 30 minutes in nanoseconds diff --git a/internal/storage/sqlite/queries.go b/internal/storage/sqlite/queries.go index 07325aaf..f11fe85f 100644 --- a/internal/storage/sqlite/queries.go +++ b/internal/storage/sqlite/queries.go @@ -349,7 +349,7 @@ func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue, issue.Sender = sender.String } if wisp.Valid && wisp.Int64 != 0 { - issue.Wisp = true + issue.Ephemeral = true } // Pinned field (bd-7h5) if pinned.Valid && pinned.Int64 != 0 { @@ -562,7 +562,7 @@ func (s *SQLiteStorage) GetIssueByExternalRef(ctx context.Context, externalRef s issue.Sender = sender.String } if wisp.Valid && wisp.Int64 != 0 { - issue.Wisp = true + issue.Ephemeral = true } // Pinned field (bd-7h5) if pinned.Valid && pinned.Int64 != 0 { @@ -1652,8 +1652,8 @@ func (s *SQLiteStorage) SearchIssues(ctx context.Context, query string, filter t } // Wisp filtering (bd-kwro.9) - if filter.Wisp != nil { - if *filter.Wisp { + if filter.Ephemeral != nil { + if *filter.Ephemeral { whereClauses = append(whereClauses, "ephemeral = 1") // SQL column is still 'ephemeral' } else { whereClauses = append(whereClauses, "(ephemeral = 0 OR ephemeral IS NULL)") diff --git a/internal/storage/sqlite/ready.go b/internal/storage/sqlite/ready.go index 29604142..840d3d24 100644 --- a/internal/storage/sqlite/ready.go +++ 
b/internal/storage/sqlite/ready.go @@ -17,7 +17,8 @@ import ( // Excludes pinned issues which are persistent anchors, not actionable work (bd-92u) func (s *SQLiteStorage) GetReadyWork(ctx context.Context, filter types.WorkFilter) ([]*types.Issue, error) { whereClauses := []string{ - "i.pinned = 0", // Exclude pinned issues (bd-92u) + "i.pinned = 0", // Exclude pinned issues (bd-92u) + "(i.ephemeral = 0 OR i.ephemeral IS NULL)", // Exclude wisps (hq-t15s) } args := []interface{}{} @@ -399,7 +400,7 @@ func (s *SQLiteStorage) GetStaleIssues(ctx context.Context, filter types.StaleFi issue.Sender = sender.String } if ephemeral.Valid && ephemeral.Int64 != 0 { - issue.Wisp = true + issue.Ephemeral = true } // Pinned field (bd-7h5) if pinned.Valid && pinned.Int64 != 0 { diff --git a/internal/storage/sqlite/schema.go b/internal/storage/sqlite/schema.go index 898b13f4..3eb19f45 100644 --- a/internal/storage/sqlite/schema.go +++ b/internal/storage/sqlite/schema.go @@ -230,6 +230,7 @@ WITH RECURSIVE SELECT i.* FROM issues i WHERE i.status = 'open' + AND (i.ephemeral = 0 OR i.ephemeral IS NULL) AND NOT EXISTS ( SELECT 1 FROM blocked_transitively WHERE issue_id = i.id ); diff --git a/internal/storage/sqlite/transaction.go b/internal/storage/sqlite/transaction.go index b7e4067b..82607f4a 100644 --- a/internal/storage/sqlite/transaction.go +++ b/internal/storage/sqlite/transaction.go @@ -1089,8 +1089,8 @@ func (t *sqliteTxStorage) SearchIssues(ctx context.Context, query string, filter } // Wisp filtering (bd-kwro.9) - if filter.Wisp != nil { - if *filter.Wisp { + if filter.Ephemeral != nil { + if *filter.Ephemeral { whereClauses = append(whereClauses, "ephemeral = 1") // SQL column is still 'ephemeral' } else { whereClauses = append(whereClauses, "(ephemeral = 0 OR ephemeral IS NULL)") @@ -1244,7 +1244,7 @@ func scanIssueRow(row scanner) (*types.Issue, error) { issue.Sender = sender.String } if wisp.Valid && wisp.Int64 != 0 { - issue.Wisp = true + issue.Ephemeral = true } // 
Pinned field (bd-7h5) if pinned.Valid && pinned.Int64 != 0 { diff --git a/internal/syncbranch/worktree_sync_test.go b/internal/syncbranch/worktree_sync_test.go index 148d609d..38c11eb1 100644 --- a/internal/syncbranch/worktree_sync_test.go +++ b/internal/syncbranch/worktree_sync_test.go @@ -413,4 +413,3 @@ func setupTestRepoWithRemote(t *testing.T) string { return tmpDir } - diff --git a/internal/types/types.go b/internal/types/types.go index 105ca94b..5f7beed8 100644 --- a/internal/types/types.go +++ b/internal/types/types.go @@ -44,8 +44,8 @@ type Issue struct { OriginalType string `json:"original_type,omitempty"` // Issue type before deletion (for tombstones) // Messaging fields (bd-kwro): inter-agent communication support - Sender string `json:"sender,omitempty"` // Who sent this (for messages) - Wisp bool `json:"wisp,omitempty"` // Wisp = ephemeral vapor from the Steam Engine; bulk-deleted when closed + Sender string `json:"sender,omitempty"` // Who sent this (for messages) + Ephemeral bool `json:"ephemeral,omitempty"` // If true, not exported to JSONL; bulk-deleted when closed // NOTE: RepliesTo, RelatesTo, DuplicateOf, SupersededBy moved to dependencies table // per Decision 004 (Edge Schema Consolidation). Use dependency API instead. 
@@ -598,8 +598,8 @@ type IssueFilter struct { // Tombstone filtering (bd-1bu) IncludeTombstones bool // If false (default), exclude tombstones from results - // Wisp filtering (bd-kwro.9) - Wisp *bool // Filter by wisp flag (nil = any, true = only wisps, false = only non-wisps) + // Ephemeral filtering (bd-kwro.9) + Ephemeral *bool // Filter by ephemeral flag (nil = any, true = only ephemeral, false = only persistent) // Pinned filtering (bd-7h5) Pinned *bool // Filter by pinned flag (nil = any, true = only pinned, false = only non-pinned) diff --git a/scripts/bump-version.sh b/scripts/bump-version.sh index 3bda2392..a77f4930 100755 --- a/scripts/bump-version.sh +++ b/scripts/bump-version.sh @@ -12,10 +12,13 @@ set -e # QUICK START (for typical release): # # # 1. Update CHANGELOG.md and cmd/bd/info.go with release notes (manual) -# # 2. Run version bump with all local installations: -# ./scripts/bump-version.sh X.Y.Z --commit --all -# # 3. Test locally, then push: -# git push origin main && git push origin vX.Y.Z +# # 2. Run version bump with chaos tests and all local installations: +# ./scripts/bump-version.sh X.Y.Z --run-chaos-tests --commit --tag --push --all +# +# Or step by step: +# ./scripts/bump-version.sh X.Y.Z --run-chaos-tests # Run chaos tests first +# ./scripts/bump-version.sh X.Y.Z --commit --all # Commit and install +# git push origin main && git push origin vX.Y.Z # Push # # WHAT --all DOES: # --install - Build bd and install to ~/go/bin AND ~/.local/bin @@ -47,7 +50,7 @@ NC='\033[0m' # No Color # Usage message usage() { - echo "Usage: $0 [--commit] [--tag] [--push] [--install] [--upgrade-mcp] [--mcp-local] [--restart-daemons] [--publish-npm] [--publish-pypi] [--publish-all] [--all]" + echo "Usage: $0 [--commit] [--tag] [--push] [--install] [--upgrade-mcp] [--mcp-local] [--restart-daemons] [--run-chaos-tests] [--publish-npm] [--publish-pypi] [--publish-all] [--all]" echo "" echo "Bump version across all beads components." 
echo "" @@ -60,6 +63,7 @@ usage() { echo " --upgrade-mcp Upgrade local beads-mcp installation via pip after version bump" echo " --mcp-local Install beads-mcp from local source (for pre-PyPI testing)" echo " --restart-daemons Restart all bd daemons to pick up new version" + echo " --run-chaos-tests Run chaos/corruption recovery tests before tagging" echo " --publish-npm Publish npm package to registry (requires npm login)" echo " --publish-pypi Publish beads-mcp to PyPI (requires TWINE credentials)" echo " --publish-all Shorthand for --publish-npm --publish-pypi" @@ -75,7 +79,11 @@ usage() { echo " $0 0.9.3 --commit --tag --push # Full release preparation" echo " $0 0.9.3 --all # Install bd, local MCP, and restart daemons" echo " $0 0.9.3 --commit --all # Commit and install everything locally" + echo " $0 0.9.3 --run-chaos-tests # Run chaos tests before proceeding" echo " $0 0.9.3 --publish-all # Publish to npm and PyPI" + echo "" + echo "Recommended release command (includes chaos testing):" + echo " $0 X.Y.Z --run-chaos-tests --commit --tag --push --all" exit 1 } @@ -153,6 +161,7 @@ main() { AUTO_RESTART_DAEMONS=false AUTO_PUBLISH_NPM=false AUTO_PUBLISH_PYPI=false + AUTO_RUN_CHAOS_TESTS=false # Parse flags shift # Remove version argument @@ -189,6 +198,9 @@ main() { AUTO_PUBLISH_NPM=true AUTO_PUBLISH_PYPI=true ;; + --run-chaos-tests) + AUTO_RUN_CHAOS_TESTS=true + ;; --all) AUTO_INSTALL=true AUTO_MCP_LOCAL=true @@ -602,6 +614,34 @@ main() { echo "" fi + # Run chaos tests if requested (before commit/tag to catch issues early) + if [ "$AUTO_RUN_CHAOS_TESTS" = true ]; then + echo "Running chaos/corruption recovery tests..." 
+ echo " (This tests database corruption recovery, may take a few minutes)" + echo "" + + # Run chaos tests with the chaos build tag + if go test -tags=chaos -timeout=10m ./cmd/bd/...; then + echo -e "${GREEN}✓ Chaos tests passed${NC}" + echo "" + else + echo -e "${RED}✗ Chaos tests failed${NC}" + echo -e "${YELLOW} Fix the failures before releasing.${NC}" + exit 1 + fi + + # Also run E2E tests if available + echo "Running E2E tests..." + if go test -tags=e2e -timeout=10m ./cmd/bd/...; then + echo -e "${GREEN}✓ E2E tests passed${NC}" + echo "" + else + echo -e "${RED}✗ E2E tests failed${NC}" + echo -e "${YELLOW} Fix the failures before releasing.${NC}" + exit 1 + fi + fi + # Check if cmd/bd/info.go has been updated with the new version if ! grep -q "\"$NEW_VERSION\"" cmd/bd/info.go; then echo -e "${YELLOW}Warning: cmd/bd/info.go does not contain an entry for $NEW_VERSION${NC}" diff --git a/scripts/test.sh b/scripts/test.sh index dc826936..d5f2c3ba 100755 --- a/scripts/test.sh +++ b/scripts/test.sh @@ -24,6 +24,9 @@ TIMEOUT="${TEST_TIMEOUT:-3m}" SKIP_PATTERN=$(build_skip_pattern) VERBOSE="${TEST_VERBOSE:-}" RUN_PATTERN="${TEST_RUN:-}" +COVERAGE="${TEST_COVER:-}" +COVERPROFILE="${TEST_COVERPROFILE:-/tmp/beads.coverage.out}" +COVERPKG="${TEST_COVERPKG:-}" # Parse arguments PACKAGES=() @@ -77,10 +80,25 @@ if [[ -n "$RUN_PATTERN" ]]; then CMD+=(-run "$RUN_PATTERN") fi +if [[ -n "$COVERAGE" ]]; then + CMD+=(-covermode=atomic -coverprofile "$COVERPROFILE") + if [[ -n "$COVERPKG" ]]; then + CMD+=(-coverpkg "$COVERPKG") + fi +fi + CMD+=("${PACKAGES[@]}") echo "Running: ${CMD[*]}" >&2 echo "Skipping: $SKIP_PATTERN" >&2 echo "" >&2 -exec "${CMD[@]}" +"${CMD[@]}" +status=$? 
+ +if [[ -n "$COVERAGE" ]]; then + total=$(go tool cover -func="$COVERPROFILE" | awk '/^total:/ {print $NF}') + echo "Total coverage: ${total} (profile: ${COVERPROFILE})" >&2 +fi + +exit $status diff --git a/skills/beads/references/MOLECULES.md b/skills/beads/references/MOLECULES.md index 9484b832..4312511d 100644 --- a/skills/beads/references/MOLECULES.md +++ b/skills/beads/references/MOLECULES.md @@ -83,8 +83,8 @@ bd mol spawn mol-release --var version=2.0 # With variable substitution **Chemistry shortcuts:** ```bash -bd pour mol-feature # Shortcut for spawn --pour -bd wisp create mol-patrol # Explicit wisp creation +bd mol pour mol-feature # Shortcut for spawn --pour +bd mol wisp mol-patrol # Explicit wisp creation ``` ### Spawn with Immediate Execution @@ -164,7 +164,7 @@ bd mol bond mol-feature mol-deploy --as "Feature with Deploy" ### Creating Wisps ```bash -bd wisp create mol-patrol # From proto +bd mol wisp mol-patrol # From proto bd mol spawn mol-patrol # Same (spawn defaults to wisp) bd mol spawn mol-check --var target=db # With variables ``` @@ -172,8 +172,8 @@ bd mol spawn mol-check --var target=db # With variables ### Listing Wisps ```bash -bd wisp list # List all wisps -bd wisp list --json # Machine-readable +bd mol wisp list # List all wisps +bd mol wisp list --json # Machine-readable ``` ### Ending Wisps @@ -198,7 +198,7 @@ Use burn for routine work with no archival value. ### Garbage Collection ```bash -bd wisp gc # Clean up orphaned wisps +bd mol wisp gc # Clean up orphaned wisps ``` --- @@ -289,7 +289,7 @@ bd mol spawn mol-weekly-review --pour ```bash # Patrol proto exists -bd wisp create mol-patrol +bd mol wisp mol-patrol # Execute patrol work... 
@@ -327,10 +327,10 @@ bd mol distill bd-release-epic --as "Release Process" --var version=X.Y.Z | `bd mol distill <id>` | Extract proto from ad-hoc work | | `bd mol squash <id>` | Compress wisp children to digest | | `bd mol burn <id>` | Delete wisp without trace | -| `bd pour <proto>` | Shortcut for `spawn --pour` | -| `bd wisp create <proto>` | Create ephemeral wisp | -| `bd wisp list` | List all wisps | -| `bd wisp gc` | Garbage collect orphaned wisps | +| `bd mol pour <proto>` | Shortcut for `spawn --pour` | +| `bd mol wisp <proto>` | Create ephemeral wisp | +| `bd mol wisp list` | List all wisps | +| `bd mol wisp gc` | Garbage collect orphaned wisps | | `bd ship <id>` | Publish capability for cross-project deps | --- @@ -347,7 +347,7 @@ bd mol distill bd-release-epic --as "Release Process" --var version=X.Y.Z **"Wisp commands fail"** - Wisps stored in `.beads-wisp/` (separate from `.beads/`) -- Check `bd wisp list` for active wisps +- Check `bd mol wisp list` for active wisps **"External dependency not satisfied"** - Target project must have closed issue with `provides:` label