Add rename-prefix command (bd-420)

- Implement bd rename-prefix command with --dry-run and --json flags
- Add prefix validation (max 8 chars, lowercase, starts with letter)
- Update all issue IDs and text references atomically per issue
- Update dependencies, labels, events, and counters
- Fix counter merge to use MAX() to prevent ID collisions
- Update snapshot tables for FK integrity
- Add comprehensive tests for validation and rename workflow
- Document in README.md and AGENTS.md
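
The validation bullet above can be sketched as follows. This is a hypothetical helper, not the commit's actual code: the function name and regex are illustrative, the rules combine the summary here with the design notes recorded on bd-420 in the diff below (trailing hyphen required, digits allowed after the first letter), and whether the 8-character cap counts the trailing hyphen is an assumption.

```go
package main

import (
	"fmt"
	"regexp"
)

// prefixRe: starts with a lowercase letter, then lowercase letters,
// digits, or hyphens, and ends with a hyphen (e.g. "kw-", "work-").
var prefixRe = regexp.MustCompile(`^[a-z][a-z0-9-]*-$`)

// validatePrefix checks a candidate issue-ID prefix against the
// rules described in the bd-420 design notes.
func validatePrefix(p string) error {
	if len(p) > 8 {
		return fmt.Errorf("prefix %q exceeds 8 characters", p)
	}
	if !prefixRe.MatchString(p) {
		return fmt.Errorf("prefix %q must start with a lowercase letter, "+
			"contain only lowercase letters, digits, and hyphens, "+
			"and end with a hyphen", p)
	}
	return nil
}

func main() {
	for _, p := range []string{"kw-", "work-", "Bad-", "toolongpref-", "-"} {
		if err := validatePrefix(p); err != nil {
			fmt.Printf("%-13s invalid: %v\n", p, err)
		} else {
			fmt.Printf("%-13s valid\n", p)
		}
	}
}
```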

Known limitation: each issue is updated in its own transaction, so a
failure mid-way can leave the database in a mixed state (some issues
renamed, others not). This is acceptable for the intended use case:
an infrequent operation on small databases.
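
The mixed-state risk can be illustrated with a toy model: an in-memory slice stands in for the database, and each loop iteration stands in for one per-issue transaction. The renameAll helper and the injected failure are purely illustrative, not the command's real implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// renameAll renames the prefix on each issue ID independently,
// "committing" one ID per iteration. A failure part-way returns with
// earlier IDs already renamed and later ones untouched -- the mixed
// state the limitation above describes.
func renameAll(ids []string, oldPfx, newPfx string, failAt int) ([]string, error) {
	out := append([]string(nil), ids...)
	for i := range out {
		if i == failAt {
			return out, fmt.Errorf("simulated failure at %s", out[i])
		}
		out[i] = newPfx + strings.TrimPrefix(out[i], oldPfx)
	}
	return out, nil
}

func main() {
	ids := []string{"bd-1", "bd-2", "bd-3"}
	renamed, err := renameAll(ids, "bd-", "kw-", 2)
	fmt.Println(renamed, err) // first two renamed, third still bd-prefixed
}
```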

Amp-Thread-ID: https://ampcode.com/threads/T-7e77b779-bd88-44f2-9f0b-a9f2ccd54d38
Co-authored-by: Amp <amp@ampcode.com>
Steve Yegge
2025-10-16 17:05:27 -07:00
parent 598476d3b5
commit 363cd3b4e6
7 changed files with 600 additions and 0 deletions


@@ -324,10 +324,36 @@
{"id":"bd-391","title":"Data model allows inconsistent status/closed_at states","description":"Issue bd-89 demonstrates a data model inconsistency: an issue can have status='open' but also have a closed_at timestamp set. This creates a liminal state that violates the expected invariant that closed_at should only be set when status='closed'.\n\nRoot causes:\n1. Import (bd import) updates status field independently from closed_at field\n2. UpdateIssue allows status changes without managing closed_at\n3. No database constraint enforcing the invariant\n4. Export includes both fields independently in JSONL\n\nCurrent behavior:\n- bd close: Sets status='closed' AND closed_at (correct)\n- bd update --status open: Sets status='open' but leaves closed_at unchanged (creates inconsistency)\n- bd import: Can import inconsistent data from JSONL\n\nImpact:\n- 'bd ready' shows issues that appear closed (have closed_at)\n- Confusing for users and downstream tools\n- Stats may be inaccurate\n\nPotential solutions:\nA) Add CHECK constraint: (status = 'closed') = (closed_at IS NOT NULL)\nB) Update import/update logic to enforce invariant in application code\nC) Add a 'reopened' event that explicitly clears closed_at\nD) Remove closed_at field entirely (calculate from events or use status only)\n\nSee bd-89 for concrete example.","status":"in_progress","priority":1,"issue_type":"bug","created_at":"2025-10-16T15:25:11.436559-07:00","updated_at":"2025-10-16T15:25:11.436559-07:00","dependencies":[{"issue_id":"bd-391","depends_on_id":"bd-390","type":"blocks","created_at":"2025-10-16T15:25:11.700502-07:00","created_by":"import"}]}
{"id":"bd-392","title":"Epic: Add intelligent database compaction with Claude Haiku","description":"Implement multi-tier database compaction using Claude Haiku to semantically compress old, closed issues. This keeps the database lightweight and agent-friendly while preserving essential context.\n\nGoals:\n- 70-95% space reduction for eligible issues\n- Full restore capability via snapshots\n- Opt-in with dry-run safety\n- ~$1 per 1,000 issues compacted","acceptance_criteria":"- Schema migration with snapshots table\n- Haiku integration for summarization\n- Two-tier compaction (30d, 90d)\n- CLI with dry-run, restore, stats\n- Full test coverage\n- Documentation complete","status":"open","priority":2,"issue_type":"epic","created_at":"2025-10-16T15:25:11.436559-07:00","updated_at":"2025-10-16T15:25:11.436559-07:00"}
{"id":"bd-393","title":"Critical: Auto-import was skipping collisions instead of remapping them","description":"The auto-import mechanism was SKIPPING colliding issues instead of automatically remapping them to new IDs. This caused work from other workers/devices to be LOST during git pull operations.\n\nRoot cause: Lines 283-326 in main.go were filtering out colliding issues instead of calling RemapCollisions() to resolve them.\n\nImpact: Multi-device workflows would silently lose issues when two devices created issues with the same ID.","acceptance_criteria":"- Auto-import detects collisions\n- Calls ScoreCollisions + RemapCollisions automatically\n- Shows remapping notification to user\n- No work is lost from other workers\n- All tests pass","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-10-16T15:25:11.436559-07:00","updated_at":"2025-10-16T15:25:11.436559-07:00","closed_at":"2025-10-16T15:00:37.591033-07:00"}
{"id":"bd-394","title":"GH-11: Add Docker support for hosted/shared instance","description":"Request for Docker container hosting to use beads across multiple dev machines. Would need to consider: centralized database (PostgreSQL?), authentication, concurrent access, API server, etc. This is a significant architectural change from the current local-first model.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-16T16:03:31.498107-07:00","updated_at":"2025-10-16T16:03:31.498107-07:00","closed_at":"2025-10-16T14:37:09.712087-07:00","external_ref":"gh-11"}
{"id":"bd-395","title":"Record git commit hash during compaction","description":"Update compact command to capture current git HEAD commit hash and store in compacted_at_commit field","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.501666-07:00","updated_at":"2025-10-16T16:03:31.501666-07:00"}
{"id":"bd-396","title":"Epic: Fix status/closed_at inconsistency (bd-224 solution)","description":"Implement hybrid solution to enforce status/closed_at invariant:\n- Database CHECK constraint\n- Smart UpdateIssue logic\n- Import enforcement\n- Reopen command\n\nThis is a data integrity issue that affects statistics and will cause problems for agent swarms. The hybrid approach provides defense-in-depth.\n\nSee ULTRATHINK_BD224.md for full analysis and rationale.\n\nParent of all implementation tasks for this fix.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-16T16:03:31.504655-07:00","updated_at":"2025-10-16T16:03:31.504655-07:00"}
{"id":"bd-397","title":"Audit and document all inconsistent issues in database","description":"Before we add the constraint, we need to know what data is inconsistent.\n\nSteps:\n1. Query database for status/closed_at mismatches\n2. Document each case (how many, which issues, patterns)\n3. Decide on cleanup strategy (trust status vs trust closed_at)\n4. Create SQL script for cleanup\n\nOutput: Document with counts and cleanup SQL ready to review.\n\nThis unblocks the migration work.","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.520913-07:00","updated_at":"2025-10-16T16:03:31.520913-07:00"}
{"id":"bd-398","title":"Reach 1.0 release milestone","description":"Stabilize API, finalize documentation, comprehensive testing","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-16T16:03:31.523258-07:00","updated_at":"2025-10-16T16:03:31.523258-07:00"}
{"id":"bd-399","title":"Code review follow-up: Post-PR #8 merge improvements","description":"Follow-up tasks from the ultrathink code review of PR #8 merge (bd-62).\n\n**Context:** PR #8 successfully merged atomic counter + dirty tracking. Core functionality is solid but several improvements identified.\n\n**Critical (P0-P1):**\n- bd-64: Fix SyncAllCounters performance bottleneck (P0)\n- bd-65: Add migration for issue_counters table (P1)\n- bd-66: Make import counter sync failure fatal (P1)\n\n**Nice to have (P2-P3):**\n- bd-67: Update test comments (P2)\n- bd-68: Add performance benchmarks (P2)\n- bd-69: Add metrics/logging (P3)\n- bd-70: Add EXPLAIN QUERY PLAN tests (P3)\n\n**Overall assessment:** 4/5 stars - Excellent implementation with one critical performance issue. After bd-64 is fixed, this becomes 5/5.\n\n**Review document:** Available if needed","notes":"Status update: All P0-P1 critical tasks completed! bd-64 (performance), bd-65 (migration), bd-66 (fatal error), bd-67 (comments) are all done. Atomic counter implementation is now production-ready. Remaining tasks are P2-P3 enhancements.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-16T16:03:31.525208-07:00","updated_at":"2025-10-16T16:03:31.525208-07:00"}
{"id":"bd-4","title":"Low priority chore","description":"","status":"open","priority":4,"issue_type":"chore","created_at":"2025-10-15T00:30:51.630061-07:00","updated_at":"2025-10-15T12:44:26.870358-07:00"}
{"id":"bd-40","title":"Make auto-flush debounce duration configurable","description":"flushDebounce is hardcoded to 5 seconds. Make it configurable via environment variable BEADS_FLUSH_DEBOUNCE (e.g., '500ms', '10s'). Current 5-second value is reasonable for interactive use, but CI/automated scenarios might want faster flush. Add getDebounceDuration() helper function. Located in cmd/bd/main.go:31.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-10-15T00:30:51.640789-07:00","updated_at":"2025-10-16T15:25:11.147603-07:00"}
{"id":"bd-400","title":"Code review: Auto-import collision detection fix (bd-228)","description":"Request thorough code review of the auto-import collision detection implementation.\n\n## Files Changed\n- cmd/bd/main.go: autoImportIfNewer() and new autoImportWithoutCollisionDetection()\n\n## Review Focus Areas\n\n### 1. Correctness\n- Does collision detection properly identify conflicts?\n- Are colliding issues correctly filtered from import?\n- Is the fallback function correct for non-SQLite backends?\n\n### 2. Edge Cases\n- What happens if DetectCollisions() fails?\n- What if all issues are collisions?\n- What if JSONL is malformed?\n- Race conditions with concurrent auto-imports?\n\n### 3. User Experience\n- Is the warning message clear and actionable?\n- Should we log to a file instead of/in addition to stderr?\n- Should there be a --auto-resolve flag?\n\n### 4. Performance\n- Does collision detection add significant latency?\n- Will this work with 1000+ issues?\n- Any unnecessary N+1 queries?\n\n### 5. Testing Gaps\n- Do we need integration tests for collision scenarios?\n- Should we test the warning output?\n- Test autoImportWithoutCollisionDetection() fallback?\n\n## Questions for Reviewer\n1. Should auto-import be more aggressive (auto-resolve) or more conservative (fail)?\n2. Should we add a counter for how many times collisions occurred?\n3. Should there be a config option to disable collision detection?\n4. Is the warning too verbose for typical workflows?\n\n## Current Behavior\n- Skips colliding issues (preserves local)\n- Prints warning to stderr\n- Suggests manual resolution command\n- Continues with non-colliding issues","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.536355-07:00","updated_at":"2025-10-16T16:03:31.536355-07:00"}
{"id":"bd-401","title":"Add integration tests for auto-import collision detection","description":"The auto-import collision detection fix (bd-376) needs comprehensive integration tests.\n\n## Test Scenarios Needed\n\n### 1. Basic Collision Detection\n```go\nTestAutoImportWithCollisions\n- Setup: Create issue in DB with status=closed\n- Simulate: Git pull with JSONL containing same issue with status=open\n- Verify: Local changes preserved, warning printed, issue still closed\n```\n\n### 2. Multiple Collisions\n```go\nTestAutoImportWithMultipleCollisions\n- Setup: 5 issues with local modifications\n- Simulate: JSONL with 3 colliding, 2 matching\n- Verify: 3 skipped, 2 updated, appropriate warnings\n```\n\n### 3. No Collisions (Happy Path)\n```go\nTestAutoImportWithoutCollisions\n- Setup: Clean database\n- Simulate: JSONL with new issues + exact matches\n- Verify: All imported successfully, no warnings\n```\n\n### 4. All Collisions\n```go\nTestAutoImportAllCollisions\n- Setup: Every issue has local modifications\n- Simulate: JSONL with conflicting versions\n- Verify: All skipped, warning lists all issues\n```\n\n### 5. Collision Detection Failure\n```go\nTestAutoImportCollisionDetectionError\n- Setup: Mock DetectCollisions() to return error\n- Verify: Import skipped entirely (safe failure mode)\n```\n\n### 6. Hash-Based Skip\n```go\nTestAutoImportHashUnchanged\n- Setup: JSONL hash matches last_import_hash\n- Verify: No collision detection runs, fast path\n```\n\n### 7. Fallback for Non-SQLite\n```go\nTestAutoImportWithoutCollisionDetection\n- Setup: Mock non-SQLite storage backend\n- Verify: Falls back to old behavior, no collision detection\n```\n\n## Test Infrastructure Needed\n\n1. **Helper functions:**\n - createTestDBWithIssues()\n - writeJSONLFile()\n - captureStderrWarnings()\n - simulateGitPull()\n\n2. **Fixtures:**\n - sample-collisions.jsonl\n - sample-exact-matches.jsonl\n - sample-mixed.jsonl\n\n3. **Assertions:**\n - assertIssueState(id, expectedStatus)\n - assertWarningContains(text)\n - assertCollisionCount(n)\n\n## Acceptance Criteria\n- All 7 test scenarios implemented\n- Tests pass consistently\n- Code coverage \u003e80% for autoImportIfNewer()\n- Tests run in \u003c5 seconds total\n- Clear test names and documentation","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.546156-07:00","updated_at":"2025-10-16T16:03:31.546156-07:00"}
{"id":"bd-402","title":"Consider batching API for bulk issue creation (recovered from bd-222)","description":"Current CreateIssue acquires a dedicated connection for each call. For bulk imports or agent workflows creating many issues, a batched API could improve performance:\n\nCreateIssues(ctx, issues []*Issue, actor string) error\n\nThis would:\n- Acquire one connection\n- Use one IMMEDIATE transaction\n- Insert all issues atomically\n- Reduce connection overhead\n\nNot urgent - current approach is correct and fast enough for typical use.\n\n**Recovered from:** bd-360 (lost in auto-import bug, see LOST_ISSUES_RECOVERY.md)","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-16T16:03:31.547562-07:00","updated_at":"2025-10-16T16:03:31.547562-07:00"}
{"id":"bd-403","title":"Use safer placeholder pattern in replaceIDReferences","description":"Currently uses bd-324 which could theoretically collide with user text. Use a truly unique placeholder like null bytes: \\x00REMAP\\x00_0_\\x00 which are unlikely to appear in normal text. Located in collision.go:324. Very low probability issue but worth fixing for completeness.","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-16T16:03:31.550675-07:00","updated_at":"2025-10-16T16:03:31.550675-07:00"}
{"id":"bd-404","title":"Git-based restoration for compacted issues","description":"Store git commit hash at compaction time to enable restoration of full issue history from version control. When issues are compacted, record the current git commit hash so users can restore the original uncompacted issue from git history.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-16T16:03:31.564412-07:00","updated_at":"2025-10-16T16:03:31.564412-07:00"}
{"id":"bd-405","title":"Add compacted_at_commit field to Issue type","description":"Add optional compacted_at_commit string field to store git commit hash when issue is compacted","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.57487-07:00","updated_at":"2025-10-16T16:03:31.57487-07:00"}
{"id":"bd-406","title":"GH-3: Debug zsh killed error on bd init","description":"User reports 'zsh: killed bd init' when running bd init or just bd command. Likely a crash or signal. Need to reproduce and investigate cause.","notes":"Awaiting user feedback - cannot reproduce locally, waiting for user to provide more details about environment and error message","status":"blocked","priority":1,"issue_type":"bug","created_at":"2025-10-16T16:03:31.580261-07:00","updated_at":"2025-10-16T16:03:31.580261-07:00","external_ref":"gh-3"}
{"id":"bd-407","title":"Implement bd restore command","description":"Create new restore command that checks out git commit from compacted_at_commit, reads issue from JSONL, and displays full history","status":"open","priority":1,"issue_type":"feature","created_at":"2025-10-16T16:03:31.581845-07:00","updated_at":"2025-10-16T16:03:31.581845-07:00"}
{"id":"bd-408","title":"Add tests for git-based restoration","description":"Test compaction stores commit hash, restore command works, handles missing git repo gracefully","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.582488-07:00","updated_at":"2025-10-16T16:03:31.582488-07:00"}
{"id":"bd-409","title":"Document git-based restoration feature","description":"Update COMPACTION.md and README.md with restoration workflow and examples","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.584232-07:00","updated_at":"2025-10-16T16:03:31.584232-07:00"}
{"id":"bd-41","title":"Add godoc comments for auto-flush functions","description":"Add comprehensive godoc comments for findJSONLPath(), markDirtyAndScheduleFlush(), and flushToJSONL() explaining behavior, concurrency considerations, and error handling. Include notes about debouncing behavior (timer resets on each write, flush occurs 5s after LAST operation) and flush-on-exit guarantees. Located in cmd/bd/main.go:188-307.","status":"open","priority":4,"issue_type":"chore","created_at":"2025-10-15T00:30:51.641159-07:00","updated_at":"2025-10-16T15:25:11.148405-07:00"}
{"id":"bd-410","title":"Add performance benchmarks document","description":"Document actual performance metrics with hyperfine tests","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-16T16:03:31.585083-07:00","updated_at":"2025-10-16T16:03:31.585083-07:00"}
{"id":"bd-411","title":"parallel_test_2","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.585784-07:00","updated_at":"2025-10-16T16:03:31.585784-07:00","closed_at":"2025-10-15T16:26:05.345118-07:00"}
{"id":"bd-412","title":"parallel_test_5","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.586415-07:00","updated_at":"2025-10-16T16:03:31.586415-07:00","closed_at":"2025-10-15T16:26:05.358813-07:00"}
{"id":"bd-413","title":"parallel_test_6","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.588315-07:00","updated_at":"2025-10-16T16:03:31.588315-07:00","closed_at":"2025-10-15T16:26:05.375985-07:00"}
{"id":"bd-414","title":"parallel_test_4","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.589071-07:00","updated_at":"2025-10-16T16:03:31.589071-07:00","closed_at":"2025-10-15T16:26:05.404261-07:00"}
{"id":"bd-415","title":"parallel_test_1","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.592055-07:00","updated_at":"2025-10-16T16:03:31.592055-07:00","closed_at":"2025-10-15T16:26:05.415413-07:00"}
{"id":"bd-416","title":"Add migration scripts for GitHub Issues","description":"Create scripts to import from GitHub Issues API or exported JSON","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-16T16:03:31.596085-07:00","updated_at":"2025-10-16T16:03:31.596085-07:00"}
{"id":"bd-417","title":"Data model allows inconsistent status/closed_at states","description":"Issue bd-89 demonstrates a data model inconsistency: an issue can have status='open' but also have a closed_at timestamp set. This creates a liminal state that violates the expected invariant that closed_at should only be set when status='closed'.\n\nRoot causes:\n1. Import (bd import) updates status field independently from closed_at field\n2. UpdateIssue allows status changes without managing closed_at\n3. No database constraint enforcing the invariant\n4. Export includes both fields independently in JSONL\n\nCurrent behavior:\n- bd close: Sets status='closed' AND closed_at (correct)\n- bd update --status open: Sets status='open' but leaves closed_at unchanged (creates inconsistency)\n- bd import: Can import inconsistent data from JSONL\n\nImpact:\n- 'bd ready' shows issues that appear closed (have closed_at)\n- Confusing for users and downstream tools\n- Stats may be inaccurate\n\nPotential solutions:\nA) Add CHECK constraint: (status = 'closed') = (closed_at IS NOT NULL)\nB) Update import/update logic to enforce invariant in application code\nC) Add a 'reopened' event that explicitly clears closed_at\nD) Remove closed_at field entirely (calculate from events or use status only)\n\nSee bd-89 for concrete example.","status":"in_progress","priority":1,"issue_type":"bug","created_at":"2025-10-16T16:03:31.596875-07:00","updated_at":"2025-10-16T16:03:31.596875-07:00"}
{"id":"bd-418","title":"parallel_test_4","description":"","design":"Sections:\n1. Introduction\n2. Authentication\n3. REST Endpoints\n4. WebSocket Protocol\n5. Error Handling\n6. Rate Limits\n7. Examples\n\nFor each endpoint:\n- URL and method\n- Request parameters\n- Request body schema\n- Response schema\n- Error responses\n- Code example","acceptance_criteria":"- API.md created with all sections\n- All endpoints documented\n- Request/response schemas\n- WebSocket protocol docs\n- Error codes and handling\n- Rate limit documentation\n- Examples in multiple languages\n- Links from README.md\n\n---","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.597564-07:00","updated_at":"2025-10-16T16:03:31.597564-07:00"}
{"id":"bd-42","title":"Add test coverage for auto-flush feature","description":"Add comprehensive tests for auto-flush functionality:\\n- Test that markDirtyAndScheduleFlush() is called after CRUD operations\\n- Test debounce timing (rapid operations result in single flush)\\n- Test --no-auto-flush flag disables feature\\n- Test flush on program exit\\n- Test concurrent operations don't cause races\\n- Test error scenarios (disk full, permission denied, etc.)\\n- Test import command triggers auto-flush\\n\\nCurrent implementation has no test coverage for the auto-flush feature. Located in cmd/bd/main_test.go (to be created).","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T00:30:51.643736-07:00","updated_at":"2025-10-16T15:25:11.149432-07:00","closed_at":"2025-10-15T12:44:26.87198-07:00"}
{"id":"bd-420","title":"Add command to rename issue prefix","description":"Allow users to rename their issue prefix (e.g., from 'knowledge-work-' to 'kw-'). Must update the prefix in all issues and update all text references across all issue fields (title, description, design, acceptance criteria, notes) using regexp. Should validate new prefix format and prevent collisions with existing IDs.","design":"Prefix validation rules:\n- Max length: 8 characters\n- Allowed characters: lowercase letters, numbers, hyphens\n- Must start with a letter\n- Must end with a hyphen (e.g., 'kw-', 'work-')\n- Cannot be empty or just a hyphen\n\nImplementation:\n- Add validation function for prefix format\n- Update all issue IDs in the database\n- Use regexp to find and replace all text references across all fields\n- Atomic operation (rollback on failure)","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-10-16T16:03:31.599214-07:00","updated_at":"2025-10-16T16:10:38.040325-07:00","closed_at":"2025-10-16T16:10:38.040325-07:00"}
{"id":"bd-43","title":"Test auto-sync feature","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T00:30:51.644096-07:00","updated_at":"2025-10-16T15:25:11.151011-07:00","closed_at":"2025-10-15T12:44:26.872427-07:00"}
{"id":"bd-44","title":"Regular auto-ID issue","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T00:30:51.644899-07:00","updated_at":"2025-10-16T15:25:11.151769-07:00","closed_at":"2025-10-15T12:44:26.872998-07:00"}
{"id":"bd-45","title":"Test flush tracking","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T00:30:51.645446-07:00","updated_at":"2025-10-16T15:25:11.152226-07:00","closed_at":"2025-10-15T12:44:26.873448-07:00"}