issue updates

commit dfc62054f1 (parent 512996b87a)
Author: Steve Yegge
Date: 2025-10-15 21:15:51 -07:00


@@ -165,6 +165,7 @@
{"id":"bd-248","title":"Test reopen command","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T16:28:44.246154-07:00","updated_at":"2025-10-15T17:05:23.644788-07:00","closed_at":"2025-10-15T17:05:23.644788-07:00"}
{"id":"bd-249","title":"Test reopen command","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T16:28:49.924381-07:00","updated_at":"2025-10-15T16:28:55.491141-07:00","closed_at":"2025-10-15T16:28:55.491141-07:00"}
{"id":"bd-25","title":"Add transaction support to storage layer for atomic multi-operation workflows","description":"Currently each storage method (CreateIssue, UpdateIssue, etc.) starts its own transaction. This makes it impossible to perform atomic multi-step operations like collision resolution. Add support for passing *sql.Tx through the storage interface, or create transaction-aware versions of methods. This would make remapCollisions and other batch operations truly atomic.","status":"closed","priority":4,"issue_type":"feature","created_at":"2025-10-14T14:43:06.910892-07:00","updated_at":"2025-10-15T16:27:22.001363-07:00","closed_at":"2025-10-15T03:01:29.570206-07:00"}
{"id":"bd-250","title":"Implement --format flag for bd list (from PR #46)","description":"PR #46 by tmc adds --format flag with Go template support for bd list, including presets for 'digraph' and 'dot' (Graphviz) output with status-based color coding. Unfortunately the PR is based on old main and would delete labels, reopen, and storage tests. Need to reimplement the feature atop current main.\n\nFeatures to implement:\n- --format flag for bd list\n- 'digraph' preset: basic 'from to' format for golang.org/x/tools/cmd/digraph\n- 'dot' preset: Graphviz compatible output with color-coded statuses\n- Custom Go template support with vars: IssueID, DependsOnID, Type, Issue, Dependency\n- Status-based colors: open=white, in_progress=lightyellow, blocked=lightcoral, closed=lightgray\n\nExamples:\n- bd list --format=digraph | digraph nodes\n- bd list --format=dot | dot -Tsvg -o deps.svg\n- bd list --format='{{.IssueID}} -\u003e {{.DependsOnID}} [{{.Type}}]'\n\nOriginal PR: https://github.com/steveyegge/beads/pull/46","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-15T21:13:11.6698-07:00","updated_at":"2025-10-15T21:13:11.6698-07:00","external_ref":"gh-46"}
{"id":"bd-26","title":"Optimize reference updates to avoid loading all issues into memory","description":"In updateReferences(), we call SearchIssues with no filter to get ALL issues for updating references. For large databases (10k+ issues), this loads everything into memory. Options: 1) Use batched processing with LIMIT/OFFSET, 2) Use SQL UPDATE with REPLACE() directly, 3) Stream results instead of loading all at once. Located in collision.go:266","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-14T14:43:06.911497-07:00","updated_at":"2025-10-15T16:27:22.001829-07:00"}
{"id":"bd-27","title":"Cache compiled regexes in replaceIDReferences for performance","description":"replaceIDReferences() compiles the same regex patterns on every call. With 100 issues and 10 ID mappings, that's 1000 regex compilations. Pre-compile regexes once and reuse. Can use a struct with compiled regex, placeholder, and newID. Located in collision.go:329. Estimated performance improvement: 10-100x for large batches.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-14T14:43:06.911892-07:00","updated_at":"2025-10-15T16:27:22.002496-07:00","closed_at":"2025-10-15T03:01:29.570955-07:00"}
{"id":"bd-28","title":"Improve error handling in dependency removal during remapping","description":"In updateDependencyReferences(), RemoveDependency errors are caught and ignored with continue (line 392). Comment says 'if dependency doesn't exist' but this catches ALL errors including real failures. Should check error type with errors.Is(err, ErrDependencyNotFound) and only ignore not-found errors, returning other errors properly.","status":"open","priority":3,"issue_type":"bug","created_at":"2025-10-14T14:43:06.912228-07:00","updated_at":"2025-10-15T16:27:22.003145-07:00"}
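The `--format` feature described in bd-250 above can be sketched with Go's standard `text/template` package. This is a hypothetical reconstruction from the issue text, not PR #46's actual code: the `depEdge` struct and `formatEdges` helper are assumed names, chosen to mirror the template variables (`IssueID`, `DependsOnID`, `Type`) the issue lists.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// depEdge mirrors the template variables bd-250 describes. The field
// names come from the issue text; the struct itself is an assumption.
type depEdge struct {
	IssueID     string
	DependsOnID string
	Type        string
}

// formatEdges renders each dependency edge through a user-supplied Go
// template, one line per edge — a sketch of what --format might do.
func formatEdges(edges []depEdge, format string) (string, error) {
	tmpl, err := template.New("edge").Parse(format)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	for _, e := range edges {
		if err := tmpl.Execute(&b, e); err != nil {
			return "", err
		}
		b.WriteByte('\n')
	}
	return b.String(), nil
}

func main() {
	// Equivalent of: bd list --format='{{.IssueID}} -> {{.DependsOnID}} [{{.Type}}]'
	out, _ := formatEdges(
		[]depEdge{{IssueID: "bd-4", DependsOnID: "bd-8", Type: "parent-child"}},
		"{{.IssueID}} -> {{.DependsOnID}} [{{.Type}}]",
	)
	fmt.Print(out) // prints: bd-4 -> bd-8 [parent-child]
}
```

The `digraph` and `dot` presets would then just be canned format strings (plus a Graphviz header/footer for `dot`) selected before parsing.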
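The regex-caching fix bd-27 describes can be sketched as follows. The names (`idMapping`, `compileMappings`) are hypothetical — the actual `collision.go` code differs — but the technique is the one the issue names: compile each pattern once up front, then reuse it across all issues.

```go
package main

import (
	"fmt"
	"regexp"
)

// idMapping pairs a pre-compiled regex with its replacement, so each
// pattern is compiled once rather than once per issue per call.
type idMapping struct {
	re    *regexp.Regexp
	newID string
}

// compileMappings builds the cache from an oldID -> newID rename table.
func compileMappings(renames map[string]string) []idMapping {
	out := make([]idMapping, 0, len(renames))
	for oldID, newID := range renames {
		// \b ensures "bd-2" does not also match inside "bd-25".
		re := regexp.MustCompile(`\b` + regexp.QuoteMeta(oldID) + `\b`)
		out = append(out, idMapping{re: re, newID: newID})
	}
	return out
}

// replaceIDReferences applies every cached mapping to one text field.
func replaceIDReferences(text string, mappings []idMapping) string {
	for _, m := range mappings {
		text = m.re.ReplaceAllString(text, m.newID)
	}
	return text
}

func main() {
	m := compileMappings(map[string]string{"bd-2": "bd-100"})
	fmt.Println(replaceIDReferences("blocks bd-2 but not bd-25", m))
	// prints: blocks bd-100 but not bd-25
}
```

With 100 issues and 10 mappings this does 10 compilations instead of 1000, which is where the issue's 10-100x estimate comes from.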
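The error-filtering pattern bd-28 calls for — ignore only not-found errors, surface everything else — looks roughly like this. `ErrDependencyNotFound` is named in the issue; the surrounding wiring (`removeDependency`, `remapDependency`) is assumed for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrDependencyNotFound stands in for the storage package's sentinel
// error; the name comes from the issue text, the wiring is assumed.
var ErrDependencyNotFound = errors.New("dependency not found")

// removeDependency is a placeholder for the real RemoveDependency call.
func removeDependency(from, to string) error {
	return ErrDependencyNotFound
}

// remapDependency ignores only not-found errors and returns everything
// else, instead of swallowing all errors with a bare continue.
func remapDependency(from, to string) error {
	if err := removeDependency(from, to); err != nil {
		if errors.Is(err, ErrDependencyNotFound) {
			return nil // already gone: safe to ignore during remapping
		}
		return fmt.Errorf("removing dependency %s -> %s: %w", from, to, err)
	}
	return nil
}

func main() {
	fmt.Println(remapDependency("bd-1", "bd-2")) // <nil>: not-found is tolerated
}
```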
@@ -180,7 +181,7 @@
{"id":"bd-37","title":"Refactor duplicate flush logic in PersistentPostRun","description":"PersistentPostRun contains a complete copy of the flush logic instead of calling flushToJSONL(). This violates DRY principle and makes maintenance harder. Refactor to use flushToJSONL() with a force parameter to bypass isDirty check, or extract shared logic into a helper function. Located in cmd/bd/main.go:104-138.","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.916555-07:00","updated_at":"2025-10-15T16:27:22.008657-07:00"}
{"id":"bd-38","title":"Add visibility for auto-flush failures","description":"flushToJSONL() writes warnings to stderr when flush fails, but calling code has no way to know if flush succeeded or failed. This means a command could return success even though JSONL is now out of sync. Consider maintaining a 'last flush status' variable or counter for failed flushes, and warn user after multiple consecutive failures (e.g., 3+). Located in cmd/bd/main.go:227-307.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-14T14:43:06.916932-07:00","updated_at":"2025-10-15T16:27:22.009229-07:00","closed_at":"2025-10-15T03:01:29.575612-07:00"}
{"id":"bd-39","title":"Optimize auto-flush to use incremental updates","description":"Every flush exports ALL issues and ALL dependencies, even if only one issue changed. For large projects (1000+ issues), this could be expensive. Current approach guarantees consistency, which is fine for MVP, but future optimization could track which issues changed and use incremental updates. Located in cmd/bd/main.go:255-276.","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-10-14T14:43:06.917357-07:00","updated_at":"2025-10-15T16:27:22.009844-07:00","closed_at":"2025-10-15T03:01:29.576131-07:00"}
{"id":"bd-4","title":"Low priority chore","description":"","status":"open","priority":4,"issue_type":"chore","created_at":"2025-10-14T14:43:06.917877-07:00","updated_at":"2025-10-15T16:27:22.010511-07:00","dependencies":[{"issue_id":"bd-4","depends_on_id":"bd-8","type":"parent-child","created_at":"2025-10-14T14:43:06.959864-07:00","created_by":"auto-import"}]}
{"id":"bd-4","title":"Low priority chore","description":"","status":"closed","priority":4,"issue_type":"chore","created_at":"2025-10-14T14:43:06.917877-07:00","updated_at":"2025-10-15T20:58:30.418891-07:00","closed_at":"2025-10-15T20:58:30.418891-07:00","dependencies":[{"issue_id":"bd-4","depends_on_id":"bd-8","type":"parent-child","created_at":"2025-10-14T14:43:06.959864-07:00","created_by":"auto-import"}]}
{"id":"bd-40","title":"Make auto-flush debounce duration configurable","description":"flushDebounce is hardcoded to 5 seconds. Make it configurable via environment variable BEADS_FLUSH_DEBOUNCE (e.g., '500ms', '10s'). Current 5-second value is reasonable for interactive use, but CI/automated scenarios might want faster flush. Add getDebounceDuration() helper function. Located in cmd/bd/main.go:31.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-10-14T14:43:06.919549-07:00","updated_at":"2025-10-15T16:27:22.011016-07:00"}
{"id":"bd-41","title":"Add godoc comments for auto-flush functions","description":"Add comprehensive godoc comments for findJSONLPath(), markDirtyAndScheduleFlush(), and flushToJSONL() explaining behavior, concurrency considerations, and error handling. Include notes about debouncing behavior (timer resets on each write, flush occurs 5s after LAST operation) and flush-on-exit guarantees. Located in cmd/bd/main.go:188-307.","status":"open","priority":4,"issue_type":"chore","created_at":"2025-10-14T14:43:06.920127-07:00","updated_at":"2025-10-15T16:27:22.01152-07:00"}
{"id":"bd-42","title":"Add test coverage for auto-flush feature","description":"Add comprehensive tests for auto-flush functionality:\\n- Test that markDirtyAndScheduleFlush() is called after CRUD operations\\n- Test debounce timing (rapid operations result in single flush)\\n- Test --no-auto-flush flag disables feature\\n- Test flush on program exit\\n- Test concurrent operations don't cause races\\n- Test error scenarios (disk full, permission denied, etc.)\\n- Test import command triggers auto-flush\\n\\nCurrent implementation has no test coverage for the auto-flush feature. Located in cmd/bd/main_test.go (to be created).","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-14T14:43:06.920746-07:00","updated_at":"2025-10-15T16:27:22.012002-07:00","closed_at":"2025-10-15T03:01:29.577612-07:00"}
@@ -202,7 +203,7 @@
{"id":"bd-57","title":"Test plugin installation and functionality","description":"Verify the plugin works end-to-end.\n\nTest cases:\n- Fresh installation via /plugin command\n- All slash commands work correctly\n- MCP server tools are accessible\n- Configuration options work\n- Documentation is accurate\n- Works in both terminal and VS Code","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-14T14:43:06.928041-07:00","updated_at":"2025-10-15T16:27:22.023211-07:00","closed_at":"2025-10-15T03:01:29.584432-07:00","dependencies":[{"issue_id":"bd-57","depends_on_id":"bd-52","type":"parent-child","created_at":"2025-10-14T14:46:10.737369-07:00","created_by":"auto-import"}]}
{"id":"bd-58","title":"Parent's blocker should block children in ready work calculation","description":"GitHub issue #19: If epic1 blocks epic2, children of epic2 should also be considered blocked when calculating ready work. Currently epic2's children show as ready even though their parent is blocked. This breaks the natural hierarchy of dependencies and can cause agents to work on tasks out of order.\n\nExpected: ready work calculation should traverse up parent-child hierarchy and check if any ancestor has blocking dependencies.\n\nSee: https://github.com/anthropics/claude-code/issues/19","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-10-14T14:43:06.928389-07:00","updated_at":"2025-10-15T16:27:22.023632-07:00","closed_at":"2025-10-15T03:01:29.584797-07:00"}
{"id":"bd-59","title":"Add composite index on dependencies(depends_on_id, type)","description":"The hierarchical blocking query does:\nJOIN dependencies d ON d.depends_on_id = bt.issue_id\nWHERE d.type = 'parent-child'\n\nCurrently we only have idx_dependencies_depends_on (line 41 in schema.go), which covers depends_on_id but not the type filter.\n\n**Impact:**\n- Query has to scan ALL dependencies for a given depends_on_id, then filter by type\n- With 10k+ issues and many dependencies, this could cause slowdowns\n- The blocker propagation happens recursively, amplifying the cost\n\n**Solution:**\nAdd composite index: CREATE INDEX idx_dependencies_depends_on_type ON dependencies(depends_on_id, type)\n\n**Testing:**\nRun EXPLAIN QUERY PLAN on GetReadyWork query before/after to verify index usage.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-14T14:43:06.929017-07:00","updated_at":"2025-10-15T16:27:22.024053-07:00","closed_at":"2025-10-15T03:01:29.585188-07:00"}
{"id":"bd-6","title":"Add migration scripts for GitHub Issues","description":"Create scripts to import from GitHub Issues API or exported JSON","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-14T14:43:06.929582-07:00","updated_at":"2025-10-15T16:27:22.024549-07:00","dependencies":[{"issue_id":"bd-6","depends_on_id":"bd-8","type":"parent-child","created_at":"2025-10-14T14:43:06.960703-07:00","created_by":"auto-import"}]}
{"id":"bd-6","title":"Add migration scripts for GitHub Issues","description":"Create scripts to import from GitHub Issues API or exported JSON","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-14T14:43:06.929582-07:00","updated_at":"2025-10-15T21:00:51.057232-07:00","closed_at":"2025-10-15T21:00:51.057232-07:00","dependencies":[{"issue_id":"bd-6","depends_on_id":"bd-8","type":"parent-child","created_at":"2025-10-14T14:43:06.960703-07:00","created_by":"auto-import"}]}
{"id":"bd-60","title":"Update ready_issues VIEW to use hierarchical blocking","description":"The ready_issues VIEW (schema.go:97-108) uses the OLD blocking logic that doesn't propagate through parent-child hierarchies.\n\n**Problem:**\n- GetReadyWork() function now uses recursive CTE with propagation\n- But the ready_issues VIEW still uses simple NOT EXISTS check\n- Any code using the VIEW will get DIFFERENT results than GetReadyWork()\n- This creates inconsistency and confusion\n\n**Impact:**\n- Unknown if the VIEW is actually used anywhere in the codebase\n- If it is used, it's returning incorrect results (showing children as ready when parent is blocked)\n\n**Solution:**\nEither:\n1. Update VIEW to match GetReadyWork logic (complex CTE in a view)\n2. Drop the VIEW entirely if unused\n3. Make VIEW call GetReadyWork as a function (if SQLite supports it)\n\n**Investigation needed:**\nGrep for 'ready_issues' to see if the view is actually used.","notes":"**Investigation results:**\nGrepped the codebase - the ready_issues VIEW appears in:\n- schema.go (definition)\n- WORKFLOW.md, DESIGN.md (documentation)\n- No actual Go code queries it directly\n\n**Conclusion:** The VIEW is defined but appears UNUSED by actual code. GetReadyWork() function is used instead.\n\n**Recommended solution:** Drop the VIEW entirely to avoid confusion. It serves no purpose if unused and creates a maintenance burden (needs to stay in sync with GetReadyWork logic).\n\n**Alternative:** If we want to keep it for direct SQL access, update the VIEW definition to match the new recursive CTE logic.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-10-14T14:43:06.930111-07:00","updated_at":"2025-10-15T16:27:22.02559-07:00","closed_at":"2025-10-15T03:01:29.586382-07:00"}
{"id":"bd-61","title":"Add test for deep hierarchy blocking (50+ levels)","description":"Current tests verify 2-level depth (grandparent → parent → child). The depth limit is hardcoded to 50 in the recursive CTE, but we don't test edge cases near that limit.\n\n**Test cases needed:**\n1. Verify 50-level deep hierarchy works correctly\n2. Verify depth limit prevents runaway recursion\n3. Measure performance impact of deep hierarchies\n4. Consider if 50 is the right limit (why not 100? why not 20?)\n\n**Rationale:**\n- Most hierarchies are 2-5 levels deep\n- But pathological cases (malicious or accidental) could create 50+ level nesting\n- Need to ensure graceful degradation, not catastrophic failure\n\n**Implementation:**\nAdd TestDeepHierarchyBlocking to ready_test.go","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-14T14:43:06.930479-07:00","updated_at":"2025-10-15T16:27:22.02608-07:00","closed_at":"2025-10-15T03:01:29.586825-07:00"}
{"id":"bd-62","title":"Document hierarchical blocking behavior in README","description":"The fix for bd-58 changes user-visible behavior: children of blocked epics are now automatically blocked.\n\n**What needs documenting:**\n1. README.md dependency section should explain blocking propagation\n2. Clarify that 'blocks' + 'parent-child' together create transitive blocking\n3. Note that 'related' and 'discovered-from' do NOT propagate blocking\n4. Add example showing epic → child blocking propagation\n\n**Example to add:**\n```bash\n# If epic is blocked, children are too\nbd create \"Epic 1\" -t epic -p 1\nbd create \"Task 1\" -t task -p 1\nbd dep add task-1 epic-1 --type parent-child\n\n# Block the epic\nbd create \"Blocker\" -t task -p 0\nbd dep add epic-1 blocker-1 --type blocks\n\n# Now both epic-1 AND task-1 are blocked\nbd ready # Neither will show up\n```","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-14T14:43:06.930812-07:00","updated_at":"2025-10-15T16:27:22.026496-07:00","closed_at":"2025-10-15T03:01:29.587207-07:00"}
@@ -213,7 +214,7 @@
{"id":"bd-67","title":"Create version bump script","description":"Create scripts/bump-version.sh to automate version syncing across all components.\n\nThe script should:\n1. Take a version number as argument (e.g., ./scripts/bump-version.sh 0.9.3)\n2. Update all version files:\n - cmd/bd/version.go (Version constant)\n - .claude-plugin/plugin.json (version field)\n - .claude-plugin/marketplace.json (plugins[].version)\n - integrations/beads-mcp/pyproject.toml (version field)\n - README.md (Alpha version mention)\n - PLUGIN.md (version requirements)\n3. Validate semantic versioning format\n4. Show diff preview before applying\n5. Optionally create git commit with standard message\n\nThis prevents the version mismatch issue that occurred when only version.go was updated.\n\nRelated: bd-66 (version sync issue)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-14T14:43:06.933094-07:00","updated_at":"2025-10-15T16:27:22.028645-07:00","closed_at":"2025-10-15T03:01:29.58971-07:00","dependencies":[{"issue_id":"bd-67","depends_on_id":"bd-71","type":"parent-child","created_at":"2025-10-14T14:43:06.962804-07:00","created_by":"auto-import"}]}
{"id":"bd-68","title":"Add system-wide/multi-repo support for beads","description":"GitHub issue #4 requests ability to use beads across multiple projects and for system-wide task tracking.\n\nCurrent limitation: beads is per-repository isolated. Each project has its own .beads/ directory and issues cannot reference issues in other projects.\n\nPotential approaches:\n1. Global beads instance in ~/.beads/global.db for cross-project work\n2. Project references - allow issues to link across repos\n3. Multi-project workspace support - one beads instance managing multiple repos\n4. Integration with existing MCP server to provide remote multi-project access\n\nUse cases:\n- System administrators tracking work across multiple machines/repos\n- Developers working on a dozen+ projects simultaneously\n- Cross-cutting concerns that span multiple repositories\n- Global todo list with project-specific subtasks\n\nRelated:\n- GitHub issue #4: https://github.com/steveyegge/beads/issues/4\n- Comparison to membank MCP which already supports multi-project via centralized server\n- MCP server at integrations/beads-mcp/ could be extended for this\n\nSee also: Testing framework for plugins (also from GH #4)","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-14T14:43:06.933446-07:00","updated_at":"2025-10-15T16:27:22.029194-07:00","dependencies":[{"issue_id":"bd-68","depends_on_id":"bd-71","type":"parent-child","created_at":"2025-10-14T14:43:06.963228-07:00","created_by":"auto-import"}]}
{"id":"bd-69","title":"Add coverage threshold to CI pipeline","description":"Current CI runs tests with coverage but doesn't enforce minimum threshold. Add step to fail if coverage drops below target.\n\nCurrent coverage: 60%\nRecommended thresholds:\n- Warn: 55%\n- Fail: 50%\n\nThis prevents coverage regression while allowing gradual improvement toward 80% target for 1.0.\n\nImplementation:\n1. Add coverage check step after test run\n2. Use 'go tool cover -func=coverage.out' to get total\n3. Parse percentage and compare to threshold\n4. Optionally: Use codecov's built-in threshold features\n\nRelated to test coverage improvement work (upcoming issue).","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-14T14:43:06.93378-07:00","updated_at":"2025-10-15T16:27:22.029753-07:00","closed_at":"2025-10-15T03:01:29.590601-07:00","dependencies":[{"issue_id":"bd-69","depends_on_id":"bd-71","type":"parent-child","created_at":"2025-10-14T14:43:06.963643-07:00","created_by":"auto-import"}]}
{"id":"bd-7","title":"Add performance benchmarks document","description":"Document actual performance metrics with hyperfine tests","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.934101-07:00","updated_at":"2025-10-15T16:27:22.030304-07:00","dependencies":[{"issue_id":"bd-7","depends_on_id":"bd-8","type":"parent-child","created_at":"2025-10-14T14:43:06.964048-07:00","created_by":"auto-import"}]}
{"id":"bd-7","title":"Add performance benchmarks document","description":"Document actual performance metrics with hyperfine tests","status":"closed","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.934101-07:00","updated_at":"2025-10-15T20:59:04.025996-07:00","closed_at":"2025-10-15T20:59:04.025996-07:00","dependencies":[{"issue_id":"bd-7","depends_on_id":"bd-8","type":"parent-child","created_at":"2025-10-14T14:43:06.964048-07:00","created_by":"auto-import"}]}
{"id":"bd-70","title":"Increase test coverage for auto-flush and auto-import features","description":"Critical features have 0% test coverage despite being core workflow functionality.\n\n**Uncovered areas (0% coverage):**\n\nAuto-flush/Auto-import (dirty tracking):\n- MarkIssueDirty / MarkIssuesDirty\n- GetDirtyIssues / GetDirtyIssueCount\n- ClearDirtyIssues / ClearDirtyIssuesByID\n- Auto-flush debouncing logic\n- Auto-import hash comparison\n\nDatabase/file discovery:\n- FindDatabasePath (finds .beads/*.db in directory tree)\n- FindJSONLPath (finds issues.jsonl)\n- findDatabaseInTree helper\n\nLabel operations:\n- AddLabel / RemoveLabel\n- GetLabels / GetIssuesByLabel\n\nEvents/Comments:\n- AddComment\n- GetEvents\n- GetStatistics\n\nMetadata storage:\n- SetMetadata / GetMetadata (used for import hash tracking)\n\nCLI output formatting:\n- outputJSON\n- printCollisionReport / printRemappingReport\n- createIssuesFromMarkdown\n\n**Priority areas:**\n1. Auto-flush/import (highest risk - core workflow)\n2. Database discovery (second - affects all operations)\n3. Labels/events (lower priority - less commonly used)\n\n**Test approach:**\n- Add unit tests for dirty tracking in sqlite package\n- Add integration tests for auto-flush timing and debouncing\n- Add tests for import hash detection and idempotency\n- Add tests for database discovery edge cases (permissions, nested dirs)\n\n**Target:** Get overall coverage from 60% → 75%, focus on cmd/bd (currently 24.1%)\n\n**Note:** These features work well in practice (dogfooding proves it) but edge cases (disk full, permissions, concurrent access, race conditions) are untested.","notes":"Test coverage significantly improved! Added comprehensive test suites:\n\n**Tests Added:**\n1. ✅ Dirty tracking (dirty_test.go): 8 tests\n - MarkIssueDirty, MarkIssuesDirty, GetDirtyIssues, ClearDirtyIssues\n - GetDirtyIssueCount, ClearDirtyIssuesByID\n - Ordering and timestamp update tests\n - Coverage: 75-100% for all functions\n\n2. ✅ Metadata storage (sqlite_test.go): 4 tests\n - SetMetadata, GetMetadata with various scenarios\n - Coverage: 100%\n\n3. ✅ Label operations (labels_test.go): 9 tests\n - AddLabel, RemoveLabel, GetLabels, GetIssuesByLabel\n - Duplicate handling, empty cases, dirty marking\n - Coverage: 71-82%\n\n4. ✅ Events \u0026 Comments (events_test.go): 7 tests\n - AddComment, GetEvents with limits\n - Timestamp updates, dirty marking\n - Multiple event types in history\n - Coverage: 73-92%\n\n5. ✅ Database/JSONL discovery (beads_test.go): 8 tests\n - FindDatabasePath with env var, tree search, home default\n - FindJSONLPath with existing files, defaults, multiple files\n - Coverage: 89-100%\n\n**Coverage Results:**\n- Overall: 70.3% (up from ~60%)\n- beads package: 90.6%\n- sqlite package: 74.7% (up from 60.8%)\n- types package: 100%\n\n**Impact:**\nAll priority 1 features now have solid test coverage. Auto-flush/import, metadata storage, labels, and events are thoroughly tested with edge cases covered.\n\nFiles created:\n- internal/storage/sqlite/dirty_test.go (8 tests)\n- internal/storage/sqlite/labels_test.go (9 tests)\n- internal/storage/sqlite/events_test.go (7 tests)\n- beads_test.go (8 tests)\n\nFiles modified:\n- internal/storage/sqlite/sqlite_test.go (added 4 metadata tests)\n\n**Next Steps (if desired):**\n- CLI output formatting tests (outputJSON, printCollisionReport, etc.)\n- Integration tests for auto-flush debouncing timing\n- More edge cases for concurrent access","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-14T14:43:06.934425-07:00","updated_at":"2025-10-15T16:27:22.030827-07:00","closed_at":"2025-10-15T03:01:29.591338-07:00","dependencies":[{"issue_id":"bd-70","depends_on_id":"bd-71","type":"parent-child","created_at":"2025-10-14T14:43:06.96447-07:00","created_by":"auto-import"}]}
{"id":"bd-71","title":"Code review follow-up: Post-PR #8 merge improvements","description":"Follow-up tasks from the ultrathink code review of PR #8 merge (bd-62).\n\n**Context:** PR #8 successfully merged atomic counter + dirty tracking. Core functionality is solid but several improvements identified.\n\n**Critical (P0-P1):**\n- bd-64: Fix SyncAllCounters performance bottleneck (P0)\n- bd-65: Add migration for issue_counters table (P1)\n- bd-66: Make import counter sync failure fatal (P1)\n\n**Nice to have (P2-P3):**\n- bd-67: Update test comments (P2)\n- bd-68: Add performance benchmarks (P2)\n- bd-69: Add metrics/logging (P3)\n- bd-70: Add EXPLAIN QUERY PLAN tests (P3)\n\n**Overall assessment:** 4/5 stars - Excellent implementation with one critical performance issue. After bd-64 is fixed, this becomes 5/5.\n\n**Review document:** Available if needed","notes":"All tasks completed: bd-64 (performance), bd-65 (migration), bd-66 (version sync), bd-67 (version script), bd-69 (CI coverage), bd-70 (test coverage). Code review follow-up complete.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-14T14:43:06.934829-07:00","updated_at":"2025-10-15T19:41:18.75038-07:00","closed_at":"2025-10-15T19:41:18.75038-07:00"}
{"id":"bd-72","title":"Test performance - issue 1","description":"","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-14T14:43:06.935489-07:00","updated_at":"2025-10-15T16:27:22.032229-07:00"}
@@ -224,7 +225,7 @@
{"id":"bd-77","title":"Performance test 5","description":"","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.937471-07:00","updated_at":"2025-10-15T16:27:22.038117-07:00"}
{"id":"bd-78","title":"Performance test 6","description":"","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.937812-07:00","updated_at":"2025-10-15T16:27:22.038536-07:00"}
{"id":"bd-79","title":"Performance test 7","description":"","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.938343-07:00","updated_at":"2025-10-15T16:27:22.038987-07:00"}
{"id":"bd-8","title":"Reach 1.0 release milestone","description":"Stabilize API, finalize documentation, comprehensive testing","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-14T14:43:06.938911-07:00","updated_at":"2025-10-15T16:27:22.039394-07:00"}
{"id":"bd-8","title":"Reach 1.0 release milestone","description":"Stabilize API, finalize documentation, comprehensive testing","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-14T14:43:06.938911-07:00","updated_at":"2025-10-15T21:00:51.057137-07:00","closed_at":"2025-10-15T21:00:51.057137-07:00"}
{"id":"bd-80","title":"Performance test 8","description":"","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.939443-07:00","updated_at":"2025-10-15T16:27:22.039892-07:00"}
{"id":"bd-81","title":"Performance test 9","description":"","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.939983-07:00","updated_at":"2025-10-15T16:27:22.04036-07:00"}
{"id":"bd-82","title":"Performance test 10","description":"","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-14T14:43:06.940364-07:00","updated_at":"2025-10-15T16:27:22.040837-07:00"}