bd sync: 2025-11-20 20:35:20

This commit is contained in:
Steve Yegge
2025-11-20 20:35:20 -05:00
parent a1e507520c
commit d6bc798ab9


@@ -313,7 +313,7 @@
{"id":"bd-c825f867","content_hash":"e2925468dd33e89b5930382acb9a0ef9c48a3570d376068f9e3a39bb245f0c9d","title":"Add docs/architecture/event_driven.md","description":"Copy event_driven_daemon.md into docs/ folder. Add to documentation index.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T16:20:02.431399-07:00","updated_at":"2025-11-08T01:58:15.282811-08:00","closed_at":"2025-11-08T00:51:06.826771-08:00","source_repo":"."}
{"id":"bd-c947dd1b","content_hash":"79bd51b46b28bc16cfc19cd19a4dd4f57f45cd1e902b682788d355b03ec00b2a","title":"Remove Daemon Storage Cache","description":"The daemon's multi-repo storage cache is the root cause of stale data bugs. Since global daemon is deprecated, we only ever serve one repository, making the cache unnecessary complexity. This epic removes the cache entirely for simpler, more reliable direct storage access.","design":"For local daemon (single repository), eliminate the cache entirely:\n- Use s.storage field directly (opened at daemon startup)\n- Remove getStorageForRequest() routing logic\n- Remove server_cache_storage.go entirely (~300 lines)\n- Remove cache-related tests\n- Simplify Server struct\n\nBenefits:\n✅ No staleness bugs: Always using live SQLite connection\n✅ Simpler code: Remove ~300 lines of cache management\n✅ Easier debugging: Direct storage access, no cache indirection\n✅ Same performance: Cache was always 1 entry for local daemon anyway","acceptance_criteria":"- Daemon has no storage cache code\n- All tests pass\n- MCP integration works\n- No stale data bugs\n- Documentation updated\n- Performance validated","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-28T10:50:15.126939-07:00","updated_at":"2025-10-30T17:12:58.21743-07:00","closed_at":"2025-10-28T10:49:53.612049-07:00","source_repo":"."}
{"id":"bd-c9a482db","content_hash":"f939b9e15e7143d89626757438a69530fa9165a2f66588fd55f2e6146c20d646","title":"Add internal/ai package for AI-assisted repairs","description":"Add AI integration package to support AI-powered repair commands.\n\nProviders:\n- Anthropic (Claude)\n- OpenAI\n- Ollama (local)\n\nFeatures:\n- Conflict resolution analysis\n- Duplicate detection via embeddings\n- Configuration via env vars (BEADS_AI_PROVIDER, BEADS_AI_API_KEY, etc.)\n\nSee repair_commands.md lines 357-425 for design.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T19:37:55.722841-07:00","updated_at":"2025-11-06T19:36:13.972304-08:00","closed_at":"2025-11-06T19:27:19.150657-08:00","source_repo":"."}
{"id":"bd-ca0b","content_hash":"623867d1e82a9ea9ed2df654e0d894794e393ab4d431cbda60313d2b742e755a","title":"bd sync should auto-resolve conflicts instead of failing","description":"Currently bd sync fails with 'Pre-export validation failed: refusing to export: JSONL is newer than database' when there's a conflict. Instead, it should intelligently determine what needs to be done:\n\n1. If JSONL is newer: auto-import first, then export\n2. If DB is newer: proceed with export\n3. If both modified: attempt auto-merge or provide clear resolution options\n\nThe command should 'just work' instead of requiring users to manually determine and run the right sequence of import/export commands.\n\nThis is especially annoying in normal workflows where you're just trying to sync your changes.","status":"open","priority":1,"issue_type":"task","created_at":"2025-11-20T20:30:47.406796-05:00","updated_at":"2025-11-20T20:30:47.406796-05:00","source_repo":"."}
{"id":"bd-ca0b","content_hash":"a3b7857ebb697a6b220530e7ac7e5213ecc5a8a7f106c549503d3b8ab385695a","title":"bd sync should auto-resolve conflicts instead of failing","description":"Currently bd sync fails with 'Pre-export validation failed: refusing to export: JSONL is newer than database' when there's a conflict. Instead, it should intelligently determine what needs to be done:\n\n1. If JSONL is newer: auto-import first, then export\n2. If DB is newer: proceed with export\n3. If both modified: attempt auto-merge or provide clear resolution options\n\nThe command should 'just work' instead of requiring users to manually determine and run the right sequence of import/export commands.\n\nThis is especially annoying in normal workflows where you're just trying to sync your changes.","notes":"Implementation complete. bd sync now auto-imports when JSONL is newer than database, eliminating the confusing error message and making the workflow just work.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-20T20:30:47.406796-05:00","updated_at":"2025-11-20T20:33:58.379712-05:00","closed_at":"2025-11-20T20:33:58.379717-05:00","source_repo":"."}
{"id":"bd-caa9","content_hash":"6e8d4006d4f9b265e63fad9e30f24c6ab29fbf79ef47ec90a7e1225b5d662b67","title":"Migration tool for existing users","description":"Ensure smooth migration for existing users to separate branch workflow.\n\nTasks:\n- Add bd migrate --separate-branch command\n- Detect existing repos, migrate cleanly\n- Preserve git history\n- Add rollback mechanism\n- Test migration on beads' own repo (dogfooding)\n- Communication plan (GitHub discussion, docs)\n- Version compatibility checks\n\nEstimated effort: 2-3 days","acceptance_criteria":"- Existing users can migrate without data loss\n- Rollback works if migration fails\n- Clear communication about breaking changes (if any)\n- beads project itself migrated successfully (dogfooding)\n- Migration tested on 5+ real-world repos","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-02T15:22:35.627388-08:00","updated_at":"2025-11-04T12:36:53.789201-08:00","closed_at":"2025-11-04T12:36:53.789201-08:00","source_repo":".","dependencies":[{"issue_id":"bd-caa9","depends_on_id":"bd-a101","type":"parent-child","created_at":"2025-11-02T15:22:48.382619-08:00","created_by":"stevey"}]}
{"id":"bd-cb2f","content_hash":"99b9c1c19d5e9f38308d78f09763426777797f133d4c86edd579419e7ba4043f","title":"Week 1 task","description":"","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-03T19:11:59.358093-08:00","updated_at":"2025-11-03T19:11:59.358093-08:00","source_repo":".","labels":["frontend","week2"]}
{"id":"bd-cb64c226.1","content_hash":"0bfd0735c8985d3b3e4906e44f22b06fb24758c6d795188226e920bd8b3e7cf8","title":"Performance Validation","description":"Confirm no performance regression from cache removal","acceptance_criteria":"- Benchmarks show no significant regression\n- Document performance characteristics\n- Confirm single SQLite connection is reused\n\nBenchmarks: go test -bench=. -benchmem ./internal/rpc/...\n\nMetrics to track:\n- Request latency (p50, p99)\n- Throughput (requests/sec)\n- Memory usage\n- SQLite connection overhead\n\nExpected results:\n- Latency: Same or better (no cache overhead)\n- Throughput: Same (cache was always 1 entry)\n- Memory: Lower (no cache structs)\n- Connection overhead: Zero (single connection reused)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-28T10:50:15.126019-07:00","updated_at":"2025-10-30T17:12:58.216721-07:00","closed_at":"2025-10-28T10:49:45.021037-07:00","source_repo":"."}