bd sync: 2025-11-23 21:46:21
{"id":"bd-wrfz","content_hash":"a03a18ee69cedc5e7e9c7553d27168fd1935b8d77c863b4888a2b73fec9531bf","title":"Create queries.go with core issue CRUD methods","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-23T18:08:13.46714-08:00","updated_at":"2025-11-23T18:12:55.233765-08:00","closed_at":"2025-11-23T18:12:55.233765-08:00","source_repo":"."}
{"id":"bd-wta","content_hash":"eee40bbe4e00af632ad46e1461a25e4b0e5508bea115422aea0772381eec0d84","title":"Add performance benchmarks for multi-repo hydration","description":"The contributor-workflow-analysis.md asserts sub-second queries (line 702) and describes smart caching via file mtime tracking (Decision #4, lines 584-618), but doesn't provide concrete performance benchmarks.\n\nVC's requirement (from VC feedback section):\n- Executor polls GetReadyWork() every 5-10 seconds\n- Queries must be sub-second (ideally \u003c100ms)\n- Smart caching must avoid re-parsing JSONLs on every query\n\nSuggested performance targets to validate:\n- File stat overhead: \u003c1ms per repo\n- Hydration (when needed): \u003c500ms for typical JSONL (\u003c25k)\n- Query (from cache): \u003c10ms\n- Total GetReadyWork(): \u003c100ms (VC's requirement)\n\nAlso test at scale:\n- N=1 repo (baseline)\n- N=3 repos (typical)\n- N=10 repos (edge case)\n\nThese benchmarks are critical for library consumers like VC that run automated polling loops.","acceptance_criteria":"- Performance benchmark suite created for multi-repo hydration\n- Benchmarks cover file stat, hydration, and query times\n- Tests at N=1, N=3, N=10 repo scales\n- Results documented in contributor-workflow-analysis.md\n- Performance targets met or issues filed for optimization","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-03T20:24:39.331528-08:00","updated_at":"2025-11-05T14:17:15.079226-08:00","closed_at":"2025-11-05T14:17:15.079226-08:00","source_repo":"."}
{"id":"bd-wv9l","content_hash":"b592ae73e12d49eb8186f99e49d896685f9d414972647cd6fd327ba2a2e56fcd","title":"Code Review Sweep: thorough","description":"Perform thorough code review sweep based on accumulated activity.\n\n**AI Reasoning:**\nSignificant code activity with 7608 lines added and 120 files changed indicates substantial modifications. Multiple high-churn areas (cmd/bd, internal/rpc) suggest potential for subtle issues and emerging patterns that warrant review.\n\n**Scope:** thorough\n**Target Areas:** cmd/bd, internal/rpc, .beads\n**Estimated Files:** 12\n**Estimated Cost:** $5\n\n**Task:**\nReview files for non-obvious issues that agents miss during focused work:\n- Inefficiencies (algorithmic, resource usage)\n- Subtle bugs (race conditions, off-by-one, copy-paste)\n- Poor patterns (coupling, complexity, duplication)\n- Missing best practices (error handling, docs, tests)\n- Unnamed anti-patterns\n\nFile discovered issues with detailed reasoning and suggestions.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-21T23:23:16.056392-08:00","updated_at":"2025-11-23T20:55:16.619192-08:00","closed_at":"2025-11-23T19:47:01.132766-08:00","source_repo":".","labels":["code-review-sweep","review-area:.beads","review-area:cmd/bd","review-area:internal/rpc"]}
{"id":"bd-wvhk","content_hash":"c87e7401e2b815c7fb058ecd153379bf151087126349691e121bf1525a541224","title":"Code review: bd-pq5k merge conflict fix","description":"# Task\n\nThorough code review of bd-pq5k fix (commit d4f9a05)\n\n# Files Changed\n\n1. **internal/merge/merge.go**\n - New mergeStatus() function enforcing closed \u003e open rule\n - Modified closed_at handling in mergeIssue()\n - Changed deletion handling to always prefer deletion\n\n2. **internal/merge/merge_test.go**\n - New TestMergeStatus() with comprehensive coverage\n - New bd-pq5k regression test\n - Updated deletion tests to reflect new behavior\n\n3. **cmd/bd/snapshot_manager.go**\n - Simplified ComputeAcceptedDeletions() logic\n - Removed \"unchanged locally\" check\n - Updated documentation\n\n4. **cmd/bd/deletion_tracking_test.go**\n - Updated TestDeletionWithLocalModification expectations\n - Updated TestComputeAcceptedDeletions_LocallyModified\n\n5. **internal/storage/sqlite/util.go**\n - Added IsForeignKeyConstraintError() helper (for bd-koab)\n\n6. **internal/storage/sqlite/util_test.go**\n - Added TestIsForeignKeyConstraintError()\n\n# Review Focus Areas\n\n## Correctness\n- [ ] Does mergeStatus() correctly handle all status transition cases?\n- [ ] Is the closed_at logic sound (only set when status=closed)?\n- [ ] Does deletion-wins logic apply consistently across merge and deletion tracking?\n- [ ] Are there edge cases we haven't considered?\n\n## Consistency\n- [ ] Do merge.go and snapshot_manager.go agree on deletion semantics?\n- [ ] Are all tests updated to match the new behavior?\n- [ ] Is the bd-pq5k regression test comprehensive enough?\n\n## Impact Analysis\n- [ ] What happens to existing workflows that depended on conflict detection?\n- [ ] Could \"deletion always wins\" cause data loss in legitimate scenarios?\n- [ ] Should there be a config option to revert to old behavior?\n- [ ] Are there any backward compatibility concerns?\n\n## Test Coverage\n- [ ] Do tests cover all combinations of status merges?\n- [ ] Do tests cover both uppercase and lowercase FK constraint errors?\n- [ ] Are there scenarios that need additional test coverage?\n\n## Documentation\n- [ ] Are comments clear about why closed \u003e open?\n- [ ] Should we document the merge rules in user-facing docs?\n- [ ] Is the bd-pq5k reference helpful or should it include more context?\n\n## Performance\n- [ ] Does the simplified ComputeAcceptedDeletions improve performance?\n- [ ] Any performance implications of the new merge logic?\n\n# Review Commands\n\n```bash\n# View the commit\ngit show d4f9a05\n\n# Run all tests\ngo test ./...\n\n# Run merge tests specifically\ngo test -v github.com/steveyegge/beads/internal/merge\n\n# Run deletion tracking tests\ngo test -v github.com/steveyegge/beads/cmd/bd -run TestDeletion\n\n# Check test coverage\ngo test -cover github.com/steveyegge/beads/internal/merge\n```\n\n# Questions to Consider\n\n1. Should users be warned when deletion wins over their local modifications?\n2. Is there a legitimate case where we'd want open to win over closed?\n3. Should there be logging/telemetry when these rules apply?\n4. Do we need migration logic for existing invalid states in databases?","status":"open","priority":1,"issue_type":"task","created_at":"2025-11-23T21:46:15.862335-08:00","updated_at":"2025-11-23T21:46:15.862335-08:00","source_repo":"."}
{"id":"bd-ww0g","content_hash":"973e5e6eb58975fcbe80f804b69a900cde824af4b51243737ef5fca404d0b1c1","title":"MCP server: \"No workspace set\" and \"chunk longer than limit\" errors","description":"Two related errors reported in beads-mcp v0.21:\n\n**Error 1: \"No workspace set\" after successful set_context**\n```\n✓ Set beads context\n✗ list\n Error calling tool 'list': No workspace set. Either provide workspace_root\n parameter or call set_context() first.\n```\n\nHypothesis: Environment variable persistence issue between MCP tool calls, or ContextVar not being set correctly by @with_workspace decorator.\n\n**Error 2: \"Separator is found, but chunk is longer than limit\"**\n```\n✗ list\n Error calling tool 'list': Separator is found, but chunk is longer than limit\n```\n\nHypothesis: MCP protocol output size limit exceeded. Large issue databases may produce JSON output that exceeds MCP stdio buffer limits.\n\nPlatform: Fedora 43, using copilot-cli with Sonnet 4.5\n\nWorkaround: CLI works fine (`bd list --status open --json`)","notes":"## Fixes Implemented\n\n**Issue 1: \"No workspace set\" after successful set_context** ✅ FIXED\n\nRoot cause: os.environ doesn't persist across MCP tool calls. When set_context() set BEADS_WORKING_DIR in os.environ, that change was lost on the next tool call.\n\nSolution:\n- Added module-level _workspace_context dict for persistent storage (server.py:51)\n- Modified set_context() to store in both persistent dict and os.environ (server.py:265-287)\n- Modified with_workspace() decorator to check persistent context first (server.py:129-133)\n- Updated where_am_i() to check persistent context (server.py:302-330)\n\n**Issue 2: \"chunk longer than limit\"** ✅ FIXED\n\nRoot cause: MCP stdio protocol has buffer limits. Large issue lists with full dependencies/dependents exceed this.\n\nSolution:\n- Reduced default list limit from 50 to 20 (server.py:356, models.py:122)\n- Reduced max list limit from 1000 to 100 (models.py:122)\n- Strip dependencies/dependents from list() and ready() responses (server.py:343-350, 368-373)\n- Full dependency details still available via show() command\n\n## Testing\n\n✅ Python syntax validated with py_compile\n✅ Changes are backward compatible\n✅ Persistent context falls back to os.environ for compatibility\n\nUsers should now be able to call set_context() once and have it persist across all subsequent tool calls. Large databases will no longer cause buffer overflow errors.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-07T14:32:18.315155-08:00","updated_at":"2025-11-07T21:02:55.470937-08:00","closed_at":"2025-11-07T16:53:46.929942-08:00","source_repo":"."}
{"id":"bd-x2ba","content_hash":"a238eea6a86e16961fcf1b0c2d707359aec731f0857b75f14187f52286cf8624","title":"Create config.go with config and metadata methods","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-23T18:08:14.746163-08:00","updated_at":"2025-11-23T18:13:11.540896-08:00","closed_at":"2025-11-23T18:13:11.540896-08:00","source_repo":"."}
{"id":"bd-x47","content_hash":"e363d887fa6693c1c748d78ea9cdaaa97606889d910f318fbd29584576da57e9","title":"Add guidance for self-hosting projects","description":"The contributor-workflow-analysis.md is optimized for OSS contributors making PRs to upstream projects. However, it doesn't address projects like VC that use beads for their own development (self-hosting).\n\nSelf-hosting projects differ from OSS contributors:\n- No upstream/downstream distinction (they ARE the project)\n- May run automated executors (not just humans)\n- In bootstrap/early phase (stability matters)\n- Single team/owner (not multiple contributors with permissions)\n\nGuidance needed on:\n- When self-hosting projects should stay single-repo (default, recommended)\n- When they should adopt multi-repo (team planning, multi-phase dev)\n- How automated executors should handle multi-repo (if at all)\n- Special considerations for projects in bootstrap phase\n\nExamples of self-hosting projects: VC (building itself with beads), internal tools, pet projects","acceptance_criteria":"- Section added: 'For Projects Using Beads for Self-Hosting'\n- Clear guidance on when to stay single-repo vs adopt multi-repo\n- Recommendations for automated executor behavior with multi-repo\n- Bootstrap phase considerations documented","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-03T20:24:27.805341-08:00","updated_at":"2025-11-05T14:16:34.69662-08:00","closed_at":"2025-11-05T14:16:34.69662-08:00","source_repo":"."}