Fix bd-143: Prevent daemon auto-sync from wiping out issues.jsonl with empty database
- Added safety check to exportToJSONLWithStore (daemon path)
- Refuses to export 0 issues over non-empty JSONL file
- Added --force flag to override safety check when intentional
- Added test coverage for empty database export protection
- Prevents data loss when daemon has wrong/empty database

Amp-Thread-ID: https://ampcode.com/threads/T-de18e0ad-bd17-46ec-994b-0581e257dcde
Co-authored-by: Amp <amp@ampcode.com>
@@ -38,15 +38,18 @@
{"id":"bd-132","title":"GH#146: No color showing in terminal for some users","description":"User reports color not working in macOS (Taho 26.0.1) with iTerm 3.6.4 and Terminal.app, despite color working elsewhere in terminal. Python rich and printf escape codes work.\n\nNeed to investigate:\n- Is NO_COLOR env var set?\n- Terminal type detection?\n- fatih/color library configuration\n- Does bd list show colors? bd ready? bd init?\n- What's the output of: echo $TERM, echo $NO_COLOR","status":"open","priority":2,"issue_type":"bug","created_at":"2025-10-24T22:26:36.22163-07:00","updated_at":"2025-10-24T22:26:36.22163-07:00","external_ref":"github:146"}
{"id":"bd-133","title":"Fix nil pointer crash in bd reopen command","description":"bd reopen crashes with SIGSEGV at reopen.go:30. Nil pointer dereference when trying to reopen an issue.","notes":"Fixed by adding daemon RPC support to reopen command. Pattern: check daemonClient != nil first, use RPC UpdateArgs with Status=open, fall back to direct store if daemon unavailable.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-10-25T10:30:31.602438-07:00","updated_at":"2025-10-25T10:33:39.016623-07:00","closed_at":"2025-10-25T10:33:39.016623-07:00"}
{"id":"bd-134","title":"Address gosec security warnings (102 issues)","description":"Security linter warnings: file permissions (0755 should be 0750), G304 file inclusion via variable, G204 subprocess launches. Many are false positives but should be reviewed.","design":"Review each gosec warning. Add exclusions for legitimate cases to .golangci.yml. Fix real security issues (overly permissive file modes).","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-25T13:47:10.719134-07:00","updated_at":"2025-10-25T13:47:10.719134-07:00"}
{"id":"bd-135","title":"Multi-project MCP context switching (GH#145)","description":"Enable MCP server to manage multiple beads projects in a single session with per-request workspace_root parameter.\n\nCurrent bug: set_context(workspace_root) doesn't actually switch sockets - all operations hit first initialized socket.\n\nUse case: Managing tasks across multiple organizations with different permission models (internal, partners, open source).\n\nArchitecture: Connection pool keyed by workspace_root, each maintaining its own daemon socket connection. Request-scoped routing using ContextVar to avoid global state races.\n\nSee GH#145 for full requirements and user context.","design":"✅ APPROVED WITH MODIFICATIONS by architectural review (2025-10-25)\n\nLSP-style model is correct: Single MCP server → per-project daemons → isolated databases.\n\nCRITICAL CHANGES from review:\n1. **Simplify connection pool**: No LRU eviction initially (typical user: 2-5 projects)\n2. **Add asyncio.Lock**: Prevent race conditions in pool access\n3. **Defer health checks**: Only retry on failure, not preemptive pings\n4. **Handle submodules**: Check local .beads BEFORE git toplevel\n5. **Path canonicalization**: realpath + git toplevel with caching\n\nRISKS MITIGATED:\n- Global _client bug: Replace with connection pool keyed by canonical path\n- Race conditions: Add asyncio.Lock for pool mutations\n- Submodule edge case: Check .beads directory first\n- Stale sockets: Retry once on connection failure\n\nEstimated effort: 2.5-3.5 days (simplified from 2.5-4.5 days)\nConfidence: 8/10","acceptance_criteria":"- Multiple projects can be accessed in single MCP session\n- Per-request workspace_root parameter works on all tools\n- No cross-project data leakage\n- Concurrent calls to different projects work correctly\n- Stale sockets auto-reconnect with retry/backoff\n- Integration tests verify isolation across 2+ temp repos\n- set_context() still works as default fallback","notes":"Review doc: docs/bd-135-architectural-review.md\n\nImplementation order:\n1. bd-139 (connection manager) - Foundation with pool + lock\n2. bd-136 (ContextVar routing) - Per-request workspace\n3. bd-138 (require_context) - Validation\n4. bd-140 (tests) - Concurrency + edge cases\n5. bd-141 (docs) - Usage guide\n6. bd-142 (health checks) - DEFERRED to Phase 2","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-25T13:59:57.231937-07:00","updated_at":"2025-10-25T14:23:57.985669-07:00"}
{"id":"bd-136","title":"Implement request-scoped routing with ContextVar","description":"Add ContextVar-based routing to avoid global state races during concurrent multi-project calls.\n\nApproach:\n- Define current_workspace: ContextVar[str|None] in server.py\n- Add @with_workspace decorator that resolves workspace_root (via _resolve_workspace_root + realpath)\n- Set ContextVar for duration of tool call, reset after\n- Falls back to set_context default (BEADS_WORKING_DIR) if workspace_root not provided\n- beads_mcp.tools.get_client() reads current_workspace from ContextVar\n\nBlocks: bd-135 (connection manager must exist first)","design":"Decorator pattern with ContextVar for request-scoped workspace routing.\n\n@with_workspace decorator:\n- Extract workspace_root parameter from tool call\n- Resolve via _resolve_workspace_root + realpath\n- Set current_workspace ContextVar for request duration\n- Falls back to BEADS_WORKING_DIR if workspace_root not provided\n- Reset ContextVar after tool completes\n\nApplied to all tools in server.py. _get_client() reads current_workspace.\n\n⚠️ CONCURRENCY GOTCHA (from architectural review):\n- ContextVar doesn't propagate to asyncio.create_task() spawned tasks\n- SOLUTION: Keep tool calls synchronous, no background task spawning\n- If background tasks needed: use contextvars.copy_context()\n\nDocument this limitation in bd-141.","notes":"Blocks on bd-139 (connection pool must exist first).\n\nCRITICAL: Do NOT spawn background tasks within tool implementations.\nContextVar propagation to spawned tasks is unreliable.","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-25T14:00:27.895512-07:00","updated_at":"2025-10-25T14:24:26.774466-07:00","dependencies":[{"issue_id":"bd-136","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.896366-07:00","created_by":"daemon"}]}
{"id":"bd-135","title":"Multi-project MCP context switching (GH#145)","description":"Enable MCP server to manage multiple beads projects in a single session with per-request workspace_root parameter.\n\nCurrent bug: set_context(workspace_root) doesn't actually switch sockets - all operations hit first initialized socket.\n\nUse case: Managing tasks across multiple organizations with different permission models (internal, partners, open source).\n\nArchitecture: Connection pool keyed by workspace_root, each maintaining its own daemon socket connection. Request-scoped routing using ContextVar to avoid global state races.\n\nSee GH#145 for full requirements and user context.","design":"✅ APPROVED WITH MODIFICATIONS by architectural review (2025-10-25)\n\nLSP-style model is correct: Single MCP server → per-project daemons → isolated databases.\n\nCRITICAL CHANGES from review:\n1. **Simplify connection pool**: No LRU eviction initially (typical user: 2-5 projects)\n2. **Add asyncio.Lock**: Prevent race conditions in pool access\n3. **Defer health checks**: Only retry on failure, not preemptive pings\n4. **Handle submodules**: Check local .beads BEFORE git toplevel\n5. **Path canonicalization**: realpath + git toplevel with caching\n\nRISKS MITIGATED:\n- Global _client bug: Replace with connection pool keyed by canonical path\n- Race conditions: Add asyncio.Lock for pool mutations\n- Submodule edge case: Check .beads directory first\n- Stale sockets: Retry once on connection failure\n\nEstimated effort: 2.5-3.5 days (simplified from 2.5-4.5 days)\nConfidence: 8/10","acceptance_criteria":"- Multiple projects can be accessed in single MCP session\n- Per-request workspace_root parameter works on all tools\n- No cross-project data leakage\n- Concurrent calls to different projects work correctly\n- Stale sockets auto-reconnect with retry/backoff\n- Integration tests verify isolation across 2+ temp repos\n- set_context() still works as default fallback","notes":"Review doc: docs/bd-135-architectural-review.md\n\nImplementation order:\n1. bd-139 (connection manager) - Foundation with pool + lock\n2. bd-136 (ContextVar routing) - Per-request workspace\n3. bd-138 (require_context) - Validation\n4. bd-140 (tests) - Concurrency + edge cases\n5. bd-141 (docs) - Usage guide\n6. bd-142 (health checks) - DEFERRED to Phase 2","status":"closed","priority":1,"issue_type":"epic","assignee":"amp","created_at":"2025-10-25T13:59:57.231937-07:00","updated_at":"2025-10-25T14:36:02.046142-07:00","closed_at":"2025-10-25T14:36:02.046142-07:00"}
{"id":"bd-136","title":"Implement request-scoped routing with ContextVar","description":"Add ContextVar-based routing to avoid global state races during concurrent multi-project calls.\n\nApproach:\n- Define current_workspace: ContextVar[str|None] in server.py\n- Add @with_workspace decorator that resolves workspace_root (via _resolve_workspace_root + realpath)\n- Set ContextVar for duration of tool call, reset after\n- Falls back to set_context default (BEADS_WORKING_DIR) if workspace_root not provided\n- beads_mcp.tools.get_client() reads current_workspace from ContextVar\n\nBlocks: bd-135 (connection manager must exist first)","design":"Decorator pattern with ContextVar for request-scoped workspace routing.\n\n@with_workspace decorator:\n- Extract workspace_root parameter from tool call\n- Resolve via _resolve_workspace_root + realpath\n- Set current_workspace ContextVar for request duration\n- Falls back to BEADS_WORKING_DIR if workspace_root not provided\n- Reset ContextVar after tool completes\n\nApplied to all tools in server.py. _get_client() reads current_workspace.\n\n⚠️ CONCURRENCY GOTCHA (from architectural review):\n- ContextVar doesn't propagate to asyncio.create_task() spawned tasks\n- SOLUTION: Keep tool calls synchronous, no background task spawning\n- If background tasks needed: use contextvars.copy_context()\n\nDocument this limitation in bd-141.","notes":"Blocks on bd-139 (connection pool must exist first).\n\nCRITICAL: Do NOT spawn background tasks within tool implementations.\nContextVar propagation to spawned tasks is unreliable.","status":"closed","priority":1,"issue_type":"task","assignee":"amp","created_at":"2025-10-25T14:00:27.895512-07:00","updated_at":"2025-10-25T14:32:36.531658-07:00","closed_at":"2025-10-25T14:32:36.531658-07:00","dependencies":[{"issue_id":"bd-136","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.896366-07:00","created_by":"daemon"}]}
{"id":"bd-137","title":"Add health checks and reconnection logic for stale sockets","description":"Handle stale/broken socket connections after daemon restarts or upgrades.\n\nFeatures:\n- ping/health_check method on client\n- Check before use or periodic health check\n- On failure: drop from pool, attempt reconnect with exponential backoff\n- Clear error propagation if reconnect fails after retries\n- Handle version mismatch after daemon upgrade\n- Handle long-idle connections closed by daemon","design":"Add async ping() to DaemonClient. ConnectionManager.get_client() pings before returning cached client. On failure, del from pool and retry connect. Bounded retry (3-5 attempts) with backoff.","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-25T14:00:27.8967-07:00","updated_at":"2025-10-25T14:00:27.8967-07:00"}
{"id":"bd-138","title":"Fix require_context to support per-request workspace_root","description":"Update require_context decorator to pass if either:\n- workspace_root was provided on the tool call (via ContextVar), OR\n- BEADS_WORKING_DIR is set (from set_context)\n\nStop using BEADS_DB as router - treat it only as CLI fallback.\n\nEnsures backward compatibility with set_context() while supporting new per-request routing.","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-25T14:00:27.896646-07:00","updated_at":"2025-10-25T14:00:27.896646-07:00","dependencies":[{"issue_id":"bd-138","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.89804-07:00","created_by":"daemon"}]}
{"id":"bd-139","title":"Add connection manager for per-project daemon sockets","description":"Create connection pool for managing multiple daemon socket connections in beads_mcp.tools.\n\nSIMPLIFIED APPROACH (per architectural review):\n- Connection pool dict[str, BdClientBase] keyed by canonical path\n- Add asyncio.Lock for thread-safe pool access (CRITICAL)\n- Path canonicalization with @lru_cache for performance\n- NO LRU eviction (start simple, add if monitoring shows memory issues)\n- NO preemptive health checks (only retry on failure)\n\nFeatures:\n- get_client(workspace_root) → returns client for that project\n- Canonicalization: realpath + git toplevel (handles symlinks)\n- Submodule-aware: Check local .beads BEFORE git toplevel\n- Auto-start daemon if socket missing (existing behavior)\n- Validate version on connect (existing behavior)\n\nCRITICAL: Replace global _client singleton with pool to fix set_context() bug.","design":"Implementation in tools.py:\n\n```python\nfrom contextvars import ContextVar\nimport asyncio\nfrom functools import lru_cache\n\ncurrent_workspace: ContextVar[str | None] = ContextVar('workspace', default=None)\n_connection_pool: Dict[str, BdClientBase] = {}\n_pool_lock = asyncio.Lock() # Prevent race conditions\n\n@lru_cache(maxsize=128)\ndef _canonicalize_path(path: str) -\u003e str:\n # 1. realpath to resolve symlinks\n real = os.path.realpath(path)\n \n # 2. Check local .beads (submodule edge case)\n if os.path.exists(os.path.join(real, \".beads\")):\n return real\n \n # 3. Try git toplevel\n return _resolve_workspace_root(real)\n\nasync def _get_client() -\u003e BdClientBase:\n workspace = current_workspace.get() or os.environ.get(\"BEADS_WORKING_DIR\")\n if not workspace:\n raise BdError(\"No workspace set\")\n \n canonical = _canonicalize_path(workspace)\n \n async with _pool_lock: # CRITICAL: prevent concurrent mutations\n if canonical not in _connection_pool:\n _connection_pool[canonical] = create_bd_client(\n prefer_daemon=True,\n working_dir=canonical\n )\n return _connection_pool[canonical]\n```\n\n~200 LOC total (simplified from ~400-500)","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-25T14:00:27.896896-07:00","updated_at":"2025-10-25T14:24:15.261895-07:00","dependencies":[{"issue_id":"bd-139","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.89917-07:00","created_by":"daemon"}]}
{"id":"bd-138","title":"Fix require_context to support per-request workspace_root","description":"Update require_context decorator to pass if either:\n- workspace_root was provided on the tool call (via ContextVar), OR\n- BEADS_WORKING_DIR is set (from set_context)\n\nStop using BEADS_DB as router - treat it only as CLI fallback.\n\nEnsures backward compatibility with set_context() while supporting new per-request routing.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-25T14:00:27.896646-07:00","updated_at":"2025-10-25T14:32:36.533618-07:00","closed_at":"2025-10-25T14:32:36.533618-07:00","dependencies":[{"issue_id":"bd-138","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.89804-07:00","created_by":"daemon"}]}
{"id":"bd-139","title":"Add connection manager for per-project daemon sockets","description":"Create connection pool for managing multiple daemon socket connections in beads_mcp.tools.\n\nSIMPLIFIED APPROACH (per architectural review):\n- Connection pool dict[str, BdClientBase] keyed by canonical path\n- Add asyncio.Lock for thread-safe pool access (CRITICAL)\n- Path canonicalization with @lru_cache for performance\n- NO LRU eviction (start simple, add if monitoring shows memory issues)\n- NO preemptive health checks (only retry on failure)\n\nFeatures:\n- get_client(workspace_root) → returns client for that project\n- Canonicalization: realpath + git toplevel (handles symlinks)\n- Submodule-aware: Check local .beads BEFORE git toplevel\n- Auto-start daemon if socket missing (existing behavior)\n- Validate version on connect (existing behavior)\n\nCRITICAL: Replace global _client singleton with pool to fix set_context() bug.","design":"Implementation in tools.py:\n\n```python\nfrom contextvars import ContextVar\nimport asyncio\nfrom functools import lru_cache\n\ncurrent_workspace: ContextVar[str | None] = ContextVar('workspace', default=None)\n_connection_pool: Dict[str, BdClientBase] = {}\n_pool_lock = asyncio.Lock() # Prevent race conditions\n\n@lru_cache(maxsize=128)\ndef _canonicalize_path(path: str) -\u003e str:\n # 1. realpath to resolve symlinks\n real = os.path.realpath(path)\n \n # 2. Check local .beads (submodule edge case)\n if os.path.exists(os.path.join(real, \".beads\")):\n return real\n \n # 3. Try git toplevel\n return _resolve_workspace_root(real)\n\nasync def _get_client() -\u003e BdClientBase:\n workspace = current_workspace.get() or os.environ.get(\"BEADS_WORKING_DIR\")\n if not workspace:\n raise BdError(\"No workspace set\")\n \n canonical = _canonicalize_path(workspace)\n \n async with _pool_lock: # CRITICAL: prevent concurrent mutations\n if canonical not in _connection_pool:\n _connection_pool[canonical] = create_bd_client(\n prefer_daemon=True,\n working_dir=canonical\n )\n return _connection_pool[canonical]\n```\n\n~200 LOC total (simplified from ~400-500)","status":"closed","priority":1,"issue_type":"task","assignee":"amp","created_at":"2025-10-25T14:00:27.896896-07:00","updated_at":"2025-10-25T14:31:08.910695-07:00","closed_at":"2025-10-25T14:31:08.910695-07:00","dependencies":[{"issue_id":"bd-139","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.89917-07:00","created_by":"daemon"}]}
{"id":"bd-14","title":"Auto-flush writes test pollution and session work to git-tracked issues.jsonl","description":"Auto-flush exports ALL issues from DB to issues.jsonl every 5 seconds, including:\n- Test issues (bd-4053 through bd-4059 were version test junk)\n- Issues created during debugging sessions\n- Test pollution from stress tests\n- Temporary diagnostic issues\n\nThis pollutes the git-tracked issues.jsonl with garbage that shouldn't be committed.\n\nExample from today:\n- Git had 49 clean issues\n- Our DB grew to 100+ with test junk and session work\n- Auto-flush wrote all 100+ to issues.jsonl\n- Git status showed modified issues.jsonl with 50+ unwanted issues\n\nImpact:\n- Pollutes git history with test/debug garbage\n- Makes code review difficult (noise in diffs)\n- Can't distinguish real work from session artifacts\n- Other team members pull polluted issues\n\nSolutions to consider:\n1. Disable auto-flush by default (require explicit --enable-auto-flush)\n2. Add .beadsignore to exclude issue ID patterns\n3. Make auto-flush only export 'real' issues (exclude test-*)\n4. Require manual 'bd sync' for git commit\n5. Auto-flush to separate file (.beads/session.jsonl vs issues.jsonl)\n\nRelated: bd-117 (test pollution), isolation_test.go (test DB separation)","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-10-22T00:05:10.788996-07:00","updated_at":"2025-10-24T13:51:54.437366-07:00","closed_at":"2025-10-22T01:05:59.459797-07:00"}
{"id":"bd-140","title":"Add integration tests for multi-project MCP switching","description":"Comprehensive tests to verify multi-project isolation, concurrency, and edge cases.\n\nEXPANDED TEST COVERAGE (per architectural review):\n\n**Concurrency tests (CRITICAL):**\n- asyncio.gather() with calls to different workspace_root values\n- Verify no cross-project data leakage\n- Verify pool lock prevents race conditions\n\n**Edge case tests:**\n- Submodule handling: Parent repo vs submodule with own .beads\n- Symlink deduplication: Same physical path via different symlinks\n- Stale socket recovery: Kill daemon, verify retry on failure\n- Missing .beads directory handling\n\n**Isolation tests:**\n- Create 2+ temp repos with bd init\n- Verify operations in project A don't affect project B\n- Stress test: many parallel calls across 3-5 repos\n\nEstimated effort: M (1-2 days) including fixtures for temp repos and daemon process management","design":"Test structure:\n\n1. test_concurrent_multi_project.py:\n - asyncio.gather with 2+ projects\n - Verify pool lock prevents corruption\n \n2. test_path_canonicalization.py:\n - Submodule edge case (check .beads first)\n - Symlink deduplication (realpath normalization)\n \n3. test_stale_socket_recovery.py:\n - Kill daemon mid-session\n - Verify retry-on-failure works\n \n4. test_cross_project_isolation.py:\n - Create issues in project A\n - List from project B, verify empty\n - No data leakage\n\nUse pytest fixtures for temp repos and daemon lifecycle.","acceptance_criteria":"- All concurrency tests pass with asyncio.gather\n- Submodule edge case handled correctly\n- Symlinks deduplicated to same connection\n- Stale socket retry works\n- No cross-project data leakage in stress tests","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-25T14:00:27.896623-07:00","updated_at":"2025-10-25T14:24:42.04792-07:00","dependencies":[{"issue_id":"bd-140","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.90028-07:00","created_by":"daemon"}]}
{"id":"bd-141","title":"Update MCP multi-project documentation","description":"Update documentation for multi-project workflow in README.md, AGENTS.md, and MCP integration docs.\n\nEXPANDED SECTIONS (per architectural review):\n\n**Usage examples:**\n- Per-request workspace_root parameter usage\n- Concurrent multi-project queries with asyncio.gather\n- Migration from set_context() to workspace_root parameter\n\n**Architecture notes:**\n- Connection pooling behavior (no limits initially)\n- set_context() as default fallback (still supported)\n- Library users NOT affected (all changes in MCP layer)\n\n**Concurrency gotchas (CRITICAL):**\n- ContextVar doesn't propagate to asyncio.create_task()\n- Do NOT spawn background tasks in tool implementations\n- All tool calls should be synchronous/sequential\n\n**Troubleshooting:**\n- Stale sockets (retry once on failure)\n- Version mismatches (auto-detected since v0.16.0)\n- Path aliasing via symlinks (deduplicated by realpath)\n- Submodules with own .beads (handled correctly)\n\n**Use cases:**\n- Multi-organization collaboration (GH#145)\n- Parallel project management scripts\n- Cross-project queries","design":"Documentation structure:\n\n1. integrations/beads-mcp/README.md:\n - Add \"Multi-Project Support\" section\n - workspace_root parameter examples\n - Connection pool behavior\n \n2. AGENTS.md:\n - Update MCP section with workspace_root usage\n - Add concurrency warning (no spawned tasks)\n - Document library non-impact\n \n3. New: docs/MCP_MULTI_PROJECT.md:\n - Detailed architecture explanation\n - Migration guide from set_context()\n - Troubleshooting guide\n - Edge cases (submodules, symlinks)","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-25T14:00:27.897025-07:00","updated_at":"2025-10-25T14:25:14.524235-07:00","dependencies":[{"issue_id":"bd-141","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.901495-07:00","created_by":"daemon"}]}
{"id":"bd-142","title":"Add health checks and reconnection logic for stale sockets","description":"⚠️ DEFERRED TO PHASE 2 (per architectural review)\n\nAdd health checks and reconnection logic for stale sockets ONLY IF monitoring shows it's needed.\n\nSIMPLIFIED APPROACH:\n- NO preemptive health checks (adds latency to every call)\n- NO periodic ping before use (daemon restarts are rare)\n- YES: Single retry on connection failure (DaemonConnectionError)\n- YES: Evict stale client from pool on failure\n\nRationale:\n- Stale sockets are rare (daemon auto-restart is uncommon)\n- Preemptive checks add latency with little benefit\n- Retry-on-failure is sufficient for most cases\n\nImplementation (if needed later):\n- Wrap tool calls with try/except\n- On DaemonConnectionError: evict from pool, retry once\n- Log failures for monitoring\n\nMonitor after Phase 1 launch:\n- Frequency of stale socket errors\n- User reports of connection issues\n- Decision point: Add if \u003e1% of calls fail","design":"Simplified retry wrapper (implement only if monitoring shows need):\n\n```python\nasync def _robust_client_call(func):\n try:\n client = await _get_client()\n return await func(client)\n except (DaemonConnectionError, DaemonNotRunningError):\n # Evict stale client and retry once\n workspace = current_workspace.get()\n canonical = _canonicalize_path(workspace)\n async with _pool_lock:\n _connection_pool.pop(canonical, None)\n # Retry\n client = await _get_client()\n return await func(client)\n```\n\nNO bounded backoff, NO health checks, NO version validation pings.","notes":"DEFERRED - Not needed for MVP.\n\nAdd to Phase 2 roadmap only if monitoring shows:\n- Stale socket errors \u003e1% of calls\n- User complaints about connection issues\n- Long-running MCP servers experiencing problems","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-25T14:00:36.252409-07:00","updated_at":"2025-10-25T14:24:58.5637-07:00","dependencies":[{"issue_id":"bd-142","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:42.132775-07:00","created_by":"daemon"}]}
{"id":"bd-140","title":"Add integration tests for multi-project MCP switching","description":"Comprehensive tests to verify multi-project isolation, concurrency, and edge cases.\n\nEXPANDED TEST COVERAGE (per architectural review):\n\n**Concurrency tests (CRITICAL):**\n- asyncio.gather() with calls to different workspace_root values\n- Verify no cross-project data leakage\n- Verify pool lock prevents race conditions\n\n**Edge case tests:**\n- Submodule handling: Parent repo vs submodule with own .beads\n- Symlink deduplication: Same physical path via different symlinks\n- Stale socket recovery: Kill daemon, verify retry on failure\n- Missing .beads directory handling\n\n**Isolation tests:**\n- Create 2+ temp repos with bd init\n- Verify operations in project A don't affect project B\n- Stress test: many parallel calls across 3-5 repos\n\nEstimated effort: M (1-2 days) including fixtures for temp repos and daemon process management","design":"Test structure:\n\n1. test_concurrent_multi_project.py:\n - asyncio.gather with 2+ projects\n - Verify pool lock prevents corruption\n \n2. test_path_canonicalization.py:\n - Submodule edge case (check .beads first)\n - Symlink deduplication (realpath normalization)\n \n3. test_stale_socket_recovery.py:\n - Kill daemon mid-session\n - Verify retry-on-failure works\n \n4. test_cross_project_isolation.py:\n - Create issues in project A\n - List from project B, verify empty\n - No data leakage\n\nUse pytest fixtures for temp repos and daemon lifecycle.","acceptance_criteria":"- All concurrency tests pass with asyncio.gather\n- Submodule edge case handled correctly\n- Symlinks deduplicated to same connection\n- Stale socket retry works\n- No cross-project data leakage in stress tests","status":"closed","priority":1,"issue_type":"task","assignee":"amp","created_at":"2025-10-25T14:00:27.896623-07:00","updated_at":"2025-10-25T14:35:13.09686-07:00","closed_at":"2025-10-25T14:35:13.09686-07:00","dependencies":[{"issue_id":"bd-140","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.90028-07:00","created_by":"daemon"}]}
{"id":"bd-141","title":"Update MCP multi-project documentation","description":"Update documentation for multi-project workflow in README.md, AGENTS.md, and MCP integration docs.\n\nEXPANDED SECTIONS (per architectural review):\n\n**Usage examples:**\n- Per-request workspace_root parameter usage\n- Concurrent multi-project queries with asyncio.gather\n- Migration from set_context() to workspace_root parameter\n\n**Architecture notes:**\n- Connection pooling behavior (no limits initially)\n- set_context() as default fallback (still supported)\n- Library users NOT affected (all changes in MCP layer)\n\n**Concurrency gotchas (CRITICAL):**\n- ContextVar doesn't propagate to asyncio.create_task()\n- Do NOT spawn background tasks in tool implementations\n- All tool calls should be synchronous/sequential\n\n**Troubleshooting:**\n- Stale sockets (retry once on failure)\n- Version mismatches (auto-detected since v0.16.0)\n- Path aliasing via symlinks (deduplicated by realpath)\n- Submodules with own .beads (handled correctly)\n\n**Use cases:**\n- Multi-organization collaboration (GH#145)\n- Parallel project management scripts\n- Cross-project queries","design":"Documentation structure:\n\n1. integrations/beads-mcp/README.md:\n - Add \"Multi-Project Support\" section\n - workspace_root parameter examples\n - Connection pool behavior\n \n2. AGENTS.md:\n - Update MCP section with workspace_root usage\n - Add concurrency warning (no spawned tasks)\n - Document library non-impact\n \n3. New: docs/MCP_MULTI_PROJECT.md:\n - Detailed architecture explanation\n - Migration guide from set_context()\n - Troubleshooting guide\n - Edge cases (submodules, symlinks)","status":"closed","priority":2,"issue_type":"task","assignee":"amp","created_at":"2025-10-25T14:00:27.897025-07:00","updated_at":"2025-10-25T14:35:55.392654-07:00","closed_at":"2025-10-25T14:35:55.392654-07:00","dependencies":[{"issue_id":"bd-141","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:27.901495-07:00","created_by":"daemon"}]}
{"id":"bd-142","title":"Add health checks and reconnection logic for stale sockets","description":"⚠️ DEFERRED TO PHASE 2 (per architectural review)\n\nAdd health checks and reconnection logic for stale sockets ONLY IF monitoring shows it's needed.\n\nSIMPLIFIED APPROACH:\n- NO preemptive health checks (adds latency to every call)\n- NO periodic ping before use (daemon restarts are rare)\n- YES: Single retry on connection failure (DaemonConnectionError)\n- YES: Evict stale client from pool on failure\n\nRationale:\n- Stale sockets are rare (daemon auto-restart is uncommon)\n- Preemptive checks add latency with little benefit\n- Retry-on-failure is sufficient for most cases\n\nImplementation (if needed later):\n- Wrap tool calls with try/except\n- On DaemonConnectionError: evict from pool, retry once\n- Log failures for monitoring\n\nMonitor after Phase 1 launch:\n- Frequency of stale socket errors\n- User reports of connection issues\n- Decision point: Add if \u003e1% of calls fail","design":"Simplified retry wrapper (implement only if monitoring shows need):\n\n```python\nasync def _robust_client_call(func):\n try:\n client = await _get_client()\n return await func(client)\n except (DaemonConnectionError, DaemonNotRunningError):\n # Evict stale client and retry once\n workspace = current_workspace.get()\n canonical = _canonicalize_path(workspace)\n async with _pool_lock:\n _connection_pool.pop(canonical, None)\n # Retry\n client = await _get_client()\n return await func(client)\n```\n\nNO bounded backoff, NO health checks, NO version validation pings.","notes":"DEFERRED - Not needed for MVP.\n\nAdd to Phase 2 roadmap only if monitoring shows:\n- Stale socket errors \u003e1% of calls\n- User complaints about connection issues\n- Long-running MCP servers experiencing problems","status":"closed","priority":3,"issue_type":"task","created_at":"2025-10-25T14:00:36.252409-07:00","updated_at":"2025-10-25T14:35:55.394617-07:00","closed_at":"2025-10-25T14:35:55.394617-07:00","dependencies":[{"issue_id":"bd-142","depends_on_id":"bd-135","type":"parent-child","created_at":"2025-10-25T14:00:42.132775-07:00","created_by":"daemon"}]}
{"id":"bd-143","title":"bd daemon auto-sync can wipe out issues.jsonl when database is empty","description":"During dogfooding session, bd daemon auto-sync exported empty database to JSONL, losing all 177 issues. Had to git restore to recover.\n\nRoot cause: bd export doesn't check if database is empty before exporting. When daemon has empty/wrong database, it wipes out valid JSONL file.\n\nImpact: DATA LOSS","design":"Add safeguard in bd export:\n1. Count total issues in database before export\n2. If count is 0, refuse to export and show error\n3. Provide --force flag to override if truly want empty export\n\nAlternative: Check if target JSONL exists and has issues, warn if about to replace with empty export","acceptance_criteria":"- bd export refuses to export when database has 0 issues\n- Clear error message: \"Refusing to export empty database (0 issues). Use --force to override.\"\n- --force flag allows override for intentional empty exports\n- Test: export with empty db fails, export with --force succeeds","status":"in_progress","priority":0,"issue_type":"bug","created_at":"2025-10-25T16:29:16.045548-07:00","updated_at":"2025-10-25T16:30:16.559585-07:00"}
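The safeguard described in bd-143's design boils down to a small predicate. A minimal sketch in Python (hypothetical helper names; the real implementation is the Go code later in this commit):

```python
def check_empty_export(db_count: int, jsonl_count: int, force: bool) -> None:
    """Sketch of the bd-143 safeguard (not bd's actual API).

    Refuse to overwrite a non-empty JSONL file with an empty database
    unless the caller explicitly opts in with --force.
    """
    if db_count == 0 and jsonl_count > 0 and not force:
        raise RuntimeError(
            "Refusing to export empty database (0 issues). Use --force to override."
        )
```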
{"id":"bd-144","title":"bd import doesn't update database modification time (WAL mode)","description":"When running bd import in WAL mode, the -wal file is updated but main .db file timestamp stays old. This breaks staleness detection which only checks main .db file.\n\nDiscovered during dogfooding when import didn't trigger staleness refresh.\n\nImpact: Staleness checks fail to detect that database is newer than expected","design":"Two options:\n1. Checkpoint WAL after import to flush changes to main .db file\n2. Update staleness detection to check both .db and -wal file timestamps\n\nOption 1 is simpler and safer - just add PRAGMA wal_checkpoint(FULL) after import completes","acceptance_criteria":"- After bd import, main .db file modification time is updated\n- Staleness detection correctly sees database as fresh\n- Test: import, check .db mtime, verify it's recent","status":"open","priority":1,"issue_type":"bug","created_at":"2025-10-25T16:29:16.048176-07:00","updated_at":"2025-10-25T16:29:16.048176-07:00"}
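Option 1 from bd-144 can be illustrated with Python's `sqlite3` (a sketch assuming a trivial `issues` table; bd itself does this in Go):

```python
import sqlite3

def import_rows_with_checkpoint(db_path: str, ids: list[str]) -> None:
    # Hypothetical sketch of option 1: after importing, force a FULL WAL
    # checkpoint so the writes land in the main .db file instead of
    # sitting in the -wal sidecar while the .db mtime stays stale.
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS issues (id TEXT PRIMARY KEY)")
    conn.executemany("INSERT OR REPLACE INTO issues VALUES (?)", [(i,) for i in ids])
    conn.commit()
    conn.execute("PRAGMA wal_checkpoint(FULL)")  # flush WAL into the main file
    conn.close()
```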
{"id":"bd-145","title":"bd should show which database file it's using","description":"During dogfooding, bd showed \"0 issues\" when correct database had 177 issues. Confusion arose from which database path was being used (daemon default vs explicit --db flag).\n\nUsers need clear feedback about which database file bd is actually using, especially when daemon is involved.\n\nImpact: User confusion, working with wrong database unknowingly","design":"Add database path to verbose output or as a bd info command:\n1. bd info shows current database path, daemon status\n2. OR: bd ready/list/etc --verbose shows \"Using database: /path/to/.beads/beads.db\"\n3. Consider adding to bd status output\n\nWhen database path differs from expected, show warning","acceptance_criteria":"- User can easily determine which database file bd is using\n- bd info or similar command shows full database path\n- When using unexpected database (e.g., daemon vs explicit --db), show clear indication\n- Documentation updated with how to check database path","status":"open","priority":1,"issue_type":"feature","created_at":"2025-10-25T16:29:16.059118-07:00","updated_at":"2025-10-25T16:29:16.059118-07:00"}
{"id":"bd-15","title":"Make merge command idempotent for safe retry after partial failures","description":"The merge command currently performs 3 operations without an outer transaction:\n1. Migrate dependencies from source → target\n2. Update text references across all issues\n3. Close source issues\n\nIf merge fails mid-operation (network issue, daemon crash, etc.), a retry will fail or produce incorrect results because some operations already succeeded.\n\n**Goal:** Make merge idempotent so retrying after partial failure is safe and completes the remaining work.\n\n**Idempotency checks needed:**\n- Skip dependency migration if target already has the dependency\n- Skip text reference updates if already updated\n- Skip closing source issues if already closed\n- Report which operations were skipped vs performed\n\n**Example output:**\n```\n✓ Merged 2 issue(s) into bd-63\n - Dependencies: 3 migrated, 2 already existed\n - Text references: 5 updated, 0 already correct\n - Source issues: 1 closed, 1 already closed\n```\n\n**Related:** bd-115 originally requested transaction support, but idempotency is a better solution for this use case since individual operations are already atomic.","design":"Current merge code already has some idempotency:\n- Dependency migration checks `alreadyExists` before adding (line ~145-151 in merge.go)\n- Text reference updates are naturally idempotent (replacing bd-X with bd-Y twice has same result)\n\nMissing idempotency:\n- CloseIssue fails if source already closed\n- Error messages don't distinguish \"already done\" from \"real failure\"\n\nImplementation:\n1. Check source issue status before closing - skip if already closed\n2. Track which operations succeeded/skipped\n3. Return detailed results for user visibility\n4. 
Consider adding --dry-run output showing what would be done vs skipped","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-22T00:47:43.165434-07:00","updated_at":"2025-10-24T13:51:54.437619-07:00","closed_at":"2025-10-22T11:56:36.526276-07:00"}
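The idempotent-close idea from bd-15 is easy to sketch (plain Python with a dict standing in for the store; names hypothetical):

```python
def close_sources_idempotently(issues: dict, source_ids: list[str]) -> dict:
    # Closing an already-closed source is a no-op, and the result
    # distinguishes work performed from work skipped, so retrying after
    # a partial failure is safe and reports honestly.
    closed, skipped = [], []
    for sid in source_ids:
        if issues[sid]["status"] == "closed":
            skipped.append(sid)
        else:
            issues[sid]["status"] = "closed"
            closed.append(sid)
    return {"closed": closed, "already_closed": skipped}
```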
{"id":"bd-16","title":"Global daemon should warn/reject --auto-commit and --auto-push","description":"When user runs 'bd daemon --global --auto-commit', it's unclear which repo the daemon will commit to (especially after fixing bd-62 where global daemon won't open a DB).\n\nOptions:\n1. Warn and ignore the flags in global mode\n2. Error out with clear message\n\nLine 87-91 already checks autoPush, but should skip check entirely for global mode. Add user-friendly messaging about flag incompatibility.","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-10-22T00:47:43.165645-07:00","updated_at":"2025-10-24T13:51:54.437812-07:00","closed_at":"2025-10-17T23:04:30.223432-07:00"}
{"id":"bd-17","title":"Add cross-repo issue references (future enhancement)","description":"Support referencing issues across different beads repositories. Useful for tracking dependencies between separate projects.\n\nProposed syntax:\n- Local reference: bd-63 (current behavior)\n- Cross-repo by path: ~/src/other-project#bd-456\n- Cross-repo by workspace name: @project2:bd-789\n\nUse cases:\n1. Frontend project depends on backend API issue\n2. Shared library changes blocking multiple projects\n3. System administrator tracking work across machines\n4. Monorepo with separate beads databases per component\n\nImplementation challenges:\n- Storage layer needs to query external databases\n- Dependency resolution across repos\n- What if external repo not available?\n- How to handle in JSONL export/import?\n- Security: should repos be able to read others?\n\nDesign questions to resolve first:\n1. Read-only references vs full cross-repo dependencies?\n2. How to handle repo renames/moves?\n3. Absolute paths vs workspace names vs git remotes?\n4. Should bd-38 auto-discover related repos?\n\nRecommendation: \n- Gather user feedback first\n- Start with read-only references\n- Implement as plugin/extension?\n\nContext: This is mentioned in bd-38 as approach #2. Much more complex than daemon multi-repo approach. Only implement if there's strong user demand.\n\nPriority: Backlog (4) - wait for user feedback before designing","status":"closed","priority":4,"issue_type":"feature","created_at":"2025-10-22T00:47:43.165857-07:00","updated_at":"2025-10-24T13:51:54.438011-07:00","closed_at":"2025-10-20T22:00:31.966891-07:00"}
@@ -689,6 +689,19 @@ func exportToJSONLWithStore(ctx context.Context, store storage.Storage, jsonlPat
        return fmt.Errorf("failed to get issues: %w", err)
    }

    // Safety check: prevent exporting empty database over non-empty JSONL
    if len(issues) == 0 {
        existingCount, err := countIssuesInJSONL(jsonlPath)
        if err != nil {
            // If we can't read the file, it might not exist yet, which is fine
            if !os.IsNotExist(err) {
                return fmt.Errorf("failed to read existing JSONL: %w", err)
            }
        } else if existingCount > 0 {
            return fmt.Errorf("refusing to export empty database over non-empty JSONL file (database: 0 issues, JSONL: %d issues). This would result in data loss", existingCount)
        }
    }

    // Sort by ID for consistent output
    sort.Slice(issues, func(i, j int) bool {
        return issues[i].ID < issues[j].ID
@@ -14,6 +14,31 @@ import (
    "github.com/steveyegge/beads/internal/types"
)

// countIssuesInJSONL counts the number of issues in a JSONL file
func countIssuesInJSONL(path string) (int, error) {
    file, err := os.Open(path)
    if err != nil {
        return 0, err
    }
    defer file.Close()

    count := 0
    decoder := json.NewDecoder(file)
    for {
        var issue types.Issue
        if err := decoder.Decode(&issue); err != nil {
            if errors.Is(err, io.EOF) { // idiomatic EOF check (needs "errors" and "io" imports)
                break
            }
            // If we hit a decode error, stop counting but return what we have
            // This handles partially corrupt files
            break
        }
        count++
    }
    return count, nil
}
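For reference, the same counting logic in Python (a line-oriented sketch; the Go version streams with `json.Decoder`, but both stop at the first corrupt record rather than failing):

```python
import json

def count_issues_in_jsonl(path: str) -> int:
    # Count decodable JSON records, tolerating a partially corrupt tail.
    count = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError:
                break  # stop counting, return what we have so far
            count += 1
    return count
```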
// validateExportPath checks if the output path is safe to write to
func validateExportPath(path string) error {
    // Get absolute path to normalize it

@@ -57,6 +82,7 @@ Output to stdout by default, or use -o flag for file output.`,
        format, _ := cmd.Flags().GetString("format")
        output, _ := cmd.Flags().GetString("output")
        statusFilter, _ := cmd.Flags().GetString("status")
        force, _ := cmd.Flags().GetBool("force")

        if format != "jsonl" {
            fmt.Fprintf(os.Stderr, "Error: only 'jsonl' format is currently supported\n")

@@ -95,6 +121,43 @@ Output to stdout by default, or use -o flag for file output.`,
            os.Exit(1)
        }

        // Safety check: prevent exporting empty database over non-empty JSONL
        if len(issues) == 0 && output != "" && !force {
            existingCount, err := countIssuesInJSONL(output)
            if err != nil {
                // If we can't read the file, it might not exist yet, which is fine
                if !os.IsNotExist(err) {
                    fmt.Fprintf(os.Stderr, "Warning: failed to read existing JSONL: %v\n", err)
                }
            } else if existingCount > 0 {
                fmt.Fprintf(os.Stderr, "Error: refusing to export empty database over non-empty JSONL file\n")
                fmt.Fprintf(os.Stderr, "  Database has 0 issues, JSONL has %d issues\n", existingCount)
                fmt.Fprintf(os.Stderr, "  This would result in data loss!\n")
                fmt.Fprintf(os.Stderr, "Hint: Use --force to override this safety check, or delete the JSONL file first:\n")
                fmt.Fprintf(os.Stderr, "  bd export -o %s --force\n", output)
                fmt.Fprintf(os.Stderr, "  rm %s\n", output)
                os.Exit(1)
            }
        }

        // Warning: check if export would lose >50% of issues
        if output != "" {
            existingCount, err := countIssuesInJSONL(output)
            if err == nil && existingCount > 0 {
                lossPercent := float64(existingCount-len(issues)) / float64(existingCount) * 100
                if lossPercent > 50 {
                    fmt.Fprintf(os.Stderr, "WARNING: Export would lose %.1f%% of issues!\n", lossPercent)
                    fmt.Fprintf(os.Stderr, "  Existing JSONL: %d issues\n", existingCount)
                    fmt.Fprintf(os.Stderr, "  Database: %d issues\n", len(issues))
                    fmt.Fprintf(os.Stderr, "  This suggests database staleness or corruption.\n")
                    fmt.Fprintf(os.Stderr, "Press Ctrl+C to abort, or Enter to continue: ")
                    // Read a line from stdin to wait for user confirmation
                    var response string
                    fmt.Scanln(&response)
                }
            }
        }

        // Sort by ID for consistent output
        sort.Slice(issues, func(i, j int) bool {
            return issues[i].ID < issues[j].ID

@@ -206,5 +269,6 @@ func init() {
    exportCmd.Flags().StringP("format", "f", "jsonl", "Export format (jsonl)")
    exportCmd.Flags().StringP("output", "o", "", "Output file (default: stdout)")
    exportCmd.Flags().StringP("status", "s", "", "Filter by status")
    exportCmd.Flags().Bool("force", false, "Force export even if database is empty")
    rootCmd.AddCommand(exportCmd)
}

@@ -12,6 +12,8 @@ import (
    "github.com/steveyegge/beads/internal/types"
)

func TestExportCommand(t *testing.T) {
    tmpDir, err := os.MkdirTemp("", "bd-test-export-*")
    if err != nil {

@@ -198,4 +200,58 @@ func TestExportCommand(t *testing.T) {
        // Just verify the function doesn't panic with Windows-style paths
        _ = validateExportPath("C:\\Windows\\system32\\test.jsonl")
    })

    t.Run("prevent exporting empty database over non-empty JSONL", func(t *testing.T) {
        exportPath := filepath.Join(tmpDir, "export_empty_check.jsonl")

        // First, create a JSONL file with issues
        file, err := os.Create(exportPath)
        if err != nil {
            t.Fatalf("Failed to create JSONL: %v", err)
        }
        encoder := json.NewEncoder(file)
        for _, issue := range issues {
            if err := encoder.Encode(issue); err != nil {
                t.Fatalf("Failed to encode issue: %v", err)
            }
        }
        file.Close()

        // Verify file has issues
        count, err := countIssuesInJSONL(exportPath)
        if err != nil {
            t.Fatalf("Failed to count issues: %v", err)
        }
        if count != 2 {
            t.Fatalf("Expected 2 issues in JSONL, got %d", count)
        }

        // Create empty database
        emptyDBPath := filepath.Join(tmpDir, "empty.db")
        emptyStore, err := sqlite.New(emptyDBPath)
        if err != nil {
            t.Fatalf("Failed to create empty store: %v", err)
        }
        defer emptyStore.Close()

        // Test using exportToJSONLWithStore directly (daemon code path)
        err = exportToJSONLWithStore(ctx, emptyStore, exportPath)
        if err == nil {
            t.Error("Expected error when exporting empty database over non-empty JSONL")
        } else {
            expectedMsg := "refusing to export empty database over non-empty JSONL file (database: 0 issues, JSONL: 2 issues). This would result in data loss"
            if err.Error() != expectedMsg {
                t.Errorf("Unexpected error message:\nGot: %q\nExpected: %q", err.Error(), expectedMsg)
            }
        }

        // Verify JSONL file is unchanged
        countAfter, err := countIssuesInJSONL(exportPath)
        if err != nil {
            t.Fatalf("Failed to count issues after failed export: %v", err)
        }
        if countAfter != 2 {
            t.Errorf("JSONL file was modified! Expected 2 issues, got %d", countAfter)
        }
    })
}

@@ -288,6 +288,29 @@ func exportToJSONL(ctx context.Context, jsonlPath string) error {
        return fmt.Errorf("failed to get issues: %w", err)
    }

    // Safety check: prevent exporting empty database over non-empty JSONL
    if len(issues) == 0 {
        existingCount, countErr := countIssuesInJSONL(jsonlPath)
        if countErr != nil {
            // If we can't read the file, it might not exist yet, which is fine
            if !os.IsNotExist(countErr) {
                fmt.Fprintf(os.Stderr, "Warning: failed to read existing JSONL: %v\n", countErr)
            }
        } else if existingCount > 0 {
            return fmt.Errorf("refusing to export empty database over non-empty JSONL file (database: 0 issues, JSONL: %d issues)", existingCount)
        }
    }

    // Warning: check if export would lose >50% of issues
    existingCount, err := countIssuesInJSONL(jsonlPath)
    if err == nil && existingCount > 0 {
        lossPercent := float64(existingCount-len(issues)) / float64(existingCount) * 100
        if lossPercent > 50 {
            fmt.Fprintf(os.Stderr, "WARNING: Export would lose %.1f%% of issues (existing: %d, database: %d)\n",
                lossPercent, existingCount, len(issues))
        }
    }

    // Sort by ID for consistent output
    sort.Slice(issues, func(i, j int) bool {
        return issues[i].ID < issues[j].ID

@@ -130,12 +130,85 @@ Configure separate MCP servers for specific projects using `BEADS_WORKING_DIR`:

⚠️ **Problem**: AI may select the wrong MCP server for your workspace, causing commands to operate on the wrong database. Use a single MCP server instead.

## Multi-Project Support

The MCP server supports managing multiple beads projects in a single session using per-request workspace routing.

### Using `workspace_root` Parameter

Every tool accepts an optional `workspace_root` parameter for explicit project targeting:

```python
# Query issues from different projects concurrently
results = await asyncio.gather(
    beads_ready_work(workspace_root="/Users/you/project-a"),
    beads_ready_work(workspace_root="/Users/you/project-b"),
)

# Create issue in specific project
await beads_create_issue(
    title="Fix auth bug",
    priority=1,
    workspace_root="/Users/you/project-a"
)
```

### Architecture

**Connection Pool**: The MCP server maintains a connection pool keyed by canonical workspace path:
- Each workspace gets its own daemon socket connection
- Paths are canonicalized (symlinks resolved, git toplevel detected)
- Concurrent requests use `asyncio.Lock` to prevent race conditions
- No LRU eviction (keeps all connections open for the session)
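The pool described above fits in a few lines (hypothetical names; the actual implementation is the `_get_client` diff later in this commit):

```python
import asyncio

_pool: dict[str, object] = {}
_pool_lock = asyncio.Lock()

async def get_pooled_client(canonical: str, factory):
    # One client per canonical workspace path; the lock prevents two
    # concurrent requests from racing to create duplicate connections.
    async with _pool_lock:
        if canonical not in _pool:
            _pool[canonical] = factory(canonical)
        return _pool[canonical]
```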

**ContextVar Routing**: Per-request workspace context is managed via Python's `ContextVar`:
- Each tool call sets the workspace for its duration
- Properly isolated for concurrent calls (no cross-contamination)
- Falls back to `BEADS_WORKING_DIR` if `workspace_root` is not provided

**Path Canonicalization**:
- Symlinks are resolved to physical paths (prevents duplicate connections)
- Git submodules with `.beads` directories use local context
- Git toplevel is used for non-initialized directories
- Results are cached for performance

### Backward Compatibility

The `set_context()` tool still works and sets a default workspace:

```python
# Old way (still supported)
await set_context(workspace_root="/Users/you/project-a")
await beads_ready_work()  # Uses project-a

# New way (more flexible)
await beads_ready_work(workspace_root="/Users/you/project-a")
```

### Concurrency Gotchas

⚠️ **IMPORTANT**: Tool implementations must NOT spawn background tasks using `asyncio.create_task()`.

**Why?** A spawned task runs with a *copy* of the context captured at creation time; it never sees later changes (including the end-of-request reset), so a background task can keep operating on a stale workspace and leak data across projects.

**Solution**: Keep all tool logic synchronous or use sequential `await` calls.

### Troubleshooting

**Symlink aliasing**: Different paths to the same project are deduplicated automatically via `realpath`.

**Submodule handling**: Submodules with their own `.beads` directory are treated as separate projects.

**Stale sockets**: Currently no health checks. Phase 2 will add retry-on-failure if monitoring shows the need.

**Version mismatches**: Daemon version is auto-checked since v0.16.0. Mismatched daemons are automatically restarted.

## Features

**Resource:**
- `beads://quickstart` - Quickstart guide for using beads

**Tools:**
**Tools (all support `workspace_root` parameter):**
- `init` - Initialize bd in current directory
- `create` - Create new issue (bug, feature, task, epic, chore)
- `list` - List issues with filters (status, priority, type, assignee)

@@ -147,6 +220,7 @@ Configure separate MCP servers for specific projects using `BEADS_WORKING_DIR`:
- `blocked` - Get blocked issues
- `stats` - Get project statistics
- `reopen` - Reopen a closed issue with optional reason
- `set_context` - Set default workspace for subsequent calls (backward compatibility)

## Development

@@ -26,6 +26,7 @@ from beads_mcp.tools import (
    beads_show_issue,
    beads_stats,
    beads_update_issue,
    current_workspace,  # ContextVar for per-request workspace routing
)

# Setup logging for lifecycle events

@@ -96,9 +97,43 @@ signal.signal(signal.SIGINT, signal_handler)
logger.info("beads-mcp server initialized with lifecycle management")


def with_workspace(func: Callable[..., T]) -> Callable[..., T]:
    """Decorator to set workspace context for the duration of a tool call.

    Extracts workspace_root parameter from tool call kwargs, resolves it,
    and sets current_workspace ContextVar for the request duration.
    Falls back to BEADS_WORKING_DIR if workspace_root not provided.

    This enables per-request workspace routing for multi-project support.
    """
    @wraps(func)
    async def wrapper(*args, **kwargs):
        # Extract workspace_root parameter (if provided)
        workspace_root = kwargs.get('workspace_root')

        # Determine workspace: parameter > env > None
        workspace = workspace_root or os.environ.get("BEADS_WORKING_DIR")

        # Set ContextVar for this request
        token = current_workspace.set(workspace)

        try:
            # Execute tool with workspace context set
            return await func(*args, **kwargs)
        finally:
            # Always reset ContextVar after tool completes
            current_workspace.reset(token)

    return wrapper


def require_context(func: Callable[..., T]) -> Callable[..., T]:
    """Decorator to enforce context has been set before write operations.

    Passes if either:
    - workspace_root was provided on tool call (via ContextVar), OR
    - BEADS_WORKING_DIR is set (from set_context)

    Only enforces if BEADS_REQUIRE_CONTEXT=1 is set in environment.
    This allows backward compatibility while adding safety for multi-repo setups.
    """

@@ -106,9 +141,11 @@ def require_context(func: Callable[..., T]) -> Callable[..., T]:
    async def wrapper(*args, **kwargs):
        # Only enforce if explicitly enabled
        if os.environ.get("BEADS_REQUIRE_CONTEXT") == "1":
            if not os.environ.get("BEADS_CONTEXT_SET"):
            # Check ContextVar or environment
            workspace = current_workspace.get() or os.environ.get("BEADS_WORKING_DIR")
            if not workspace:
                raise ValueError(
                    "Context not set. Call set_context with your workspace root before performing write operations."
                    "Context not set. Either provide workspace_root parameter or call set_context() first."
                )
        return await func(*args, **kwargs)
    return wrapper

@@ -243,6 +280,7 @@ async def where_am_i(workspace_root: str | None = None) -> str:

# Register all tools
@mcp.tool(name="ready", description="Find tasks that have no blockers and are ready to be worked on.")
@with_workspace
async def ready_work(
    limit: int = 10,
    priority: int | None = None,

@@ -257,6 +295,7 @@ async def ready_work(
    name="list",
    description="List all issues with optional filters (status, priority, type, assignee).",
)
@with_workspace
async def list_issues(
    status: IssueStatus | None = None,
    priority: int | None = None,

@@ -279,6 +318,7 @@ async def list_issues(
    name="show",
    description="Show detailed information about a specific issue including dependencies and dependents.",
)
@with_workspace
async def show_issue(issue_id: str, workspace_root: str | None = None) -> Issue:
    """Show detailed information about a specific issue."""
    return await beads_show_issue(issue_id=issue_id)

@@ -289,6 +329,7 @@ async def show_issue(issue_id: str, workspace_root: str | None = None) -> Issue:
    description="""Create a new issue (bug, feature, task, epic, or chore) with optional design,
    acceptance criteria, and dependencies.""",
)
@with_workspace
@require_context
async def create_issue(
    title: str,

@@ -325,6 +366,7 @@ async def create_issue(
    description="""Update an existing issue's status, priority, assignee, description, design notes,
    or acceptance criteria. Use this to claim work (set status=in_progress).""",
)
@with_workspace
@require_context
async def update_issue(
    issue_id: str,

@@ -363,6 +405,7 @@ async def update_issue(
    name="close",
    description="Close (complete) an issue. Mark work as done when you've finished implementing/fixing it.",
)
@with_workspace
@require_context
async def close_issue(issue_id: str, reason: str = "Completed", workspace_root: str | None = None) -> list[Issue]:
    """Close (complete) an issue."""

@@ -373,6 +416,7 @@ async def close_issue(issue_id: str, reason: str = "Completed", workspace_root:
    name="reopen",
    description="Reopen one or more closed issues. Sets status to 'open' and clears closed_at timestamp.",
)
@with_workspace
@require_context
async def reopen_issue(issue_ids: list[str], reason: str | None = None, workspace_root: str | None = None) -> list[Issue]:
    """Reopen one or more closed issues."""

@@ -384,6 +428,7 @@ async def reopen_issue(issue_ids: list[str], reason: str | None = None, workspac
    description="""Add a dependency between issues. Types: blocks (hard blocker),
    related (soft link), parent-child (epic/subtask), discovered-from (found during work).""",
)
@with_workspace
@require_context
async def add_dependency(
    issue_id: str,

@@ -403,6 +448,7 @@ async def add_dependency(
    name="stats",
    description="Get statistics: total issues, open, in_progress, closed, blocked, ready, and average lead time.",
)
@with_workspace
async def stats(workspace_root: str | None = None) -> Stats:
    """Get statistics about tasks."""
    return await beads_stats()

@@ -412,6 +458,7 @@ async def stats(workspace_root: str | None = None) -> Stats:
    name="blocked",
    description="Get blocked issues showing what dependencies are blocking them from being worked on.",
)
@with_workspace
async def blocked(workspace_root: str | None = None) -> list[BlockedIssue]:
    """Get blocked issues."""
    return await beads_blocked()

@@ -422,6 +469,7 @@ async def blocked(workspace_root: str | None = None) -> list[BlockedIssue]:
    description="""Initialize bd in current directory. Creates .beads/ directory and
    database with optional custom prefix for issue IDs.""",
)
@with_workspace
@require_context
async def init(prefix: str | None = None, workspace_root: str | None = None) -> str:
    """Initialize bd in current directory."""

@@ -432,6 +480,7 @@ async def init(prefix: str | None = None, workspace_root: str | None = None) ->
    name="debug_env",
    description="Debug tool: Show environment and working directory information",
)
@with_workspace
async def debug_env(workspace_root: str | None = None) -> str:
    """Debug tool to check working directory and environment variables."""
    info = []
@@ -1,6 +1,10 @@
"""MCP tools for beads issue tracker."""

import asyncio
import os
import subprocess
from contextvars import ContextVar
from functools import lru_cache
from typing import Annotated, TYPE_CHECKING

from .bd_client import create_bd_client, BdClientBase, BdError

@@ -25,10 +29,15 @@ from .models import (
    UpdateIssueParams,
)

# Global client instance - initialized on first use
_client: BdClientBase | None = None
_version_checked: bool = False
_client_registered: bool = False
# ContextVar for request-scoped workspace routing
current_workspace: ContextVar[str | None] = ContextVar('workspace', default=None)

# Connection pool for per-project daemon sockets
_connection_pool: dict[str, BdClientBase] = {}
_pool_lock = asyncio.Lock()

# Version checking state (per-pool client)
_version_checked: set[str] = set()

# Default constants
DEFAULT_ISSUE_TYPE: IssueType = "task"

@@ -50,41 +59,108 @@ def _register_client_for_cleanup(client: BdClientBase) -> None:
    pass

async def _get_client() -> BdClientBase:
|
||||
"""Get a BdClient instance, creating it on first use.
|
||||
def _resolve_workspace_root(path: str) -> str:
|
||||
"""Resolve workspace root to git repo root if inside a git repo.
|
||||
|
||||
Args:
|
||||
path: Directory path to resolve
|
||||
|
||||
Returns:
|
||||
Git repo root if inside git repo, otherwise the original path
|
||||
"""
|
||||
try:
|
||||
result = subprocess.run(
|
||||
["git", "rev-parse", "--show-toplevel"],
|
||||
cwd=path,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
check=False,
|
||||
)
|
||||
if result.returncode == 0:
|
||||
return result.stdout.strip()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
return os.path.abspath(path)
|
||||
|
||||
Performs version check on first initialization.
|
||||
|
||||
@lru_cache(maxsize=128)
|
||||
def _canonicalize_path(path: str) -> str:
|
||||
"""Canonicalize workspace path to handle symlinks and git repos.
|
||||
|
||||
This ensures that different paths pointing to the same project
|
||||
(e.g., via symlinks) use the same daemon connection.
|
||||
|
||||
Args:
|
||||
path: Workspace directory path
|
||||
|
||||
Returns:
|
||||
Canonical path (handles symlinks and submodules correctly)
|
||||
"""
|
||||
# 1. Resolve symlinks
|
||||
real = os.path.realpath(path)
|
||||
|
||||
# 2. Check for local .beads directory (submodule edge case)
|
||||
# Submodules should use their own .beads, not the parent repo's
|
||||
if os.path.exists(os.path.join(real, ".beads")):
|
||||
return real
|
||||
|
||||
# 3. Try to find git toplevel
|
||||
# This ensures we connect to the right daemon for the git repo
|
||||
return _resolve_workspace_root(real)
|
||||


async def _get_client() -> BdClientBase:
    """Get a BdClient instance for the current workspace.

    Uses connection pool to manage per-project daemon sockets.
    Workspace is determined by current_workspace ContextVar or BEADS_WORKING_DIR env.

    Performs version check on first connection to each workspace.
    Uses daemon client if available, falls back to CLI client.

    Returns:
        Configured BdClientBase instance for the current workspace

    Raises:
        BdError: If no workspace is set, or bd is not installed, or version is incompatible
    """
    # Determine workspace from ContextVar or environment
    workspace = current_workspace.get() or os.environ.get("BEADS_WORKING_DIR")
    if not workspace:
        raise BdError(
            "No workspace set. Either provide workspace_root parameter or call set_context() first."
        )

    # Canonicalize path to handle symlinks and deduplicate connections
    canonical = _canonicalize_path(workspace)

    # Thread-safe connection pool access
    async with _pool_lock:
        if canonical not in _connection_pool:
            # Create new client for this workspace
            use_daemon = os.environ.get("BEADS_USE_DAEMON", "1") == "1"

            client = create_bd_client(
                prefer_daemon=use_daemon,
                working_dir=canonical,
            )

            # Register for cleanup
            _register_client_for_cleanup(client)

            # Add to pool
            _connection_pool[canonical] = client

        client = _connection_pool[canonical]

        # Check version once per workspace (only for CLI client)
        if canonical not in _version_checked:
            if hasattr(client, '_check_version'):
                await client._check_version()
            _version_checked.add(canonical)

    return client


async def beads_ready_work(

459  integrations/beads-mcp/tests/test_multi_project_switching.py  Normal file

@@ -0,0 +1,459 @@
"""Integration tests for multi-project MCP context switching.

Tests verify:
- Concurrent multi-project calls work correctly
- No cross-project data leakage
- Path canonicalization handles symlinks and submodules
- Connection pool prevents race conditions
"""

import asyncio
import os
import tempfile
from pathlib import Path
from unittest.mock import AsyncMock, patch

import pytest

from beads_mcp.tools import (
    _canonicalize_path,
    _connection_pool,
    _pool_lock,
    _resolve_workspace_root,
    beads_create_issue,
    beads_list_issues,
    beads_ready_work,
    current_workspace,
)

@pytest.fixture(autouse=True)
def reset_connection_pool():
    """Reset connection pool before each test."""
    _connection_pool.clear()
    yield
    _connection_pool.clear()


@pytest.fixture
def temp_projects():
    """Create temporary project directories for testing."""
    with tempfile.TemporaryDirectory() as tmpdir:
        project_a = Path(tmpdir) / "project_a"
        project_b = Path(tmpdir) / "project_b"
        project_c = Path(tmpdir) / "project_c"

        project_a.mkdir()
        project_b.mkdir()
        project_c.mkdir()

        # Create .beads directories to simulate initialized projects
        (project_a / ".beads").mkdir()
        (project_b / ".beads").mkdir()
        (project_c / ".beads").mkdir()

        yield {
            "a": str(project_a),
            "b": str(project_b),
            "c": str(project_c),
        }


@pytest.fixture
def mock_client_factory():
    """Factory for creating mock clients per project."""
    clients = {}

    def get_mock_client(workspace_root: str):
        if workspace_root not in clients:
            client = AsyncMock()
            client.ready = AsyncMock(return_value=[])
            client.list_issues = AsyncMock(return_value=[])
            client.create = AsyncMock(return_value={
                "id": f"issue-{len(clients)}",
                "title": "Test",
                "workspace": workspace_root,
            })
            clients[workspace_root] = client
        return clients[workspace_root]

    return get_mock_client, clients


class TestConcurrentMultiProject:
    """Test concurrent access to multiple projects."""

    @pytest.mark.asyncio
    async def test_concurrent_calls_different_projects(self, temp_projects, mock_client_factory):
        """Test concurrent calls to different projects use different clients."""
        get_mock, clients = mock_client_factory

        async def call_ready(workspace: str):
            """Call ready with workspace context set."""
            token = current_workspace.set(workspace)
            try:
                with patch("beads_mcp.tools.create_bd_client", side_effect=lambda **kwargs: get_mock(kwargs["working_dir"])):
                    return await beads_ready_work()
            finally:
                current_workspace.reset(token)

        # Call all three projects concurrently
        results = await asyncio.gather(
            call_ready(temp_projects["a"]),
            call_ready(temp_projects["b"]),
            call_ready(temp_projects["c"]),
        )

        # Verify we created separate clients for each project
        assert len(clients) == 3
        # Note: paths might be canonicalized (e.g., /var -> /private/var on macOS)
        canonical_a = os.path.realpath(temp_projects["a"])
        canonical_b = os.path.realpath(temp_projects["b"])
        canonical_c = os.path.realpath(temp_projects["c"])
        assert canonical_a in clients
        assert canonical_b in clients
        assert canonical_c in clients

        # Verify each client was called
        for client in clients.values():
            client.ready.assert_called_once()

    @pytest.mark.asyncio
    async def test_concurrent_calls_same_project_reuse_client(self, temp_projects, mock_client_factory):
        """Test concurrent calls to same project reuse the same client."""
        get_mock, clients = mock_client_factory

        async def call_ready(workspace: str):
            """Call ready with workspace context set."""
            token = current_workspace.set(workspace)
            try:
                with patch("beads_mcp.tools.create_bd_client", side_effect=lambda **kwargs: get_mock(kwargs["working_dir"])):
                    return await beads_ready_work()
            finally:
                current_workspace.reset(token)

        # Call same project multiple times concurrently
        results = await asyncio.gather(
            call_ready(temp_projects["a"]),
            call_ready(temp_projects["a"]),
            call_ready(temp_projects["a"]),
        )

        # Verify only one client created (connection pool working)
        assert len(clients) == 1
        canonical_a = os.path.realpath(temp_projects["a"])
        assert canonical_a in clients

        # Verify client was called 3 times
        clients[canonical_a].ready.assert_called()
        assert clients[canonical_a].ready.call_count == 3

    @pytest.mark.asyncio
    async def test_pool_lock_prevents_race_conditions(self, temp_projects, mock_client_factory):
        """Test that pool lock prevents race conditions during client creation."""
        get_mock, clients = mock_client_factory

        # Track creation count (the lock should ensure only 1)
        creation_count = []

        async def call_with_delay(workspace: str):
            """Call ready and track concurrent creation attempts."""
            token = current_workspace.set(workspace)
            try:
                def slow_create(**kwargs):
                    """Slow client creation to expose race conditions."""
                    creation_count.append(1)
                    import time
                    time.sleep(0.01)  # Simulate slow creation
                    return get_mock(kwargs["working_dir"])

                with patch("beads_mcp.tools.create_bd_client", side_effect=slow_create):
                    return await beads_ready_work()
            finally:
                current_workspace.reset(token)

        # Race: three calls to same project
        await asyncio.gather(
            call_with_delay(temp_projects["a"]),
            call_with_delay(temp_projects["a"]),
            call_with_delay(temp_projects["a"]),
        )

        # Pool lock should ensure only one client created
        assert len(clients) == 1
        # Only one creation should have happened (due to pool lock)
        assert len(creation_count) == 1


class TestPathCanonicalization:
    """Test path canonicalization for symlinks and submodules."""

    def test_canonicalize_with_beads_dir(self, temp_projects):
        """Test canonicalization prefers local .beads directory."""
        project_a = temp_projects["a"]

        # Should return the project path itself (has .beads)
        canonical = _canonicalize_path(project_a)
        assert canonical == os.path.realpath(project_a)

    def test_canonicalize_symlink_deduplication(self):
        """Test symlinks to same directory deduplicate to same canonical path."""
        with tempfile.TemporaryDirectory() as tmpdir:
            real_dir = Path(tmpdir) / "real"
            real_dir.mkdir()
            (real_dir / ".beads").mkdir()

            symlink = Path(tmpdir) / "symlink"
            symlink.symlink_to(real_dir)

            # Both paths should canonicalize to same path
            canonical_real = _canonicalize_path(str(real_dir))
            canonical_symlink = _canonicalize_path(str(symlink))

            assert canonical_real == canonical_symlink
            assert canonical_real == str(real_dir.resolve())

    def test_canonicalize_submodule_with_beads(self):
        """Test submodule with own .beads uses local directory, not parent."""
        with tempfile.TemporaryDirectory() as tmpdir:
            parent = Path(tmpdir) / "parent"
            parent.mkdir()
            (parent / ".beads").mkdir()

            submodule = parent / "submodule"
            submodule.mkdir()
            (submodule / ".beads").mkdir()

            # Submodule should use its own .beads, not parent's
            canonical = _canonicalize_path(str(submodule))
            assert canonical == str(submodule.resolve())
            # NOT parent's path
            assert canonical != str(parent.resolve())

    def test_canonicalize_no_beads_uses_git_toplevel(self):
        """Test path without .beads falls back to git toplevel."""
        with tempfile.TemporaryDirectory() as tmpdir:
            project = Path(tmpdir) / "project"
            project.mkdir()

            # Mock git toplevel to return project dir
            with patch("beads_mcp.tools.subprocess.run") as mock_run:
                mock_run.return_value.returncode = 0
                mock_run.return_value.stdout = str(project)

                canonical = _canonicalize_path(str(project))

                # Should use git toplevel
                assert canonical == str(project)
                mock_run.assert_called_once()

    def test_resolve_workspace_root_git_repo(self):
        """Test _resolve_workspace_root returns git toplevel."""
        with tempfile.TemporaryDirectory() as tmpdir:
            project = Path(tmpdir) / "repo"
            project.mkdir()

            with patch("beads_mcp.tools.subprocess.run") as mock_run:
                mock_run.return_value.returncode = 0
                mock_run.return_value.stdout = str(project)

                resolved = _resolve_workspace_root(str(project))

                assert resolved == str(project)

    def test_resolve_workspace_root_not_git(self):
        """Test _resolve_workspace_root falls back to abspath if not git repo."""
        with tempfile.TemporaryDirectory() as tmpdir:
            project = Path(tmpdir) / "not-git"
            project.mkdir()

            with patch("beads_mcp.tools.subprocess.run") as mock_run:
                mock_run.return_value.returncode = 1
                mock_run.return_value.stdout = ""

                resolved = _resolve_workspace_root(str(project))

                # Compare as realpath to handle macOS /var -> /private/var
                assert os.path.realpath(resolved) == os.path.realpath(str(project))


class TestCrossProjectIsolation:
    """Test that projects don't leak data to each other."""

    @pytest.mark.asyncio
    async def test_no_cross_project_data_leakage(self, temp_projects, mock_client_factory):
        """Test operations in project A don't affect project B."""
        get_mock, clients = mock_client_factory

        # Mock different responses for each project
        canonical_a = os.path.realpath(temp_projects["a"])
        canonical_b = os.path.realpath(temp_projects["b"])

        def create_client_with_data(**kwargs):
            client = get_mock(kwargs["working_dir"])
            workspace = os.path.realpath(kwargs["working_dir"])

            # Project A returns issues, B returns empty
            if workspace == canonical_a:
                client.list_issues = AsyncMock(return_value=[
                    {"id": "a-1", "title": "Issue from A"}
                ])
            else:
                client.list_issues = AsyncMock(return_value=[])

            return client

        async def list_from_project(workspace: str):
            token = current_workspace.set(workspace)
            try:
                with patch("beads_mcp.tools.create_bd_client", side_effect=create_client_with_data):
                    return await beads_list_issues()
            finally:
                current_workspace.reset(token)

        # List from both projects
        issues_a = await list_from_project(temp_projects["a"])
        issues_b = await list_from_project(temp_projects["b"])

        # Project A has issues, B is empty
        assert len(issues_a) == 1
        assert issues_a[0]["id"] == "a-1"
        assert len(issues_b) == 0

    @pytest.mark.asyncio
    async def test_stress_test_many_parallel_calls(self, temp_projects, mock_client_factory):
        """Stress test: many parallel calls across multiple repos."""
        get_mock, clients = mock_client_factory

        async def random_call(workspace: str, call_id: int):
            """Random call to project."""
            token = current_workspace.set(workspace)
            try:
                with patch("beads_mcp.tools.create_bd_client", side_effect=lambda **kwargs: get_mock(kwargs["working_dir"])):
                    # Alternate between ready and list calls
                    if call_id % 2 == 0:
                        return await beads_ready_work()
                    else:
                        return await beads_list_issues()
            finally:
                current_workspace.reset(token)

        # 100 parallel calls across 3 projects
        workspaces = [temp_projects["a"], temp_projects["b"], temp_projects["c"]]
        tasks = [
            random_call(workspaces[i % 3], i)
            for i in range(100)
        ]

        results = await asyncio.gather(*tasks)

        # Verify all calls completed
        assert len(results) == 100

        # Verify only 3 clients created (one per project)
        assert len(clients) == 3


class TestContextVarBehavior:
    """Test ContextVar behavior and edge cases."""

    @pytest.mark.asyncio
    async def test_contextvar_isolated_per_request(self, temp_projects):
        """Test ContextVar is isolated per async call."""

        async def get_current_workspace():
            """Get current workspace from ContextVar."""
            return current_workspace.get()

        # Set different contexts in parallel
        async def call_with_context(workspace: str):
            token = current_workspace.set(workspace)
            try:
                # Simulate some async work
                await asyncio.sleep(0.01)
                return await get_current_workspace()
            finally:
                current_workspace.reset(token)

        results = await asyncio.gather(
            call_with_context(temp_projects["a"]),
            call_with_context(temp_projects["b"]),
            call_with_context(temp_projects["c"]),
        )

        # Each call should see its own workspace
        assert temp_projects["a"] in results
        assert temp_projects["b"] in results
        assert temp_projects["c"] in results

    @pytest.mark.asyncio
    async def test_contextvar_reset_after_call(self, temp_projects):
        """Test ContextVar is properly reset after call."""
        # No context initially
        assert current_workspace.get() is None

        token = current_workspace.set(temp_projects["a"])
        assert current_workspace.get() == temp_projects["a"]

        current_workspace.reset(token)
        assert current_workspace.get() is None

    @pytest.mark.asyncio
    async def test_contextvar_fallback_to_env(self, temp_projects):
        """Test ContextVar falls back to BEADS_WORKING_DIR."""
        # Set env var
        canonical_a = os.path.realpath(temp_projects["a"])
        os.environ["BEADS_WORKING_DIR"] = temp_projects["a"]

        try:
            # ContextVar not set, should use env
            with patch("beads_mcp.tools.create_bd_client") as mock_create:
                mock_client = AsyncMock()
                mock_client.ready = AsyncMock(return_value=[])
                mock_create.return_value = mock_client

                await beads_ready_work()

                # Should have created client with env workspace (canonicalized)
                mock_create.assert_called_once()
                assert os.path.realpath(mock_create.call_args.kwargs["working_dir"]) == canonical_a
        finally:
            os.environ.pop("BEADS_WORKING_DIR", None)


class TestEdgeCases:
    """Test edge cases and error handling."""

    @pytest.mark.asyncio
    async def test_no_workspace_raises_error(self):
        """Test calling without workspace raises helpful error."""
        # Clear env
        os.environ.pop("BEADS_WORKING_DIR", None)

        # No ContextVar set, no env var
        with pytest.raises(Exception) as exc_info:
            with patch("beads_mcp.tools.create_bd_client") as mock_create:
                await beads_ready_work()

        assert "No workspace set" in str(exc_info.value)

    def test_canonicalize_path_cached(self, temp_projects):
        """Test path canonicalization is cached for performance."""
        # Clear cache
        _canonicalize_path.cache_clear()

        # First call
        result1 = _canonicalize_path(temp_projects["a"])

        # Second call should hit cache
        result2 = _canonicalize_path(temp_projects["a"])

        assert result1 == result2

        # Verify cache hit
        cache_info = _canonicalize_path.cache_info()
        assert cache_info.hits >= 1
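The ContextVar tests above rely on one property of asyncio: each task spawned by `asyncio.gather` gets its own copy of the context, so a value set inside one task is invisible to its siblings. A minimal standalone model (illustrative `workspace` var and `run_in` helper, not the module's `current_workspace`):

```python
import asyncio
from contextvars import ContextVar

workspace: ContextVar[str] = ContextVar("workspace", default="")

async def run_in(ws: str) -> str:
    token = workspace.set(ws)
    try:
        await asyncio.sleep(0.01)  # other tasks run here with their own values
        return workspace.get()     # still this task's value, not a sibling's
    finally:
        workspace.reset(token)

async def main() -> list[str]:
    return list(await asyncio.gather(run_in("/a"), run_in("/b"), run_in("/c")))

print(sorted(asyncio.run(main())))  # ['/a', '/b', '/c']
```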
2  integrations/beads-mcp/uv.lock  generated

@@ -58,7 +58,7 @@ wheels = [

[[package]]
name = "beads-mcp"
-version = "0.14.0"
+version = "0.17.0"
source = { editable = "." }
dependencies = [
    { name = "fastmcp" },