Fix bd-143: Prevent daemon auto-sync from wiping out issues.jsonl with empty database
- Added safety check to exportToJSONLWithStore (daemon path)
- Refuses to export 0 issues over non-empty JSONL file
- Added --force flag to override safety check when intentional
- Added test coverage for empty database export protection
- Prevents data loss when daemon has wrong/empty database

Amp-Thread-ID: https://ampcode.com/threads/T-de18e0ad-bd17-46ec-994b-0581e257dcde
Co-authored-by: Amp <amp@ampcode.com>
@@ -130,12 +130,85 @@ Configure separate MCP servers for specific projects using `BEADS_WORKING_DIR`:

⚠️ **Problem**: AI may select the wrong MCP server for your workspace, causing commands to operate on the wrong database. Use a single MCP server instead.

## Multi-Project Support

The MCP server supports managing multiple beads projects in a single session using per-request workspace routing.

### Using `workspace_root` Parameter

Every tool accepts an optional `workspace_root` parameter for explicit project targeting:

```python
# Query issues from different projects concurrently
results = await asyncio.gather(
    beads_ready_work(workspace_root="/Users/you/project-a"),
    beads_ready_work(workspace_root="/Users/you/project-b"),
)

# Create issue in a specific project
await beads_create_issue(
    title="Fix auth bug",
    priority=1,
    workspace_root="/Users/you/project-a"
)
```

### Architecture

**Connection Pool**: The MCP server maintains a connection pool keyed by canonical workspace path:
- Each workspace gets its own daemon socket connection
- Paths are canonicalized (symlinks resolved, git toplevel detected)
- Concurrent requests use `asyncio.Lock` to prevent race conditions
- No LRU eviction (keeps all connections open for the session)

**ContextVar Routing**: Per-request workspace context is managed via Python's `ContextVar`:
- Each tool call sets the workspace for its duration
- Properly isolated for concurrent calls (no cross-contamination)
- Falls back to `BEADS_WORKING_DIR` if `workspace_root` not provided

**Path Canonicalization**:
- Symlinks are resolved to physical paths (prevents duplicate connections)
- Git submodules with `.beads` directories use local context
- Git toplevel is used for non-initialized directories
- Results are cached for performance
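
Roughly, the lookup order sketches as follows (`canonicalize` here is a simplified stand-in: step 3's git-toplevel fallback is stubbed out):

```python
import os
import tempfile

def canonicalize(path: str) -> str:
    # 1. Resolve symlinks to a physical path.
    real = os.path.realpath(path)
    # 2. A directory carrying its own .beads is its own project root.
    if os.path.isdir(os.path.join(real, ".beads")):
        return real
    # 3. The real code would fall back to `git rev-parse --show-toplevel` here.
    return real

with tempfile.TemporaryDirectory() as tmp:
    project = os.path.join(tmp, "project")
    os.makedirs(os.path.join(project, ".beads"))
    alias = os.path.join(tmp, "alias")
    os.symlink(project, alias)
    # Both spellings collapse to one connection-pool key.
    assert canonicalize(alias) == canonicalize(project)
```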

### Backward Compatibility

The `set_context()` tool still works and sets a default workspace:

```python
# Old way (still supported)
await set_context(workspace_root="/Users/you/project-a")
await beads_ready_work()  # Uses project-a

# New way (more flexible)
await beads_ready_work(workspace_root="/Users/you/project-a")
```

### Concurrency Gotchas

⚠️ **IMPORTANT**: Tool implementations must NOT spawn background tasks using `asyncio.create_task()`.

**Why?** A spawned task runs in a *copy* of the context captured at spawn time, so it is detached from the request's workspace lifecycle; a task that outlives its request can operate on a stale workspace and cause cross-project data leakage.

**Solution**: Keep all tool logic synchronous or use sequential `await` calls.

### Troubleshooting

**Symlink aliasing**: Different paths to the same project are deduplicated automatically via `realpath`.

**Submodule handling**: Submodules with their own `.beads` directory are treated as separate projects.

**Stale sockets**: Currently no health checks. Phase 2 will add retry-on-failure if monitoring shows need.

**Version mismatches**: Daemon version is auto-checked since v0.16.0. Mismatched daemons are automatically restarted.

## Features

**Resource:**
- `beads://quickstart` - Quickstart guide for using beads

**Tools:**
**Tools (all support `workspace_root` parameter):**
- `init` - Initialize bd in current directory
- `create` - Create new issue (bug, feature, task, epic, chore)
- `list` - List issues with filters (status, priority, type, assignee)
@@ -147,6 +220,7 @@ Configure separate MCP servers for specific projects using `BEADS_WORKING_DIR`:
- `blocked` - Get blocked issues
- `stats` - Get project statistics
- `reopen` - Reopen a closed issue with optional reason
- `set_context` - Set default workspace for subsequent calls (backward compatibility)


## Development

@@ -26,6 +26,7 @@ from beads_mcp.tools import (
    beads_show_issue,
    beads_stats,
    beads_update_issue,
    current_workspace,  # ContextVar for per-request workspace routing
)

# Setup logging for lifecycle events
@@ -96,9 +97,43 @@ signal.signal(signal.SIGINT, signal_handler)
logger.info("beads-mcp server initialized with lifecycle management")


def with_workspace(func: Callable[..., T]) -> Callable[..., T]:
    """Decorator to set workspace context for the duration of a tool call.

    Extracts workspace_root parameter from tool call kwargs, resolves it,
    and sets current_workspace ContextVar for the request duration.
    Falls back to BEADS_WORKING_DIR if workspace_root not provided.

    This enables per-request workspace routing for multi-project support.
    """
    @wraps(func)
    async def wrapper(*args, **kwargs):
        # Extract workspace_root parameter (if provided)
        workspace_root = kwargs.get('workspace_root')

        # Determine workspace: parameter > env > None
        workspace = workspace_root or os.environ.get("BEADS_WORKING_DIR")

        # Set ContextVar for this request
        token = current_workspace.set(workspace)

        try:
            # Execute tool with workspace context set
            return await func(*args, **kwargs)
        finally:
            # Always reset ContextVar after tool completes
            current_workspace.reset(token)

    return wrapper


def require_context(func: Callable[..., T]) -> Callable[..., T]:
    """Decorator to enforce context has been set before write operations.

    Passes if either:
    - workspace_root was provided on tool call (via ContextVar), OR
    - BEADS_WORKING_DIR is set (from set_context)

    Only enforces if BEADS_REQUIRE_CONTEXT=1 is set in environment.
    This allows backward compatibility while adding safety for multi-repo setups.
    """
@@ -106,9 +141,11 @@ def require_context(func: Callable[..., T]) -> Callable[..., T]:
    async def wrapper(*args, **kwargs):
        # Only enforce if explicitly enabled
        if os.environ.get("BEADS_REQUIRE_CONTEXT") == "1":
            if not os.environ.get("BEADS_CONTEXT_SET"):
                # Check ContextVar or environment
                workspace = current_workspace.get() or os.environ.get("BEADS_WORKING_DIR")
                if not workspace:
                    raise ValueError(
                        "Context not set. Call set_context with your workspace root before performing write operations."
                        "Context not set. Either provide workspace_root parameter or call set_context() first."
                    )
        return await func(*args, **kwargs)
    return wrapper

@@ -243,6 +280,7 @@ async def where_am_i(workspace_root: str | None = None) -> str:

# Register all tools
@mcp.tool(name="ready", description="Find tasks that have no blockers and are ready to be worked on.")
@with_workspace
async def ready_work(
    limit: int = 10,
    priority: int | None = None,
@@ -257,6 +295,7 @@ async def ready_work(
    name="list",
    description="List all issues with optional filters (status, priority, type, assignee).",
)
@with_workspace
async def list_issues(
    status: IssueStatus | None = None,
    priority: int | None = None,
@@ -279,6 +318,7 @@ async def list_issues(
    name="show",
    description="Show detailed information about a specific issue including dependencies and dependents.",
)
@with_workspace
async def show_issue(issue_id: str, workspace_root: str | None = None) -> Issue:
    """Show detailed information about a specific issue."""
    return await beads_show_issue(issue_id=issue_id)
@@ -289,6 +329,7 @@ async def show_issue(issue_id: str, workspace_root: str | None = None) -> Issue:
    description="""Create a new issue (bug, feature, task, epic, or chore) with optional design,
    acceptance criteria, and dependencies.""",
)
@with_workspace
@require_context
async def create_issue(
    title: str,
@@ -325,6 +366,7 @@ async def create_issue(
    description="""Update an existing issue's status, priority, assignee, description, design notes,
    or acceptance criteria. Use this to claim work (set status=in_progress).""",
)
@with_workspace
@require_context
async def update_issue(
    issue_id: str,
@@ -363,6 +405,7 @@ async def update_issue(
    name="close",
    description="Close (complete) an issue. Mark work as done when you've finished implementing/fixing it.",
)
@with_workspace
@require_context
async def close_issue(issue_id: str, reason: str = "Completed", workspace_root: str | None = None) -> list[Issue]:
    """Close (complete) an issue."""
@@ -373,6 +416,7 @@ async def close_issue(issue_id: str, reason: str = "Completed", workspace_root:
    name="reopen",
    description="Reopen one or more closed issues. Sets status to 'open' and clears closed_at timestamp.",
)
@with_workspace
@require_context
async def reopen_issue(issue_ids: list[str], reason: str | None = None, workspace_root: str | None = None) -> list[Issue]:
    """Reopen one or more closed issues."""
@@ -384,6 +428,7 @@ async def reopen_issue(issue_ids: list[str], reason: str | None = None, workspac
    description="""Add a dependency between issues. Types: blocks (hard blocker),
    related (soft link), parent-child (epic/subtask), discovered-from (found during work).""",
)
@with_workspace
@require_context
async def add_dependency(
    issue_id: str,
@@ -403,6 +448,7 @@ async def add_dependency(
    name="stats",
    description="Get statistics: total issues, open, in_progress, closed, blocked, ready, and average lead time.",
)
@with_workspace
async def stats(workspace_root: str | None = None) -> Stats:
    """Get statistics about tasks."""
    return await beads_stats()
@@ -412,6 +458,7 @@ async def stats(workspace_root: str | None = None) -> Stats:
    name="blocked",
    description="Get blocked issues showing what dependencies are blocking them from being worked on.",
)
@with_workspace
async def blocked(workspace_root: str | None = None) -> list[BlockedIssue]:
    """Get blocked issues."""
    return await beads_blocked()
@@ -422,6 +469,7 @@ async def blocked(workspace_root: str | None = None) -> list[BlockedIssue]:
    description="""Initialize bd in current directory. Creates .beads/ directory and
    database with optional custom prefix for issue IDs.""",
)
@with_workspace
@require_context
async def init(prefix: str | None = None, workspace_root: str | None = None) -> str:
    """Initialize bd in current directory."""
@@ -432,6 +480,7 @@ async def init(prefix: str | None = None, workspace_root: str | None = None) ->
    name="debug_env",
    description="Debug tool: Show environment and working directory information",
)
@with_workspace
async def debug_env(workspace_root: str | None = None) -> str:
    """Debug tool to check working directory and environment variables."""
    info = []
@@ -1,6 +1,10 @@
"""MCP tools for beads issue tracker."""

import asyncio
import os
import subprocess
from contextvars import ContextVar
from functools import lru_cache
from typing import Annotated, TYPE_CHECKING

from .bd_client import create_bd_client, BdClientBase, BdError
@@ -25,10 +29,15 @@ from .models import (
    UpdateIssueParams,
)

# Global client instance - initialized on first use
_client: BdClientBase | None = None
_version_checked: bool = False
_client_registered: bool = False
# ContextVar for request-scoped workspace routing
current_workspace: ContextVar[str | None] = ContextVar('workspace', default=None)

# Connection pool for per-project daemon sockets
_connection_pool: dict[str, BdClientBase] = {}
_pool_lock = asyncio.Lock()

# Version checking state (per-pool client)
_version_checked: set[str] = set()

# Default constants
DEFAULT_ISSUE_TYPE: IssueType = "task"
@@ -50,41 +59,108 @@ def _register_client_for_cleanup(client: BdClientBase) -> None:
    pass


async def _get_client() -> BdClientBase:
    """Get a BdClient instance, creating it on first use.
def _resolve_workspace_root(path: str) -> str:
    """Resolve workspace root to git repo root if inside a git repo.

    Args:
        path: Directory path to resolve

    Returns:
        Git repo root if inside git repo, otherwise the original path
    """
    try:
        result = subprocess.run(
            ["git", "rev-parse", "--show-toplevel"],
            cwd=path,
            capture_output=True,
            text=True,
            check=False,
        )
        if result.returncode == 0:
            return result.stdout.strip()
    except Exception:
        pass

    return os.path.abspath(path)

    Performs version check on first initialization.

@lru_cache(maxsize=128)
def _canonicalize_path(path: str) -> str:
    """Canonicalize workspace path to handle symlinks and git repos.

    This ensures that different paths pointing to the same project
    (e.g., via symlinks) use the same daemon connection.

    Args:
        path: Workspace directory path

    Returns:
        Canonical path (handles symlinks and submodules correctly)
    """
    # 1. Resolve symlinks
    real = os.path.realpath(path)

    # 2. Check for local .beads directory (submodule edge case)
    # Submodules should use their own .beads, not the parent repo's
    if os.path.exists(os.path.join(real, ".beads")):
        return real

    # 3. Try to find git toplevel
    # This ensures we connect to the right daemon for the git repo
    return _resolve_workspace_root(real)


async def _get_client() -> BdClientBase:
    """Get a BdClient instance for the current workspace.

    Uses connection pool to manage per-project daemon sockets.
    Workspace is determined by current_workspace ContextVar or BEADS_WORKING_DIR env.

    Performs version check on first connection to each workspace.
    Uses daemon client if available, falls back to CLI client.

    Returns:
        Configured BdClientBase instance (config loaded automatically)
        Configured BdClientBase instance for the current workspace

    Raises:
        BdError: If bd is not installed or version is incompatible
        BdError: If no workspace is set, or bd is not installed, or version is incompatible
    """
    global _client, _version_checked, _client_registered
    if _client is None:
        # Check if daemon should be used (default: yes)
        use_daemon = os.environ.get("BEADS_USE_DAEMON", "1") == "1"
        workspace_root = os.environ.get("BEADS_WORKING_DIR")

        _client = create_bd_client(
            prefer_daemon=use_daemon,
            working_dir=workspace_root
    # Determine workspace from ContextVar or environment
    workspace = current_workspace.get() or os.environ.get("BEADS_WORKING_DIR")
    if not workspace:
        raise BdError(
            "No workspace set. Either provide workspace_root parameter or call set_context() first."
        )

        # Register for cleanup on first creation
        if not _client_registered:
            _register_client_for_cleanup(_client)
            _client_registered = True

    # Canonicalize path to handle symlinks and deduplicate connections
    canonical = _canonicalize_path(workspace)

    # Thread-safe connection pool access
    async with _pool_lock:
        if canonical not in _connection_pool:
            # Create new client for this workspace
            use_daemon = os.environ.get("BEADS_USE_DAEMON", "1") == "1"

            client = create_bd_client(
                prefer_daemon=use_daemon,
                working_dir=canonical
            )

            # Register for cleanup
            _register_client_for_cleanup(client)

            # Add to pool
            _connection_pool[canonical] = client

        client = _connection_pool[canonical]

        # Check version once per workspace (only for CLI client)
        if canonical not in _version_checked:
            if hasattr(client, '_check_version'):
                await client._check_version()
            _version_checked.add(canonical)

    # Check version once per server lifetime (only for CLI client)
    if not _version_checked:
        if hasattr(_client, '_check_version'):
            await _client._check_version()
        _version_checked = True

    return _client
    return client


async def beads_ready_work(

459 integrations/beads-mcp/tests/test_multi_project_switching.py Normal file
@@ -0,0 +1,459 @@
"""Integration tests for multi-project MCP context switching.

Tests verify:
- Concurrent multi-project calls work correctly
- No cross-project data leakage
- Path canonicalization handles symlinks and submodules
- Connection pool prevents race conditions
"""

import asyncio
import os
import tempfile
from pathlib import Path
from unittest.mock import AsyncMock, patch

import pytest

from beads_mcp.tools import (
    _canonicalize_path,
    _connection_pool,
    _pool_lock,
    _resolve_workspace_root,
    beads_create_issue,
    beads_list_issues,
    beads_ready_work,
    current_workspace,
)


@pytest.fixture(autouse=True)
def reset_connection_pool():
    """Reset connection pool before each test."""
    _connection_pool.clear()
    yield
    _connection_pool.clear()


@pytest.fixture
def temp_projects():
    """Create temporary project directories for testing."""
    with tempfile.TemporaryDirectory() as tmpdir:
        project_a = Path(tmpdir) / "project_a"
        project_b = Path(tmpdir) / "project_b"
        project_c = Path(tmpdir) / "project_c"

        project_a.mkdir()
        project_b.mkdir()
        project_c.mkdir()

        # Create .beads directories to simulate initialized projects
        (project_a / ".beads").mkdir()
        (project_b / ".beads").mkdir()
        (project_c / ".beads").mkdir()

        yield {
            "a": str(project_a),
            "b": str(project_b),
            "c": str(project_c),
        }


@pytest.fixture
def mock_client_factory():
    """Factory for creating mock clients per project."""
    clients = {}

    def get_mock_client(workspace_root: str):
        if workspace_root not in clients:
            client = AsyncMock()
            client.ready = AsyncMock(return_value=[])
            client.list_issues = AsyncMock(return_value=[])
            client.create = AsyncMock(return_value={
                "id": f"issue-{len(clients)}",
                "title": "Test",
                "workspace": workspace_root,
            })
            clients[workspace_root] = client
        return clients[workspace_root]

    return get_mock_client, clients


class TestConcurrentMultiProject:
    """Test concurrent access to multiple projects."""

    @pytest.mark.asyncio
    async def test_concurrent_calls_different_projects(self, temp_projects, mock_client_factory):
        """Test concurrent calls to different projects use different clients."""
        get_mock, clients = mock_client_factory

        async def call_ready(workspace: str):
            """Call ready with workspace context set."""
            token = current_workspace.set(workspace)
            try:
                with patch("beads_mcp.tools.create_bd_client", side_effect=lambda **kwargs: get_mock(kwargs["working_dir"])):
                    return await beads_ready_work()
            finally:
                current_workspace.reset(token)

        # Call all three projects concurrently
        results = await asyncio.gather(
            call_ready(temp_projects["a"]),
            call_ready(temp_projects["b"]),
            call_ready(temp_projects["c"]),
        )

        # Verify we created separate clients for each project
        assert len(clients) == 3
        # Note: paths might be canonicalized (e.g., /var -> /private/var on macOS)
        canonical_a = os.path.realpath(temp_projects["a"])
        canonical_b = os.path.realpath(temp_projects["b"])
        canonical_c = os.path.realpath(temp_projects["c"])
        assert canonical_a in clients
        assert canonical_b in clients
        assert canonical_c in clients

        # Verify each client was called
        for client in clients.values():
            client.ready.assert_called_once()

    @pytest.mark.asyncio
    async def test_concurrent_calls_same_project_reuse_client(self, temp_projects, mock_client_factory):
        """Test concurrent calls to same project reuse the same client."""
        get_mock, clients = mock_client_factory

        async def call_ready(workspace: str):
            """Call ready with workspace context set."""
            token = current_workspace.set(workspace)
            try:
                with patch("beads_mcp.tools.create_bd_client", side_effect=lambda **kwargs: get_mock(kwargs["working_dir"])):
                    return await beads_ready_work()
            finally:
                current_workspace.reset(token)

        # Call same project multiple times concurrently
        results = await asyncio.gather(
            call_ready(temp_projects["a"]),
            call_ready(temp_projects["a"]),
            call_ready(temp_projects["a"]),
        )

        # Verify only one client created (connection pool working)
        assert len(clients) == 1
        canonical_a = os.path.realpath(temp_projects["a"])
        assert canonical_a in clients

        # Verify client was called 3 times
        clients[canonical_a].ready.assert_called()
        assert clients[canonical_a].ready.call_count == 3

    @pytest.mark.asyncio
    async def test_pool_lock_prevents_race_conditions(self, temp_projects, mock_client_factory):
        """Test that pool lock prevents race conditions during client creation."""
        get_mock, clients = mock_client_factory

        # Track creation count (the lock should ensure only 1)
        creation_count = []

        async def call_with_delay(workspace: str):
            """Call ready and track concurrent creation attempts."""
            token = current_workspace.set(workspace)
            try:
                def slow_create(**kwargs):
                    """Slow client creation to expose race conditions."""
                    creation_count.append(1)
                    import time
                    time.sleep(0.01)  # Simulate slow creation
                    return get_mock(kwargs["working_dir"])

                with patch("beads_mcp.tools.create_bd_client", side_effect=slow_create):
                    return await beads_ready_work()
            finally:
                current_workspace.reset(token)

        # Race: three calls to same project
        await asyncio.gather(
            call_with_delay(temp_projects["a"]),
            call_with_delay(temp_projects["a"]),
            call_with_delay(temp_projects["a"]),
        )

        # Pool lock should ensure only one client created
        assert len(clients) == 1
        # Only one creation should have happened (due to pool lock)
        assert len(creation_count) == 1


class TestPathCanonicalization:
    """Test path canonicalization for symlinks and submodules."""

    def test_canonicalize_with_beads_dir(self, temp_projects):
        """Test canonicalization prefers local .beads directory."""
        project_a = temp_projects["a"]

        # Should return the project path itself (has .beads)
        canonical = _canonicalize_path(project_a)
        assert canonical == os.path.realpath(project_a)

    def test_canonicalize_symlink_deduplication(self):
        """Test symlinks to same directory deduplicate to same canonical path."""
        with tempfile.TemporaryDirectory() as tmpdir:
            real_dir = Path(tmpdir) / "real"
            real_dir.mkdir()
            (real_dir / ".beads").mkdir()

            symlink = Path(tmpdir) / "symlink"
            symlink.symlink_to(real_dir)

            # Both paths should canonicalize to same path
            canonical_real = _canonicalize_path(str(real_dir))
            canonical_symlink = _canonicalize_path(str(symlink))

            assert canonical_real == canonical_symlink
            assert canonical_real == str(real_dir.resolve())

    def test_canonicalize_submodule_with_beads(self):
        """Test submodule with own .beads uses local directory, not parent."""
        with tempfile.TemporaryDirectory() as tmpdir:
            parent = Path(tmpdir) / "parent"
            parent.mkdir()
            (parent / ".beads").mkdir()

            submodule = parent / "submodule"
            submodule.mkdir()
            (submodule / ".beads").mkdir()

            # Submodule should use its own .beads, not parent's
            canonical = _canonicalize_path(str(submodule))
            assert canonical == str(submodule.resolve())
            # NOT parent's path
            assert canonical != str(parent.resolve())

    def test_canonicalize_no_beads_uses_git_toplevel(self):
        """Test path without .beads falls back to git toplevel."""
        with tempfile.TemporaryDirectory() as tmpdir:
            project = Path(tmpdir) / "project"
            project.mkdir()

            # Mock git toplevel to return project dir
            with patch("beads_mcp.tools.subprocess.run") as mock_run:
                mock_run.return_value.returncode = 0
                mock_run.return_value.stdout = str(project)

                canonical = _canonicalize_path(str(project))

                # Should use git toplevel
                assert canonical == str(project)
                mock_run.assert_called_once()

    def test_resolve_workspace_root_git_repo(self):
        """Test _resolve_workspace_root returns git toplevel."""
        with tempfile.TemporaryDirectory() as tmpdir:
            project = Path(tmpdir) / "repo"
            project.mkdir()

            with patch("beads_mcp.tools.subprocess.run") as mock_run:
                mock_run.return_value.returncode = 0
                mock_run.return_value.stdout = str(project)

                resolved = _resolve_workspace_root(str(project))

                assert resolved == str(project)

    def test_resolve_workspace_root_not_git(self):
        """Test _resolve_workspace_root falls back to abspath if not git repo."""
        with tempfile.TemporaryDirectory() as tmpdir:
            project = Path(tmpdir) / "not-git"
            project.mkdir()

            with patch("beads_mcp.tools.subprocess.run") as mock_run:
                mock_run.return_value.returncode = 1
                mock_run.return_value.stdout = ""

                resolved = _resolve_workspace_root(str(project))

                # Compare as realpath to handle macOS /var -> /private/var
                assert os.path.realpath(resolved) == os.path.realpath(str(project))


class TestCrossProjectIsolation:
    """Test that projects don't leak data to each other."""

    @pytest.mark.asyncio
    async def test_no_cross_project_data_leakage(self, temp_projects, mock_client_factory):
        """Test operations in project A don't affect project B."""
        get_mock, clients = mock_client_factory

        # Mock different responses for each project
        canonical_a = os.path.realpath(temp_projects["a"])
        canonical_b = os.path.realpath(temp_projects["b"])

        def create_client_with_data(**kwargs):
            client = get_mock(kwargs["working_dir"])
            workspace = os.path.realpath(kwargs["working_dir"])

            # Project A returns issues, B returns empty
            if workspace == canonical_a:
                client.list_issues = AsyncMock(return_value=[
                    {"id": "a-1", "title": "Issue from A"}
                ])
            else:
                client.list_issues = AsyncMock(return_value=[])

            return client

        async def list_from_project(workspace: str):
            token = current_workspace.set(workspace)
            try:
                with patch("beads_mcp.tools.create_bd_client", side_effect=create_client_with_data):
                    return await beads_list_issues()
            finally:
                current_workspace.reset(token)

        # List from both projects
        issues_a = await list_from_project(temp_projects["a"])
        issues_b = await list_from_project(temp_projects["b"])

        # Project A has issues, B is empty
        assert len(issues_a) == 1
        assert issues_a[0]["id"] == "a-1"
        assert len(issues_b) == 0

    @pytest.mark.asyncio
    async def test_stress_test_many_parallel_calls(self, temp_projects, mock_client_factory):
        """Stress test: many parallel calls across multiple repos."""
        get_mock, clients = mock_client_factory

        async def random_call(workspace: str, call_id: int):
            """Random call to project."""
            token = current_workspace.set(workspace)
            try:
                with patch("beads_mcp.tools.create_bd_client", side_effect=lambda **kwargs: get_mock(kwargs["working_dir"])):
                    # Alternate between ready and list calls
                    if call_id % 2 == 0:
                        return await beads_ready_work()
                    else:
                        return await beads_list_issues()
            finally:
                current_workspace.reset(token)

        # 100 parallel calls across 3 projects
        workspaces = [temp_projects["a"], temp_projects["b"], temp_projects["c"]]
        tasks = [
            random_call(workspaces[i % 3], i)
            for i in range(100)
        ]

        results = await asyncio.gather(*tasks)

        # Verify all calls completed
        assert len(results) == 100

        # Verify only 3 clients created (one per project)
        assert len(clients) == 3


class TestContextVarBehavior:
    """Test ContextVar behavior and edge cases."""

    @pytest.mark.asyncio
    async def test_contextvar_isolated_per_request(self, temp_projects):
        """Test ContextVar is isolated per async call."""

        async def get_current_workspace():
            """Get current workspace from ContextVar."""
            return current_workspace.get()

        # Set different contexts in parallel
        async def call_with_context(workspace: str):
            token = current_workspace.set(workspace)
            try:
                # Simulate some async work
                await asyncio.sleep(0.01)
                return await get_current_workspace()
            finally:
                current_workspace.reset(token)

        results = await asyncio.gather(
            call_with_context(temp_projects["a"]),
            call_with_context(temp_projects["b"]),
            call_with_context(temp_projects["c"]),
        )

        # Each call should see its own workspace
        assert temp_projects["a"] in results
        assert temp_projects["b"] in results
        assert temp_projects["c"] in results

    @pytest.mark.asyncio
    async def test_contextvar_reset_after_call(self, temp_projects):
        """Test ContextVar is properly reset after call."""
        # No context initially
        assert current_workspace.get() is None

        token = current_workspace.set(temp_projects["a"])
        assert current_workspace.get() == temp_projects["a"]

        current_workspace.reset(token)
        assert current_workspace.get() is None

    @pytest.mark.asyncio
    async def test_contextvar_fallback_to_env(self, temp_projects):
        """Test ContextVar falls back to BEADS_WORKING_DIR."""
        import os

        # Set env var
        canonical_a = os.path.realpath(temp_projects["a"])
        os.environ["BEADS_WORKING_DIR"] = temp_projects["a"]

        try:
            # ContextVar not set, should use env
            with patch("beads_mcp.tools.create_bd_client") as mock_create:
                mock_client = AsyncMock()
                mock_client.ready = AsyncMock(return_value=[])
                mock_create.return_value = mock_client

                await beads_ready_work()

                # Should have created client with env workspace (canonicalized)
                mock_create.assert_called_once()
                assert os.path.realpath(mock_create.call_args.kwargs["working_dir"]) == canonical_a
        finally:
            os.environ.pop("BEADS_WORKING_DIR", None)


class TestEdgeCases:
    """Test edge cases and error handling."""

    @pytest.mark.asyncio
    async def test_no_workspace_raises_error(self):
        """Test calling without workspace raises helpful error."""
        import os

        # Clear env
        os.environ.pop("BEADS_WORKING_DIR", None)

        # No ContextVar set, no env var
        with pytest.raises(Exception) as exc_info:
            with patch("beads_mcp.tools.create_bd_client") as mock_create:
                await beads_ready_work()

        assert "No workspace set" in str(exc_info.value)

    def test_canonicalize_path_cached(self, temp_projects):
        """Test path canonicalization is cached for performance."""
        # Clear cache
        _canonicalize_path.cache_clear()

        # First call
        result1 = _canonicalize_path(temp_projects["a"])

        # Second call should hit cache
        result2 = _canonicalize_path(temp_projects["a"])

        assert result1 == result2

        # Verify cache hit
        cache_info = _canonicalize_path.cache_info()
        assert cache_info.hits >= 1
2 integrations/beads-mcp/uv.lock generated
@@ -58,7 +58,7 @@ wheels = [

[[package]]
name = "beads-mcp"
version = "0.14.0"
version = "0.17.0"
source = { editable = "." }
dependencies = [
    { name = "fastmcp" },