Fix MCP server integration tests - add env variable propagation
- Fix BdCliClient._run_command() to pass BEADS_DIR/BEADS_DB env vars to subprocess
- Update temp_db fixture to create .beads in workspace and return .beads dir path
- Update mcp_client fixture to use beads_dir and enable BEADS_NO_DAEMON mode
- Simplify test_init_tool to work with connection pool architecture

Result: All 19 integration tests now pass (was 19 failing)

Amp-Thread-ID: https://ampcode.com/threads/T-c1da2a55-f086-4284-9c85-0dfa4c479cf9
Co-authored-by: Amp <amp@ampcode.com>
@@ -294,7 +294,6 @@
{"id":"bd-mlcz","content_hash":"35434fed7e20e8b67f7b598c46ce5dc65c111de237ea38b36a88a671b023b58b","title":"Implement bd migrate command","description":"Add bd migrate command to move issues between repos with filtering. Should support: filtering by status/priority/labels, dry-run mode, preserving dependencies, handling source_repo field updates.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-05T18:04:29.902151-08:00","updated_at":"2025-11-05T18:42:52.536951-08:00","closed_at":"2025-11-05T18:42:52.536951-08:00","source_repo":".","dependencies":[{"issue_id":"bd-mlcz","depends_on_id":"bd-8rd","type":"parent-child","created_at":"2025-11-05T18:04:39.072312-08:00","created_by":"daemon"}]}
{"id":"bd-mn9p","content_hash":"89752e08d5a278ed301c655753e07bc2b564e710ce6d9e7e535bbd9ec48af182","title":"bd-hv01: Brittle string comparison breaks with JSON field reordering","description":"## Problem\ndeletion_tracking.go:125 uses string comparison to detect unchanged issues:\n\n```go\nif leftLine, existsInLeft := leftIndex[id]; existsInLeft \u0026\u0026 leftLine == baseLine {\n deletions = append(deletions, id)\n}\n```\n\nThis breaks if:\n- JSON field order changes (legal in JSON)\n- Timestamps updated by import/export\n- Whitespace/formatting changes\n- Floating point precision varies\n\n## Example Failure\n```json\n// baseLine\n{\"id\":\"bd-1\",\"priority\":1,\"status\":\"open\"}\n// leftLine (same data, different order)\n{\"id\":\"bd-1\",\"status\":\"open\",\"priority\":1}\n```\nThese are semantically identical but string comparison fails.\n\n## Fix\nParse and compare JSON semantically:\n```go\nfunc jsonEquals(a, b string) bool {\n var objA, objB map[string]interface{}\n json.Unmarshal([]byte(a), \u0026objA)\n json.Unmarshal([]byte(b), \u0026objB)\n return reflect.DeepEqual(objA, objB)\n}\n```\n\n## Files Affected\n- cmd/bd/deletion_tracking.go:125\n- cmd/bd/deletion_tracking.go:134-170 (buildIDToLineMap)","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-06T18:15:35.090716-08:00","updated_at":"2025-11-06T18:46:55.889888-08:00","closed_at":"2025-11-06T18:46:55.889888-08:00","source_repo":".","dependencies":[{"issue_id":"bd-mn9p","depends_on_id":"bd-rbxi","type":"parent-child","created_at":"2025-11-06T18:19:14.790898-08:00","created_by":"daemon"}]}
{"id":"bd-my64","content_hash":"8f4eb8056f81096e7090813f319b3aa996ada6dc5809d81305271d0584c2f364","title":"Pre-push hook and daemon export produce different JSONL","description":"After committing and pushing, git status shows .beads/beads.jsonl as dirty. Investigation shows:\n\n1. Pre-push hook ran successfully and exported DB → JSONL\n2. Push completed\n3. Shortly after, daemon exported DB → JSONL again with different content\n4. Diff shows comments added to old issues (bd-23a8, bd-6049, bd-87a0)\n\nTimeline:\n- Commit c731c45 \"Update beads JSONL\"\n- Pre-push hook exported JSONL\n- Push succeeded\n- Daemon PID 33314 exported again with different content\n\nQuestions:\n1. Did someone run a command between commit and daemon export?\n2. Is there a timing issue where pre-push hook doesn't capture all DB changes?\n3. Should pre-commit hook flush daemon changes before committing?\n\nThe comments appear to be from Nov 5 (created_at: 2025-11-05T08:38:46Z) but are only appearing in JSONL now. This suggests the DB had these comments but they weren't exported during pre-push.\n\nPossible causes:\n- Pre-push hook uses BEADS_NO_DAEMON=1 which might skip pending writes\n- Daemon has unflushed changes in memory\n- Race condition between pre-push export and daemon's periodic export","notes":"Improved fix based on oracle code review:\n1. Pre-push now flushes pending changes first (prevents debounce race)\n2. Uses git status --porcelain to catch all change types\n3. Handles both beads.jsonl and issues.jsonl\n4. Works even if bd not installed (git-only check)","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-06T18:49:54.570993-08:00","updated_at":"2025-11-06T19:01:14.549032-08:00","closed_at":"2025-11-06T18:57:42.710282-08:00","source_repo":"."}
{"id":"bd-n449","content_hash":"23f97088bca298f24d56312ee9aa5a14c4dd061a05eedae7e3f4d251786be4e1","title":"Blocked issue","description":"","status":"open","priority":1,"issue_type":"task","created_at":"2025-11-07T19:07:02.053113-08:00","updated_at":"2025-11-07T19:07:02.053113-08:00","source_repo":".","dependencies":[{"issue_id":"bd-n449","depends_on_id":"bd-la9d","type":"blocks","created_at":"2025-11-07T19:07:02.076698-08:00","created_by":"daemon"}]}
{"id":"bd-ndyz","content_hash":"f3bee1df504d7d508b424b9a01cde89ad8aeedcd036ed1b66d71c4962c4a80cf","title":"GH#243: Recurring stale daemon.lock causes 5s delays","description":"User reports daemon.lock keeps becoming stale after running Claude with beads.\n\nSymptom:\n- bd ready takes 5 seconds (exact)\n- daemon.lock exists but socket is missing\n- bd daemons killall temporarily fixes it\n- Problem recurs after using beads with AI agents\n\nUser on v0.22.0, Macbook M2, 132 issues (89 closed)\n\nHypothesis: Daemon is crashing or exiting uncleanly during agent sessions, leaving stale lock file.\n\nNeed to:\n1. Add crash logging to daemon to understand why it's exiting\n2. Improve cleanup on daemon exit (ensure lock is always removed)\n3. Add automatic stale lock detection/cleanup\n4. Consider making daemon more resilient to crashes","design":"Root cause: 5s delay from slow RPC connect attempts when socket missing but clients retry with long timeouts. Lock file mechanism is fine (OS releases on crash), but missing socket + stale pid cause unnecessary connection attempts.\n\nKey insight: The lock itself isn't stale (OS-managed), but socket cleanup on crash is incomplete, leading clients to wait through full dial timeout.","notes":"Oracle analysis complete. Converting to epic with 5 focused sub-issues:\n1. RPC fast-fail with socket stat + short timeouts (P0)\n2. Standardize daemon detection with lock probe (P1) \n3. Crash recovery improvements (P2)\n4. Self-heal stale artifacts (P2)\n5. Diagnostics and debugging (P3)","status":"in_progress","priority":0,"issue_type":"bug","created_at":"2025-11-07T16:32:23.576171-08:00","updated_at":"2025-11-07T16:41:51.955399-08:00","source_repo":"."}
{"id":"bd-ng56","content_hash":"adb5cea12f59ff7a9934ddd6f4e1cf09632e162bd4baad9f83ffea96f43d3e88","title":"bd-hv01: Three full JSONL reads on every sync (performance)","description":"Problem: computeAcceptedDeletions reads three JSONL files completely into memory (base, left, merged). For 1000 issues at 1KB each, this is 3MB read and 3000 JSON parse operations.\n\nImpact: Acceptable now (~20-35ms overhead) but will be slow for large repos (10k+ issues).\n\nPossible optimizations: single-pass streaming, memory-mapped files, binary format, incremental snapshots.\n\nFiles: cmd/bd/deletion_tracking.go:101-208","status":"closed","priority":3,"issue_type":"task","created_at":"2025-11-06T18:16:25.653076-08:00","updated_at":"2025-11-06T19:41:04.67733-08:00","closed_at":"2025-11-06T19:41:04.67733-08:00","source_repo":".","dependencies":[{"issue_id":"bd-ng56","depends_on_id":"bd-rbxi","type":"parent-child","created_at":"2025-11-06T18:19:15.148149-08:00","created_by":"daemon"}]}
{"id":"bd-nqes","content_hash":"d7ca10b54b92c464a0c4c1c932f2918a6e0e952a5fd7337acb9160cd524dc59a","title":"bd-hv01: Non-atomic snapshot operations can cause data loss","description":"## Problem\nIn sync.go:146-155 and daemon_sync.go:502-505, snapshot capture failures are logged as warnings but sync continues:\n\n```go\nif err := exportToJSONL(ctx, jsonlPath); err != nil { ... }\nif err := captureLeftSnapshot(jsonlPath); err != nil {\n fmt.Fprintf(os.Stderr, \"Warning: failed to capture snapshot...\")\n}\n```\n\nIf export succeeds but snapshot capture fails, the merge uses stale snapshot data, potentially deleting wrong issues.\n\n## Impact\n- Critical data integrity issue\n- Could delete issues incorrectly during multi-workspace sync\n\n## Fix\nMake snapshot capture mandatory:\n```go\nif err := captureLeftSnapshot(jsonlPath); err != nil {\n return fmt.Errorf(\"failed to capture snapshot (required for deletion tracking): %w\", err)\n}\n```\n\n## Files Affected\n- cmd/bd/sync.go:146-155\n- cmd/bd/daemon_sync.go:502-505","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-06T18:15:33.574158-08:00","updated_at":"2025-11-06T18:46:55.874814-08:00","closed_at":"2025-11-06T18:46:55.874814-08:00","source_repo":".","dependencies":[{"issue_id":"bd-nqes","depends_on_id":"bd-rbxi","type":"parent-child","created_at":"2025-11-06T18:19:14.749153-08:00","created_by":"daemon"}]}
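Issue bd-mn9p above proposes replacing brittle string comparison of JSONL lines with semantic JSON comparison; the Go fix in the issue uses `reflect.DeepEqual` on unmarshalled maps. A Python equivalent of the same idea (the function name is illustrative):

```python
import json


def json_equals(a: str, b: str) -> bool:
    """Return True if two JSON documents encode the same data, regardless
    of key order or whitespace. Python analogue of the jsonEquals fix
    proposed in bd-mn9p; invalid JSON falls back to string comparison."""
    try:
        return json.loads(a) == json.loads(b)
    except ValueError:
        return a == b
```

Parsed `dict` comparison in Python is order-insensitive by definition, which is exactly the property the string comparison in `deletion_tracking.go:125` lacked.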
@@ -226,9 +226,15 @@ class BdCliClient(BdClientBase):
         cmd = [self.bd_path, *args, *self._global_flags(), "--json"]
+        working_dir = cwd if cwd is not None else self._get_working_dir()
+
+        # Set up environment with database configuration
+        env = os.environ.copy()
+        if self.beads_dir:
+            env["BEADS_DIR"] = self.beads_dir
+        elif self.beads_db:
+            env["BEADS_DB"] = self.beads_db
 
         # Log database routing for debugging
         import sys
-        working_dir = self._get_working_dir()
         if self.beads_dir:
             db_info = f"BEADS_DIR={self.beads_dir}"
         elif self.beads_db:
@@ -245,6 +251,7 @@ class BdCliClient(BdClientBase):
                 stdout=asyncio.subprocess.PIPE,
                 stderr=asyncio.subprocess.PIPE,
                 cwd=working_dir,
+                env=env,
             )
             stdout, stderr = await process.communicate()
         except FileNotFoundError as e:
@@ -25,38 +25,37 @@ def bd_executable():
 @pytest.fixture
 async def temp_db(bd_executable):
     """Create a temporary database file and initialize it - fully hermetic."""
-    # Create temp directory for database
+    # Create temp directory that will serve as the workspace root
     temp_dir = tempfile.mkdtemp(prefix="beads_mcp_test_", dir="/tmp")
-    db_path = os.path.join(temp_dir, "test.db")
 
-    # Initialize database with explicit BEADS_DB - no chdir needed!
+    # Initialize database in this directory (creates .beads/ subdirectory)
     import asyncio
 
     env = os.environ.copy()
-    # Clear any existing BEADS_DB to ensure we use only temp db
+    # Clear any existing BEADS_DIR/BEADS_DB to ensure clean state
     env.pop("BEADS_DB", None)
-    env["BEADS_DB"] = db_path
+    env.pop("BEADS_DIR", None)
 
-    # Use temp workspace dir for subprocess (prevents .beads/ discovery)
-    with tempfile.TemporaryDirectory(
-        prefix="beads_mcp_test_workspace_", dir="/tmp"
-    ) as temp_workspace:
-        process = await asyncio.create_subprocess_exec(
-            bd_executable,
-            "init",
-            "--prefix",
-            "test",
-            stdout=asyncio.subprocess.PIPE,
-            stderr=asyncio.subprocess.PIPE,
-            env=env,
-            cwd=temp_workspace,  # Run in temp workspace, not project dir
-        )
-        stdout, stderr = await process.communicate()
-
-        if process.returncode != 0:
-            pytest.fail(f"Failed to initialize test database: {stderr.decode()}")
+    # Run bd init in the temp directory - it will create .beads/ subdirectory
+    process = await asyncio.create_subprocess_exec(
+        bd_executable,
+        "init",
+        "--prefix",
+        "test",
+        stdout=asyncio.subprocess.PIPE,
+        stderr=asyncio.subprocess.PIPE,
+        env=env,
+        cwd=temp_dir,  # Run in temp dir - bd init creates .beads/ here
+    )
+    stdout, stderr = await process.communicate()
 
-    yield db_path
+    if process.returncode != 0:
+        pytest.fail(f"Failed to initialize test database: {stderr.decode()}")
+
+    # Return the .beads directory path (not the db file)
+    beads_dir = os.path.join(temp_dir, ".beads")
+
+    yield beads_dir
 
     # Cleanup
     shutil.rmtree(temp_dir, ignore_errors=True)
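The reworked `temp_db` fixture follows a common hermetic-workspace pattern: make a fresh temp directory, run the project's init step inside it, hand back the resulting data directory, and remove everything afterwards. A generic sketch of that pattern under stated assumptions (`make_hermetic_workspace` and the `init` callback are illustrative, not part of the test suite; `init` stands in for the `bd init` subprocess call):

```python
import os
import shutil
import tempfile


def make_hermetic_workspace(init):
    """Create an isolated workspace dir, run an init callback inside it,
    and return (workspace_root, data_dir).

    Illustrative only. If init fails or does not produce the expected
    .beads/ directory, the workspace is cleaned up before re-raising so
    no temp dirs leak between test runs.
    """
    workspace = tempfile.mkdtemp(prefix="beads_mcp_test_")
    try:
        init(workspace)  # e.g. run `bd init` with cwd=workspace
        data_dir = os.path.join(workspace, ".beads")
        if not os.path.isdir(data_dir):
            raise RuntimeError("init did not create .beads/")
        return workspace, data_dir
    except Exception:
        shutil.rmtree(workspace, ignore_errors=True)
        raise
```

Returning the `.beads` directory rather than a db file path is what lets the client fixture set `BEADS_DIR` directly, matching the new fixture contract.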
@@ -75,15 +74,22 @@ async def mcp_client(bd_executable, temp_db, monkeypatch):
     os.environ.pop("BEADS_CONTEXT_SET", None)
     os.environ.pop("BEADS_WORKING_DIR", None)
     os.environ.pop("BEADS_DB", None)
+    os.environ.pop("BEADS_DIR", None)
+
+    # temp_db is now the .beads directory path
+    # The workspace root is the parent directory
+    workspace_root = os.path.dirname(temp_db)
+
+    # Disable daemon mode for tests (prevents daemon accumulation and timeouts)
+    os.environ["BEADS_NO_DAEMON"] = "1"
 
     # Create a pre-configured client with explicit paths (bypasses config loading)
-    temp_dir = os.path.dirname(temp_db)
-    tools._client = BdClient(bd_path=bd_executable, beads_db=temp_db, working_dir=temp_dir)
+    tools._client = BdClient(bd_path=bd_executable, beads_dir=temp_db, working_dir=workspace_root)
 
     # Create test client
     async with Client(mcp) as client:
         # Automatically set context for the tests
-        await client.call_tool("set_context", {"workspace_root": temp_dir})
+        await client.call_tool("set_context", {"workspace_root": workspace_root})
         yield client
 
     # Reset client and context after test
@@ -91,6 +97,8 @@ async def mcp_client(bd_executable, temp_db, monkeypatch):
     os.environ.pop("BEADS_CONTEXT_SET", None)
     os.environ.pop("BEADS_WORKING_DIR", None)
     os.environ.pop("BEADS_DB", None)
+    os.environ.pop("BEADS_DIR", None)
+    os.environ.pop("BEADS_NO_DAEMON", None)
 
 
 @pytest.mark.asyncio
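The `mcp_client` fixture pops the `BEADS_*` variables both before and after each test so that state never leaks between tests or out to the developer's shell. The same setup/teardown hygiene can be packaged as a context manager; a sketch (the helper name is illustrative, not part of the suite):

```python
import os
from contextlib import contextmanager


@contextmanager
def scrubbed_env(*names):
    """Temporarily remove the given environment variables, restoring any
    previous values afterwards. Sketch of the fixture's setup/teardown
    hygiene for BEADS_DB / BEADS_DIR / BEADS_NO_DAEMON."""
    saved = {name: os.environ.pop(name, None) for name in names}
    try:
        yield
    finally:
        for name, value in saved.items():
            if value is None:
                # Was unset before; remove anything the body set
                os.environ.pop(name, None)
            else:
                os.environ[name] = value
```

The `finally` block guarantees restoration even when the test body raises, which is the property the explicit `os.environ.pop(...)` teardown lines in the fixture provide.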
@@ -595,31 +603,20 @@ async def test_blocked_tool(mcp_client):
 
 @pytest.mark.asyncio
 async def test_init_tool(mcp_client, bd_executable):
-    """Test init tool."""
-    import os
-    import tempfile
-
-    # Create a completely separate temp directory and database
-    with tempfile.TemporaryDirectory(prefix="beads_init_test_", dir="/tmp") as temp_dir:
-        new_db_path = os.path.join(temp_dir, "new_test.db")
-
-        # Temporarily override the client's BEADS_DB for this test
-        from beads_mcp import tools
-
-        # Save original client
-        original_client = tools._client
-
-        # Create a new client pointing to the new database path
-        from beads_mcp.bd_client import BdClient
-        tools._client = BdClient(bd_path=bd_executable, beads_db=new_db_path)
-
-        try:
-            # Call init tool
-            result = await mcp_client.call_tool("init", {"prefix": "test-init"})
-            output = result.content[0].text
-
-            # Verify output contains success message
-            assert "bd initialized successfully!" in output
-        finally:
-            # Restore original client
-            tools._client = original_client
+    """Test init tool.
+
+    Note: This test validates that init can be called successfully via MCP.
+    The actual database is created in the workspace from mcp_client fixture,
+    not in a separate temp directory, because the init tool uses the connection
+    pool which is keyed by workspace path.
+    """
+    # Call init tool (will init in the current workspace from mcp_client fixture)
+    result = await mcp_client.call_tool("init", {"prefix": "test-init"})
+    output = result.content[0].text
+
+    # Verify output contains success message
+    assert "bd initialized successfully!" in output
+    assert "test-init" in output
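The new test's docstring notes that the init tool goes through a connection pool keyed by workspace path, which is why the old swap-in-a-fresh-client approach no longer worked: a second client pointed at a different path would miss the pooled connection. A minimal sketch of such a pool (hypothetical names, not the real beads_mcp internals):

```python
class ConnectionPool:
    """Cache one connection per workspace path. Illustrative sketch of the
    pooling behavior the test docstring describes, not the real implementation."""

    def __init__(self):
        self._conns = {}

    def get(self, workspace: str, factory):
        # Reuse the existing connection for this workspace, or build one
        if workspace not in self._conns:
            self._conns[workspace] = factory(workspace)
        return self._conns[workspace]
```

Because lookups are keyed by workspace path, calling init through the pool always lands in the fixture's workspace, so the test asserts on that workspace's output instead of constructing a separate database.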