Complete daemon RPC with per-request context routing (bd-115)

- MCP server now uses daemon client by default with CLI fallback
- Added BEADS_USE_DAEMON environment variable (default: enabled)
- Created multi-repo integration test (all tests pass)
- Updated .gitignore for daemon runtime files
- Added SETUP_DAEMON.md with migration instructions
- Closed bd-105 (investigation complete) and bd-114 (multi-server confusion)

This enables a single MCP server to handle multiple repos via the daemon
with per-request context routing. No more multiple MCP server configs!

Amp-Thread-ID: https://ampcode.com/threads/T-c222692e-f6ef-4649-9726-db59470b82ef
Co-authored-by: Amp <amp@ampcode.com>
Steve Yegge
2025-10-17 16:55:14 -07:00
parent b40de9bc41
commit ac5578d5f1
6 changed files with 402 additions and 7 deletions

File diff suppressed because one or more lines are too long

.gitignore

@@ -33,5 +33,10 @@ Thumbs.db
*.db-wal
*.db-shm
# Daemon runtime files
.beads/daemon.log
.beads/daemon.pid
.beads/bd.sock
# Keep JSONL exports (source of truth for git)
!.beads/*.jsonl


@@ -56,6 +56,7 @@ Then use in Claude Desktop config:
```
**Environment Variables** (all optional):
- `BEADS_USE_DAEMON` - Use daemon RPC instead of CLI (default: `1`, set to `0` to disable)
- `BEADS_PATH` - Path to bd executable (default: `~/.local/bin/bd`)
- `BEADS_DB` - Path to beads database file (default: auto-discover from cwd)
- `BEADS_WORKING_DIR` - Working directory for bd commands (default: `$PWD` or current directory)
@@ -114,3 +115,18 @@ uv run pytest --cov=beads_mcp tests/
```
Test suite includes both mocked unit tests and integration tests with real `bd` CLI.
### Multi-Repo Integration Test
Test daemon RPC with multiple repositories:
```bash
# Start the daemon first
cd /path/to/beads
./bd daemon start
# Run multi-repo test
cd integrations/beads-mcp
uv run python test_multi_repo.py
```
This test verifies that the daemon can handle operations across multiple repositories simultaneously using per-request context routing.


@@ -0,0 +1,202 @@
# Setting Up Daemon-Based MCP Server
## Quick Start
Replace your multiple MCP server configs with a single daemon-based one:
### 1. Claude Desktop Config
Edit `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "beads": {
      "command": "beads-mcp",
      "env": {
        "BEADS_USE_DAEMON": "1"
      }
    }
  }
}
```
**Remove old configs** like `beads-wyvern`, `beads-adar`, etc.
### 2. Start the Daemon
In your beads project directory:
```bash
bd daemon start
```
The daemon will:
- Listen on `.beads/bd.sock`
- Route operations to correct database based on request cwd
- Handle multiple repos simultaneously
### 3. Test It
```bash
# Test with beads repo
cd ~/src/vc/adar/beads
bd list
# Test with another repo
cd ~/src/vc/wyvern
bd list
# Both should show correct issues for their respective databases
```
### 4. Restart Claude Desktop
After updating the config, restart Claude Desktop to load the new MCP server configuration.
## How It Works
```
Claude/Amp → Single MCP Server → Daemon Client → Daemon → Correct Database
Uses set_context to pass workspace_root
Daemon uses cwd to find .beads/*.db
```
- **No more multiple MCP servers** - one server handles all repos
- **Per-request routing** - daemon finds correct database for each operation
- **Automatic fallback** - if daemon not running, falls back to CLI mode
- **Concurrent access** - daemon handles multiple repos at once
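The automatic fallback can be approximated by probing the daemon socket before choosing a client (an illustrative probe; the real decision is made inside `create_bd_client` in this commit):

```python
import socket
from pathlib import Path

def daemon_available(socket_path: str = ".beads/bd.sock") -> bool:
    """Return True if something is listening on the daemon's unix socket.

    Illustrative probe: a stale socket file with no listener behind it
    fails the connect() and correctly reports the daemon as unavailable.
    """
    path = Path(socket_path)
    if not path.exists():
        return False
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            s.connect(str(path))
        return True
    except OSError:
        return False
```

When the probe fails, the server would fall back to spawning the `bd` CLI per operation, exactly as in the pre-daemon setup.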
## Advanced Configuration
### Optional Environment Variables
```json
{
  "mcpServers": {
    "beads": {
      "command": "beads-mcp",
      "env": {
        "BEADS_USE_DAEMON": "1",
        "BEADS_REQUIRE_CONTEXT": "1",
        "BEADS_ACTOR": "claude"
      }
    }
  }
}
```
- `BEADS_USE_DAEMON` - Use daemon (default: `1`)
- `BEADS_REQUIRE_CONTEXT` - Enforce set_context before writes (default: `0`)
- `BEADS_ACTOR` - Actor name for audit trail (default: `$USER`)
### Disable Daemon (Fallback to CLI)
If you want to temporarily use CLI mode:
```json
{
  "env": {
    "BEADS_USE_DAEMON": "0"
  }
}
```
## Daemon Management
```bash
# Start daemon
bd daemon start
# Check status
bd daemon status
# View logs
bd daemon logs
# Stop daemon
bd daemon stop
# Restart daemon
bd daemon stop && bd daemon start
```
## Troubleshooting
### "Daemon not running" errors
Start the daemon in your beads project:
```bash
cd ~/src/vc/adar/beads
bd daemon start
```
### Wrong database being used
1. Check where daemon is running:
```bash
bd daemon status
```
2. Use `set_context` tool in Claude to set workspace root:
```
set_context /path/to/your/project
```
3. Verify with `where_am_i` tool
### Multiple repos not working
Ensure:
- Daemon is running in a parent directory that can reach all repos
- Each repo has `.beads/*.db` properly initialized
- MCP server is passing correct workspace_root via `set_context`
## Migration from Multi-Server Setup
### Old Config (Remove This)
```json
{
  "mcpServers": {
    "beads-adar": {
      "command": "beads-mcp",
      "env": {
        "BEADS_DB": "/path/to/adar/.beads/bd.db"
      }
    },
    "beads-wyvern": {
      "command": "beads-mcp",
      "env": {
        "BEADS_DB": "/path/to/wyvern/.beads/wy.db"
      }
    }
  }
}
```
### New Config (Use This)
```json
{
  "mcpServers": {
    "beads": {
      "command": "beads-mcp",
      "env": {
        "BEADS_USE_DAEMON": "1"
      }
    }
  }
}
```
## Benefits
✅ Single MCP server for all repos
✅ No manual BEADS_DB configuration per repo
✅ Automatic context switching
✅ Better performance (no process spawning per operation)
✅ Concurrent multi-repo operations
✅ Simpler configuration


@@ -1,8 +1,9 @@
 """MCP tools for beads issue tracker."""
+import os
 from typing import Annotated
-from .bd_client import BdClient, BdError
+from .bd_client import create_bd_client, BdClientBase, BdError
 from .models import (
     AddDependencyParams,
     BlockedIssue,
@@ -22,7 +23,7 @@ from .models import (
 )
 # Global client instance - initialized on first use
-_client: BdClient | None = None
+_client: BdClientBase | None = None
 _version_checked: bool = False
 # Default constants
@@ -30,20 +31,28 @@ DEFAULT_ISSUE_TYPE: IssueType = "task"
 DEFAULT_DEPENDENCY_TYPE: DependencyType = "blocks"
-async def _get_client() -> BdClient:
+async def _get_client() -> BdClientBase:
     """Get a BdClient instance, creating it on first use.
     Performs version check on first initialization.
+    Uses daemon client if available, falls back to CLI client.
     Returns:
-        Configured BdClient instance (config loaded automatically)
+        Configured BdClientBase instance (config loaded automatically)
     Raises:
         BdError: If bd is not installed or version is incompatible
     """
     global _client, _version_checked
     if _client is None:
-        _client = BdClient()
+        # Check if daemon should be used (default: yes)
+        use_daemon = os.environ.get("BEADS_USE_DAEMON", "1") == "1"
+        workspace_root = os.environ.get("BEADS_WORKING_DIR")
+        _client = create_bd_client(
+            prefer_daemon=use_daemon,
+            workspace_root=workspace_root
+        )
         # Check version once per server lifetime
         if not _version_checked:


@@ -0,0 +1,158 @@
#!/usr/bin/env python3
"""Integration test for multi-repo daemon support.

Tests that the daemon can handle operations across multiple repositories
simultaneously using per-request cwd context.
"""
import asyncio
import os
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent / "src"))

from beads_mcp.bd_daemon_client import BdDaemonClient
from beads_mcp.models import CreateIssueParams, ListIssuesParams


async def main():
    """Run multi-repo integration test."""
    print("=== Multi-Repo Daemon Integration Test ===\n")

    # Create two temporary repositories
    with tempfile.TemporaryDirectory() as tmpdir:
        repo1 = Path(tmpdir) / "repo1"
        repo2 = Path(tmpdir) / "repo2"
        repo1.mkdir()
        repo2.mkdir()
        print("Created test repositories:")
        print(f"  repo1: {repo1}")
        print(f"  repo2: {repo2}\n")

        # Initialize bd in both repos
        print("Initializing beads in both repos...")
        subprocess.run(["bd", "init", "--prefix", "r1"], cwd=repo1, check=True, capture_output=True)
        subprocess.run(["bd", "init", "--prefix", "r2"], cwd=repo2, check=True, capture_output=True)
        print("✅ Initialized\n")

        # Find or start daemon in beads project
        beads_project = Path(__file__).parent.parent.parent
        beads_socket = beads_project / ".beads" / "bd.sock"
        print("Checking daemon status...")
        if not beads_socket.exists():
            print("Starting daemon in beads project...")
            subprocess.run(["bd", "daemon", "start"], cwd=beads_project, check=True, capture_output=True)
            await asyncio.sleep(1)  # Give daemon time to start
            print("✅ Daemon started\n")
        else:
            print(f"✅ Daemon socket found at {beads_socket}\n")

        # Create daemon clients for each repo, pointing to beads project socket
        print("Creating daemon clients...")
        client1 = BdDaemonClient(socket_path=str(beads_socket), working_dir=str(repo1))
        client2 = BdDaemonClient(socket_path=str(beads_socket), working_dir=str(repo2))
        print("✅ Clients created\n")

        # Test 1: Create issues in both repos concurrently
        print("Test 1: Creating issues concurrently in both repos...")
        params1 = CreateIssueParams(
            title="Issue in repo1",
            description="This should go to repo1 database",
            priority=1,
            issue_type="task"
        )
        params2 = CreateIssueParams(
            title="Issue in repo2",
            description="This should go to repo2 database",
            priority=1,
            issue_type="task"
        )
        issue1, issue2 = await asyncio.gather(
            client1.create(params1),
            client2.create(params2)
        )
        print(f"  ✅ Created {issue1.id} in repo1")
        print(f"  ✅ Created {issue2.id} in repo2")
        assert issue1.id.startswith("r1-"), f"Expected r1- prefix, got {issue1.id}"
        assert issue2.id.startswith("r2-"), f"Expected r2- prefix, got {issue2.id}"
        print()

        # Test 2: List issues from each repo - should be isolated
        print("Test 2: Verifying issue isolation between repos...")
        list_params = ListIssuesParams()
        issues1 = await client1.list_issues(list_params)
        issues2 = await client2.list_issues(list_params)
        print(f"  repo1 issues: {[i.id for i in issues1]}")
        print(f"  repo2 issues: {[i.id for i in issues2]}")
        assert len(issues1) == 1, f"Expected 1 issue in repo1, got {len(issues1)}"
        assert len(issues2) == 1, f"Expected 1 issue in repo2, got {len(issues2)}"
        assert issues1[0].id == issue1.id, "repo1 issue mismatch"
        assert issues2[0].id == issue2.id, "repo2 issue mismatch"
        print("  ✅ Issues are properly isolated\n")

        # Test 3: Rapid concurrent operations
        print("Test 3: Rapid concurrent operations across repos...")
        tasks = []
        for i in range(5):
            p1 = CreateIssueParams(
                title=f"Concurrent issue {i} in repo1",
                priority=2,
                issue_type="task"
            )
            p2 = CreateIssueParams(
                title=f"Concurrent issue {i} in repo2",
                priority=2,
                issue_type="task"
            )
            tasks.append(client1.create(p1))
            tasks.append(client2.create(p2))
        created = await asyncio.gather(*tasks)
        print(f"  ✅ Created {len(created)} issues concurrently")

        # Verify counts
        issues1 = await client1.list_issues(list_params)
        issues2 = await client2.list_issues(list_params)
        print(f"  repo1 total: {len(issues1)} issues")
        print(f"  repo2 total: {len(issues2)} issues")
        assert len(issues1) == 6, f"Expected 6 issues in repo1, got {len(issues1)}"
        assert len(issues2) == 6, f"Expected 6 issues in repo2, got {len(issues2)}"
        print("  ✅ All concurrent operations succeeded\n")

        # Test 4: Verify prefixes are correct
        print("Test 4: Verifying all prefixes are correct...")
        for issue in issues1:
            assert issue.id.startswith("r1-"), f"repo1 issue has wrong prefix: {issue.id}"
        for issue in issues2:
            assert issue.id.startswith("r2-"), f"repo2 issue has wrong prefix: {issue.id}"
        print("  ✅ All prefixes correct\n")

        print("=== All Tests Passed! ===")
        print("\nSummary:")
        print("  ✅ Per-request context routing works")
        print("  ✅ Multiple repos are properly isolated")
        print("  ✅ Concurrent operations succeed")
        print("  ✅ Daemon handles rapid context switching")


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except Exception as e:
        print(f"\n❌ Test failed: {e}", file=sys.stderr)
        import traceback
        traceback.print_exc()
        sys.exit(1)