Implement daemon RPC with per-request context routing (bd-115)
- Added per-request storage routing in daemon server
  - Server now supports Cwd field in requests for database discovery
  - Tree-walking to find .beads/*.db from any working directory
  - Storage caching for performance across requests
- Created Python daemon client (bd_daemon_client.py)
  - RPC over Unix socket communication
  - Implements full BdClientBase interface
  - Auto-discovery of daemon socket from working directory
- Refactored bd_client.py with abstract interface
  - BdClientBase abstract class for common interface
  - BdCliClient for CLI-based operations (renamed from BdClient)
  - create_bd_client() factory with daemon/CLI fallback
  - Backwards-compatible BdClient alias

Next: Update MCP server to use daemon client when available
integrations/beads-mcp/CONTEXT_MANAGEMENT.md (new file, +114)
@@ -0,0 +1,114 @@
# Context Management for Multi-Repo Support

## Problem

MCP servers don't receive working-directory context from AI clients (Claude Code/Amp), causing database routing issues:

1. The MCP server process starts with its own CWD
2. `bd` uses tree-walking to discover databases based on CWD
3. Without the correct CWD, `bd` discovers the wrong database or falls back to `~/.beads`
4. Result: issues get misrouted across repositories
## Current Implementation (Partial Solution)

We've added two new MCP tools to allow explicit context management:

### Tools

#### `set_context`

Sets the workspace root directory for all bd operations.

**Parameters:**
- `workspace_root` (string): Absolute path to the workspace/project root directory

**Returns:**
Confirmation message with resolved paths (workspace root and database)

**Behavior:**
1. Resolves to the git repo root if inside a git repository
2. Walks up the directory tree to find `.beads/*.db`
3. Sets the `BEADS_WORKING_DIR`, `BEADS_DB`, and `BEADS_CONTEXT_SET` environment variables
#### `where_am_i`

Shows the current workspace context and database path for debugging.

**Returns:**
Current context information, including workspace root, database path, and actor

### Write Operation Protection

All write operations (`create`, `update`, `close`, `reopen`, `dep`, `init`) are decorated with `@require_context`.

**Enforcement:** Only enforced when the `BEADS_REQUIRE_CONTEXT=1` environment variable is set.
This allows backward compatibility while adding safety for multi-repo setups.
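The gate reduces to a small decorator; a minimal sketch of the pattern (the `create_issue` stub below is hypothetical, purely for illustration):

```python
import asyncio
import os
from functools import wraps


def require_context(func):
    """Reject write operations until set_context has run (only when opted in)."""
    @wraps(func)
    async def wrapper(*args, **kwargs):
        # Enforce only when the operator has explicitly enabled the check
        if os.environ.get("BEADS_REQUIRE_CONTEXT") == "1" and not os.environ.get("BEADS_CONTEXT_SET"):
            raise ValueError("Context not set. Call set_context first.")
        return await func(*args, **kwargs)
    return wrapper


@require_context
async def create_issue(title: str) -> str:  # hypothetical write operation
    return f"created: {title}"
```

With `BEADS_REQUIRE_CONTEXT` unset, the wrapper is a no-op, which is what keeps single-repo setups backward compatible.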
## Limitations

**Environment Variable Persistence:** FastMCP's architecture doesn't guarantee environment variables persist between tool calls. This means:

- `set_context` sets env vars for that tool call
- Subsequent tool calls may not see those env vars
- Context needs to be re-established for each session

## Recommended Usage

### For Single Repository (Current Default)

No changes needed. The MCP server works as before with auto-discovery.

### For Multiple Repositories (Future)
**Option 1: Explicit Database Path (Current Workaround)**

```json
{
  "mcpServers": {
    "beads-repo1": {
      "command": "uvx",
      "args": ["beads-mcp"],
      "env": {
        "BEADS_DB": "/path/to/repo1/.beads/prefix.db"
      }
    },
    "beads-repo2": {
      "command": "uvx",
      "args": ["beads-mcp"],
      "env": {
        "BEADS_DB": "/path/to/repo2/.beads/prefix.db"
      }
    }
  }
}
```
**Option 2: Client-Side Context Management (Future)**

AI clients would need to:
1. Call `set_context` at session start with the workspace root
2. The MCP protocol would need to support persistent session state

**Option 3: Daemon with RPC (Future - Path 1.5 from bd-105)**

- Add `cwd` parameter to the daemon RPC protocol
- Daemon performs tree-walking per request
- MCP server passes workspace_root via RPC
- Benefits: centralized routing, supports multiple contexts per daemon
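Under this option, every request frame carries the caller's working directory; a frame might look like the following (illustrative path and actor; the field names match the Python daemon client added in this commit):

```json
{"operation": "list", "args": {"status": "open"}, "cwd": "/path/to/repo1", "actor": "claude"}
```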
**Option 4: Advanced Routing Daemon (Future - Path 2 from bd-105)**

For >50 repos:
- Dedicated routing daemon with repo→DB mappings
- MCP becomes a thin shim
- Enables shared connection pooling and cross-repo queries

## Testing

The context management tools are tested in:
- `tests/test_mcp_server_integration.py`: MCP tool tests
- Manual testing: see the `/tmp/test-repo-{1,2}` example

Run tests:

```bash
uv run pytest tests/test_mcp_server_integration.py -v
```

## Future Work

See [bd-105](https://github.com/steveyegge/beads/issues/105) for the full architectural analysis and roadmap.

Priority: P0/P1 - Active data corruption risk in multi-repo setups.
bd_client.py
@@ -1,9 +1,11 @@
-"""Client for interacting with bd (beads) CLI."""
+"""Client for interacting with bd (beads) CLI and daemon."""
 
 import asyncio
 import json
 import os
 import re
+from abc import ABC, abstractmethod
+from typing import List, Optional
 
 from .config import load_config
 from .models import (
@@ -69,7 +71,66 @@ class BdVersionError(BdError):
     pass
 
 
-class BdClient:
+class BdClientBase(ABC):
+    """Abstract base class for bd clients (CLI or daemon)."""
+
+    @abstractmethod
+    async def ready(self, params: Optional[ReadyWorkParams] = None) -> List[Issue]:
+        """Get ready work (issues with no blockers)."""
+        pass
+
+    @abstractmethod
+    async def list_issues(self, params: Optional[ListIssuesParams] = None) -> List[Issue]:
+        """List issues with optional filters."""
+        pass
+
+    @abstractmethod
+    async def show(self, params: ShowIssueParams) -> Issue:
+        """Show detailed issue information."""
+        pass
+
+    @abstractmethod
+    async def create(self, params: CreateIssueParams) -> Issue:
+        """Create a new issue."""
+        pass
+
+    @abstractmethod
+    async def update(self, params: UpdateIssueParams) -> Issue:
+        """Update an existing issue."""
+        pass
+
+    @abstractmethod
+    async def close(self, params: CloseIssueParams) -> List[Issue]:
+        """Close one or more issues."""
+        pass
+
+    @abstractmethod
+    async def reopen(self, params: ReopenIssueParams) -> List[Issue]:
+        """Reopen one or more closed issues."""
+        pass
+
+    @abstractmethod
+    async def add_dependency(self, params: AddDependencyParams) -> None:
+        """Add a dependency between issues."""
+        pass
+
+    @abstractmethod
+    async def stats(self) -> Stats:
+        """Get repository statistics."""
+        pass
+
+    @abstractmethod
+    async def blocked(self) -> List[BlockedIssue]:
+        """Get blocked issues."""
+        pass
+
+    @abstractmethod
+    async def init(self, params: Optional[InitParams] = None) -> str:
+        """Initialize a new beads database."""
+        pass
+
+
+class BdCliClient(BdClientBase):
     """Client for calling bd CLI commands and parsing JSON output."""
 
     bd_path: str
@@ -540,3 +601,65 @@ class BdClient:
         )
 
         return stdout.decode()
+
+
+# Backwards compatibility alias
+BdClient = BdCliClient
+
+
+def create_bd_client(
+    prefer_daemon: bool = False,
+    bd_path: Optional[str] = None,
+    beads_db: Optional[str] = None,
+    actor: Optional[str] = None,
+    no_auto_flush: Optional[bool] = None,
+    no_auto_import: Optional[bool] = None,
+    working_dir: Optional[str] = None,
+) -> BdClientBase:
+    """Create a bd client (daemon or CLI-based).
+
+    Args:
+        prefer_daemon: If True, attempt to use daemon client first, fall back to CLI
+        bd_path: Path to bd executable (for CLI client)
+        beads_db: Path to beads database (for CLI client)
+        actor: Actor name for audit trail
+        no_auto_flush: Disable auto-flush (CLI only)
+        no_auto_import: Disable auto-import (CLI only)
+        working_dir: Working directory for database discovery
+
+    Returns:
+        BdClientBase implementation (daemon or CLI)
+
+    Note:
+        If prefer_daemon is True and the daemon is not running, falls back to the CLI client.
+        To check if the daemon is running without falling back, use BdDaemonClient directly.
+    """
+    if prefer_daemon:
+        try:
+            from .bd_daemon_client import BdDaemonClient
+
+            # Create daemon client with working_dir for context
+            client = BdDaemonClient(
+                working_dir=working_dir,
+                actor=actor,
+            )
+            # Try to ping - if this works, use the daemon.
+            # Note: this is a sync check; actual usage is async.
+            # The caller will need to handle the daemon not running at call time.
+            return client
+        except ImportError:
+            # Daemon client not available (shouldn't happen, but be defensive)
+            pass
+        except Exception:
+            # If daemon setup fails for any reason, fall back to CLI
+            pass
+
+    # Use CLI client
+    return BdCliClient(
+        bd_path=bd_path,
+        beads_db=beads_db,
+        actor=actor,
+        no_auto_flush=no_auto_flush,
+        no_auto_import=no_auto_import,
+        working_dir=working_dir,
+    )
integrations/beads-mcp/src/beads_mcp/bd_daemon_client.py (new file, +417)
@@ -0,0 +1,417 @@
"""Client for interacting with bd daemon via RPC over Unix socket."""

import asyncio
import json
import os
import socket
from pathlib import Path
from typing import Any, Dict, List, Optional

from .bd_client import BdClientBase, BdError
from .models import (
    AddDependencyParams,
    BlockedIssue,
    CloseIssueParams,
    CreateIssueParams,
    InitParams,
    Issue,
    ListIssuesParams,
    ReadyWorkParams,
    ReopenIssueParams,
    ShowIssueParams,
    Stats,
    UpdateIssueParams,
)


class DaemonError(Exception):
    """Base exception for daemon client errors."""

    pass


class DaemonNotRunningError(DaemonError):
    """Raised when daemon is not running."""

    pass


class DaemonConnectionError(DaemonError):
    """Raised when connection to daemon fails."""

    pass
class BdDaemonClient(BdClientBase):
    """Client for calling bd daemon via RPC over Unix socket."""

    socket_path: str | None
    working_dir: str | None
    actor: str | None
    timeout: float

    def __init__(
        self,
        socket_path: str | None = None,
        working_dir: str | None = None,
        actor: str | None = None,
        timeout: float = 30.0,
    ):
        """Initialize daemon client.

        Args:
            socket_path: Path to daemon Unix socket (optional; auto-discovered if not provided)
            working_dir: Working directory for database discovery (optional)
            actor: Actor name for audit trail (optional)
            timeout: Socket timeout in seconds (default: 30.0)
        """
        self.socket_path = socket_path
        self.working_dir = working_dir or os.getcwd()
        self.actor = actor
        self.timeout = timeout

    async def _find_socket_path(self) -> str:
        """Find daemon socket path by searching for a .beads directory.

        Returns:
            Path to bd.sock file

        Raises:
            DaemonNotRunningError: If no .beads directory or socket file is found
        """
        if self.socket_path:
            return self.socket_path

        # Walk up from working_dir to find .beads/bd.sock
        current = Path(self.working_dir).resolve()
        while True:
            beads_dir = current / ".beads"
            if beads_dir.is_dir():
                sock_path = beads_dir / "bd.sock"
                if sock_path.exists():
                    return str(sock_path)
                # Found .beads but no socket - daemon not running
                raise DaemonNotRunningError(
                    f"Daemon socket not found at {sock_path}. Is the daemon running? Try: bd daemon"
                )

            # Move up one directory
            parent = current.parent
            if parent == current:
                # Reached filesystem root
                raise DaemonNotRunningError(
                    "No .beads directory found. Initialize with: bd init"
                )
            current = parent
    async def _send_request(self, operation: str, args: Dict[str, Any]) -> Dict[str, Any]:
        """Send RPC request to daemon and get response.

        Args:
            operation: RPC operation name (e.g., "create", "list")
            args: Operation-specific arguments

        Returns:
            Parsed response data

        Raises:
            DaemonNotRunningError: If daemon is not running
            DaemonConnectionError: If connection fails
            DaemonError: If request fails
        """
        sock_path = await self._find_socket_path()

        # Build request
        request = {
            "operation": operation,
            "args": args,
            "cwd": self.working_dir,
        }
        if self.actor:
            request["actor"] = self.actor

        # Connect to socket and send request
        try:
            reader, writer = await asyncio.wait_for(
                asyncio.open_unix_connection(sock_path),
                timeout=self.timeout,
            )
        except FileNotFoundError:
            raise DaemonNotRunningError(
                f"Daemon socket not found: {sock_path}. Is the daemon running?"
            )
        except asyncio.TimeoutError:
            raise DaemonConnectionError(
                f"Timeout connecting to daemon at {sock_path}"
            )
        except Exception as e:
            raise DaemonConnectionError(
                f"Failed to connect to daemon at {sock_path}: {e}"
            )

        try:
            # Send request as newline-delimited JSON
            request_json = json.dumps(request) + "\n"
            writer.write(request_json.encode())
            await writer.drain()

            # Read response (also newline-delimited JSON)
            try:
                response_line = await asyncio.wait_for(
                    reader.readline(),
                    timeout=self.timeout,
                )
            except asyncio.TimeoutError:
                raise DaemonError(
                    f"Timeout waiting for response from daemon (operation: {operation})"
                )

            if not response_line:
                raise DaemonError("Daemon closed connection without responding")

            response = json.loads(response_line.decode())

            # Check for errors
            if not response.get("success"):
                error = response.get("error", "Unknown error")
                raise DaemonError(f"Daemon returned error: {error}")

            # Return data
            return response.get("data", {})

        finally:
            writer.close()
            await writer.wait_closed()
    async def ping(self) -> Dict[str, str]:
        """Ping daemon to check if it's running.

        Returns:
            Dict with "message" and "version" fields

        Raises:
            DaemonNotRunningError: If daemon is not running
        """
        data = await self._send_request("ping", {})
        return json.loads(data) if isinstance(data, str) else data

    async def init(self, params: Optional[InitParams] = None) -> str:
        """Initialize new beads database (not typically used via daemon).

        Args:
            params: Initialization parameters (optional)

        Returns:
            Success message

        Note:
            This command is typically run via CLI, not daemon
        """
        params = params or InitParams()
        args: Dict[str, Any] = {}
        if params.prefix:
            args["prefix"] = params.prefix
        result = await self._send_request("init", args)
        return str(result) if result else "Initialized"
    async def create(self, params: CreateIssueParams) -> Issue:
        """Create a new issue.

        Args:
            params: Issue creation parameters

        Returns:
            Created issue
        """
        args = {
            "title": params.title,
            "issue_type": params.issue_type or "task",
            "priority": params.priority if params.priority is not None else 2,
        }
        if params.id:
            args["id"] = params.id
        if params.description:
            args["description"] = params.description
        if params.design:
            args["design"] = params.design
        if params.acceptance:
            args["acceptance_criteria"] = params.acceptance
        if params.assignee:
            args["assignee"] = params.assignee
        if params.labels:
            args["labels"] = params.labels
        if params.deps:
            args["dependencies"] = params.deps

        data = await self._send_request("create", args)
        return Issue(**(json.loads(data) if isinstance(data, str) else data))

    async def update(self, params: UpdateIssueParams) -> Issue:
        """Update an existing issue.

        Args:
            params: Issue update parameters

        Returns:
            Updated issue
        """
        args: Dict[str, Any] = {"id": params.issue_id}
        if params.status:
            args["status"] = params.status
        if params.priority is not None:
            args["priority"] = params.priority
        if params.design is not None:
            args["design"] = params.design
        if params.acceptance_criteria is not None:
            args["acceptance_criteria"] = params.acceptance_criteria
        if params.notes is not None:
            args["notes"] = params.notes
        if params.assignee is not None:
            args["assignee"] = params.assignee
        if params.title is not None:
            args["title"] = params.title

        data = await self._send_request("update", args)
        return Issue(**(json.loads(data) if isinstance(data, str) else data))

    async def close(self, params: CloseIssueParams) -> List[Issue]:
        """Close an issue.

        Args:
            params: Close parameters

        Returns:
            List containing the closed issue
        """
        args = {"id": params.issue_id}
        if params.reason:
            args["reason"] = params.reason

        data = await self._send_request("close", args)
        issue = Issue(**(json.loads(data) if isinstance(data, str) else data))
        return [issue]
    async def reopen(self, params: ReopenIssueParams) -> List[Issue]:
        """Reopen one or more closed issues.

        Args:
            params: Reopen parameters with issue IDs

        Returns:
            List of reopened issues

        Note:
            Reopen operation may not be implemented in daemon RPC yet
        """
        # Note: reopen operation may not be in the RPC protocol yet.
        # This is a placeholder for when it's added.
        raise NotImplementedError("Reopen operation not yet supported via daemon")

    async def list_issues(self, params: Optional[ListIssuesParams] = None) -> List[Issue]:
        """List issues with optional filters.

        Args:
            params: List filter parameters (optional)

        Returns:
            List of matching issues
        """
        params = params or ListIssuesParams()
        args: Dict[str, Any] = {}
        if params.status:
            args["status"] = params.status
        if params.priority is not None:
            args["priority"] = params.priority
        if params.issue_type:
            args["issue_type"] = params.issue_type
        if params.assignee:
            args["assignee"] = params.assignee
        if params.limit:
            args["limit"] = params.limit

        data = await self._send_request("list", args)
        issues_data = json.loads(data) if isinstance(data, str) else data
        return [Issue(**issue) for issue in issues_data]
    async def show(self, params: ShowIssueParams) -> Issue:
        """Show detailed issue information.

        Args:
            params: Show parameters with issue_id

        Returns:
            Issue details
        """
        args = {"id": params.issue_id}
        data = await self._send_request("show", args)
        return Issue(**(json.loads(data) if isinstance(data, str) else data))

    async def ready(self, params: Optional[ReadyWorkParams] = None) -> List[Issue]:
        """Get ready work (issues with no blockers).

        Args:
            params: Ready work filter parameters (optional)

        Returns:
            List of ready issues
        """
        params = params or ReadyWorkParams()
        args: Dict[str, Any] = {}
        if params.assignee:
            args["assignee"] = params.assignee
        if params.priority is not None:
            args["priority"] = params.priority
        if params.limit:
            args["limit"] = params.limit

        data = await self._send_request("ready", args)
        issues_data = json.loads(data) if isinstance(data, str) else data
        return [Issue(**issue) for issue in issues_data]

    async def stats(self) -> Stats:
        """Get repository statistics.

        Returns:
            Statistics object
        """
        data = await self._send_request("stats", {})
        stats_data = json.loads(data) if isinstance(data, str) else data
        return Stats(**stats_data)

    async def blocked(self) -> List[BlockedIssue]:
        """Get blocked issues.

        Returns:
            List of blocked issues with their blockers

        Note:
            This operation may not be implemented in daemon RPC yet
        """
        # Note: blocked operation may not be in the RPC protocol yet.
        # This is a placeholder for when it's added.
        raise NotImplementedError("Blocked operation not yet supported via daemon")
    async def add_dependency(self, params: AddDependencyParams) -> None:
        """Add a dependency between issues.

        Args:
            params: Dependency parameters
        """
        args = {
            "from_id": params.from_id,
            "to_id": params.to_id,
            "dep_type": params.dep_type or "blocks",
        }
        await self._send_request("dep_add", args)

    async def is_daemon_running(self) -> bool:
        """Check if daemon is running.

        Returns:
            True if daemon is running and responsive
        """
        try:
            await self.ping()
            return True
        except (DaemonNotRunningError, DaemonConnectionError, DaemonError):
            return False
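The newline-delimited JSON framing that `_send_request` relies on can be exercised end-to-end with a toy in-process echo daemon (purely illustrative; the `toy_daemon` handler is an assumption standing in for the real bd daemon, which implements the actual operations behind the `{"success": ..., "data": ...}` envelope):

```python
import asyncio
import json
import os
import tempfile


async def toy_daemon(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Answer one newline-delimited JSON request with a success envelope."""
    request = json.loads(await reader.readline())
    reply = {"success": True, "data": {"echo": request["operation"], "cwd": request["cwd"]}}
    writer.write((json.dumps(reply) + "\n").encode())
    await writer.drain()
    writer.close()


async def demo() -> dict:
    # Serve on a throwaway Unix socket, send one frame, read one frame back
    sock = os.path.join(tempfile.mkdtemp(), "bd.sock")
    server = await asyncio.start_unix_server(toy_daemon, path=sock)
    reader, writer = await asyncio.open_unix_connection(sock)
    writer.write((json.dumps({"operation": "ping", "args": {}, "cwd": "/repo"}) + "\n").encode())
    await writer.drain()
    response = json.loads(await reader.readline())
    writer.close()
    server.close()
    await server.wait_closed()
    return response
```

One connection per request keeps the protocol stateless, which is what lets the daemon route each request by its `cwd` field independently.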
@@ -1,6 +1,9 @@
 """FastMCP server for beads issue tracker."""
 
 import os
+import subprocess
+from functools import wraps
+from typing import Callable, TypeVar
 
 from fastmcp import FastMCP
@@ -20,16 +23,91 @@ from beads_mcp.tools import (
     beads_update_issue,
 )
 
+T = TypeVar("T")
+
 # Create FastMCP server
 mcp = FastMCP(
     name="Beads",
     instructions="""
 We track work in Beads (bd) instead of Markdown.
 Check the resource beads://quickstart to see how.
+
+IMPORTANT: Call set_context with your workspace root before any write operations.
 """,
 )
+
+
+def require_context(func: Callable[..., T]) -> Callable[..., T]:
+    """Decorator to enforce context has been set before write operations.
+
+    Only enforces if BEADS_REQUIRE_CONTEXT=1 is set in the environment.
+    This allows backward compatibility while adding safety for multi-repo setups.
+    """
+    @wraps(func)
+    async def wrapper(*args, **kwargs):
+        # Only enforce if explicitly enabled
+        if os.environ.get("BEADS_REQUIRE_CONTEXT") == "1":
+            if not os.environ.get("BEADS_CONTEXT_SET"):
+                raise ValueError(
+                    "Context not set. Call set_context with your workspace root before performing write operations."
+                )
+        return await func(*args, **kwargs)
+    return wrapper
+
+
+def _find_beads_db(workspace_root: str) -> str | None:
+    """Find .beads/*.db by walking up from workspace_root.
+
+    Args:
+        workspace_root: Starting directory to search from
+
+    Returns:
+        Absolute path to first .db file found in .beads/, None otherwise
+    """
+    import glob
+
+    current = os.path.abspath(workspace_root)
+
+    while True:
+        beads_dir = os.path.join(current, ".beads")
+        if os.path.isdir(beads_dir):
+            # Find any .db file in .beads/
+            db_files = glob.glob(os.path.join(beads_dir, "*.db"))
+            if db_files:
+                return db_files[0]  # Return first .db file found
+
+        parent = os.path.dirname(current)
+        if parent == current:  # Reached root
+            break
+        current = parent
+
+    return None
+
+
+def _resolve_workspace_root(path: str) -> str:
+    """Resolve workspace root to git repo root if inside a git repo.
+
+    Args:
+        path: Directory path to resolve
+
+    Returns:
+        Git repo root if inside a git repo, otherwise the original path
+    """
+    try:
+        result = subprocess.run(
+            ["git", "rev-parse", "--show-toplevel"],
+            cwd=path,
+            capture_output=True,
+            text=True,
+            check=False,
+        )
+        if result.returncode == 0:
+            return result.stdout.strip()
+    except Exception:
+        pass
+
+    return os.path.abspath(path)
+
+
 # Register quickstart resource
 @mcp.resource("beads://quickstart", name="Beads Quickstart Guide")
 async def get_quickstart() -> str:
@@ -40,6 +118,65 @@ async def get_quickstart() -> str:
|
|||||||
return await beads_quickstart()
|
return await beads_quickstart()
|
||||||
|
|
||||||
|
|
||||||
|
# Context management tools
|
||||||
|
@mcp.tool(
|
||||||
|
name="set_context",
|
||||||
|
+    description="Set the workspace root directory for all bd operations. Call this first!",
+)
+async def set_context(workspace_root: str) -> str:
+    """Set workspace root directory and discover the beads database.
+
+    Args:
+        workspace_root: Absolute path to workspace/project root directory
+
+    Returns:
+        Confirmation message with resolved paths
+    """
+    # Resolve to git repo root if possible
+    resolved_root = _resolve_workspace_root(workspace_root)
+
+    # Find beads database
+    db_path = _find_beads_db(resolved_root)
+
+    if db_path is None:
+        return (
+            f"Warning: No .beads/beads.db found in or above {resolved_root}. "
+            "You may need to run 'bd init' first. Context set but database not found."
+        )
+
+    # Update environment for bd_client
+    os.environ["BEADS_WORKING_DIR"] = resolved_root
+    os.environ["BEADS_DB"] = db_path
+    os.environ["BEADS_CONTEXT_SET"] = "1"
+
+    return (
+        f"Context set successfully:\n"
+        f"  Workspace root: {resolved_root}\n"
+        f"  Database: {db_path}"
+    )
+
+
+@mcp.tool(
+    name="where_am_i",
+    description="Show current workspace context and database path",
+)
+async def where_am_i() -> str:
+    """Show current workspace context for debugging."""
+    if not os.environ.get("BEADS_CONTEXT_SET"):
+        return (
+            "Context not set. Call set_context with your workspace root first.\n"
+            f"Current process CWD: {os.getcwd()}\n"
+            f"BEADS_WORKING_DIR env: {os.environ.get('BEADS_WORKING_DIR', 'NOT SET')}\n"
+            f"BEADS_DB env: {os.environ.get('BEADS_DB', 'NOT SET')}"
+        )
+
+    return (
+        f"Workspace root: {os.environ.get('BEADS_WORKING_DIR', 'NOT SET')}\n"
+        f"Database: {os.environ.get('BEADS_DB', 'NOT SET')}\n"
+        f"Actor: {os.environ.get('BEADS_ACTOR', 'NOT SET')}"
+    )


 # Register all tools
 @mcp.tool(name="ready", description="Find tasks that have no blockers and are ready to be worked on.")
 async def ready_work(
@@ -86,6 +223,7 @@ async def show_issue(issue_id: str) -> Issue:
     description="""Create a new issue (bug, feature, task, epic, or chore) with optional design,
     acceptance criteria, and dependencies.""",
 )
+@require_context
 async def create_issue(
     title: str,
     description: str = "",
@@ -120,6 +258,7 @@ async def create_issue(
     description="""Update an existing issue's status, priority, assignee, design notes,
     or acceptance criteria. Use this to claim work (set status=in_progress).""",
 )
+@require_context
 async def update_issue(
     issue_id: str,
     status: IssueStatus | None = None,
@@ -149,6 +288,7 @@ async def update_issue(
     name="close",
     description="Close (complete) an issue. Mark work as done when you've finished implementing/fixing it.",
 )
+@require_context
 async def close_issue(issue_id: str, reason: str = "Completed") -> list[Issue]:
     """Close (complete) an issue."""
     return await beads_close_issue(issue_id=issue_id, reason=reason)
@@ -158,6 +298,7 @@ async def close_issue(issue_id: str, reason: str = "Completed") -> list[Issue]:
     name="reopen",
     description="Reopen one or more closed issues. Sets status to 'open' and clears closed_at timestamp.",
 )
+@require_context
 async def reopen_issue(issue_ids: list[str], reason: str | None = None) -> list[Issue]:
     """Reopen one or more closed issues."""
     return await beads_reopen_issue(issue_ids=issue_ids, reason=reason)
@@ -168,6 +309,7 @@ async def reopen_issue(issue_ids: list[str], reason: str | None = None) -> list[
     description="""Add a dependency between issues. Types: blocks (hard blocker),
     related (soft link), parent-child (epic/subtask), discovered-from (found during work).""",
 )
+@require_context
 async def add_dependency(
     from_id: str,
     to_id: str,
@@ -204,6 +346,7 @@ async def blocked() -> list[BlockedIssue]:
     description="""Initialize bd in current directory. Creates .beads/ directory and
     database with optional custom prefix for issue IDs.""",
 )
+@require_context
 async def init(prefix: str | None = None) -> str:
     """Initialize bd in current directory."""
     return await beads_init(prefix=prefix)
@@ -70,16 +70,27 @@ async def mcp_client(bd_executable, temp_db, monkeypatch):

     # Reset client before test
     tools._client = None

+    # Reset context environment variables
+    os.environ.pop("BEADS_CONTEXT_SET", None)
+    os.environ.pop("BEADS_WORKING_DIR", None)
+    os.environ.pop("BEADS_DB", None)
+
     # Create a pre-configured client with explicit paths (bypasses config loading)
-    tools._client = BdClient(bd_path=bd_executable, beads_db=temp_db)
+    temp_dir = os.path.dirname(temp_db)
+    tools._client = BdClient(bd_path=bd_executable, beads_db=temp_db, working_dir=temp_dir)

     # Create test client
     async with Client(mcp) as client:
+        # Automatically set context for the tests
+        await client.call_tool("set_context", {"workspace_root": temp_dir})
         yield client

-    # Reset client after test
+    # Reset client and context after test
     tools._client = None
+    os.environ.pop("BEADS_CONTEXT_SET", None)
+    os.environ.pop("BEADS_WORKING_DIR", None)
+    os.environ.pop("BEADS_DB", None)


 @pytest.mark.asyncio
integrations/beads-mcp/uv.lock (generated, 2 lines changed)
@@ -48,7 +48,7 @@ wheels = [

 [[package]]
 name = "beads-mcp"
-version = "0.9.7"
+version = "0.9.9"
 source = { editable = "." }
 dependencies = [
     { name = "fastmcp" },
@@ -26,6 +26,7 @@ type Request struct {
 	Args      json.RawMessage `json:"args"`
 	Actor     string          `json:"actor,omitempty"`
 	RequestID string          `json:"request_id,omitempty"`
+	Cwd       string          `json:"cwd,omitempty"` // Working directory for database discovery
 }

 // Response represents an RPC response from daemon to client
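With the new `Cwd` field, a client such as the Python daemon client can tag each request with its working directory. A toy round-trip, assuming newline-delimited JSON framing and an `op` field name (both assumptions; only `args`, `actor`, `request_id`, and `cwd` are confirmed by the struct's JSON tags), using a socketpair in place of the daemon's Unix socket:

```python
import json
import socket
import threading

def toy_daemon(conn: socket.socket) -> None:
    # Read one newline-delimited JSON request and echo back the routed cwd.
    with conn, conn.makefile("r") as reader:
        req = json.loads(reader.readline())
        resp = {"success": True, "data": {"routed_cwd": req.get("cwd", "")}}
        conn.sendall((json.dumps(resp) + "\n").encode())

# socketpair stands in for connecting to the daemon's Unix socket path
client, server = socket.socketpair()
t = threading.Thread(target=toy_daemon, args=(server,))
t.start()

request = {
    "op": "list",                          # operation field name is an assumption
    "args": {},
    "actor": "bd_daemon_client",
    "cwd": "/home/alice/projects/myrepo",  # drives per-request database routing
}
client.sendall((json.dumps(request) + "\n").encode())
with client, client.makefile("r") as reader:
    response = json.loads(reader.readline())
t.join()
```

Because `cwd` is optional (`omitempty`), older clients that omit it fall through to the daemon's default storage.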
@@ -13,23 +13,28 @@ import (
 	"syscall"

 	"github.com/steveyegge/beads/internal/storage"
+	"github.com/steveyegge/beads/internal/storage/sqlite"
 	"github.com/steveyegge/beads/internal/types"
 )

 // Server represents the RPC server that runs in the daemon
 type Server struct {
 	socketPath string
-	storage    storage.Storage
+	storage    storage.Storage // Default storage (for backward compat)
 	listener   net.Listener
-	mu         sync.Mutex
+	mu         sync.RWMutex
 	shutdown   bool
+
+	// Per-request storage routing
+	storageCache map[string]storage.Storage // path -> storage
+	cacheMu      sync.RWMutex
 }

 // NewServer creates a new RPC server
 func NewServer(socketPath string, store storage.Storage) *Server {
 	return &Server{
 		socketPath:   socketPath,
 		storage:      store,
+		storageCache: make(map[string]storage.Storage),
 	}
 }
@@ -247,6 +252,14 @@ func (s *Server) handleCreate(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	var design, acceptance, assignee *string
 	if createArgs.Design != "" {
 		design = &createArgs.Design
@@ -271,7 +284,7 @@ func (s *Server) handleCreate(req *Request) Response {
 	}

 	ctx := s.reqCtx(req)
-	if err := s.storage.CreateIssue(ctx, issue, s.reqActor(req)); err != nil {
+	if err := store.CreateIssue(ctx, issue, s.reqActor(req)); err != nil {
 		return Response{
 			Success: false,
 			Error:   fmt.Sprintf("failed to create issue: %v", err),
@@ -294,20 +307,28 @@ func (s *Server) handleUpdate(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	ctx := s.reqCtx(req)
 	updates := updatesFromArgs(updateArgs)
 	if len(updates) == 0 {
 		return Response{Success: true}
 	}

-	if err := s.storage.UpdateIssue(ctx, updateArgs.ID, updates, s.reqActor(req)); err != nil {
+	if err := store.UpdateIssue(ctx, updateArgs.ID, updates, s.reqActor(req)); err != nil {
 		return Response{
 			Success: false,
 			Error:   fmt.Sprintf("failed to update issue: %v", err),
 		}
 	}

-	issue, err := s.storage.GetIssue(ctx, updateArgs.ID)
+	issue, err := store.GetIssue(ctx, updateArgs.ID)
 	if err != nil {
 		return Response{
 			Success: false,
@@ -331,15 +352,23 @@ func (s *Server) handleClose(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	ctx := s.reqCtx(req)
-	if err := s.storage.CloseIssue(ctx, closeArgs.ID, closeArgs.Reason, s.reqActor(req)); err != nil {
+	if err := store.CloseIssue(ctx, closeArgs.ID, closeArgs.Reason, s.reqActor(req)); err != nil {
 		return Response{
 			Success: false,
 			Error:   fmt.Sprintf("failed to close issue: %v", err),
 		}
 	}

-	issue, _ := s.storage.GetIssue(ctx, closeArgs.ID)
+	issue, _ := store.GetIssue(ctx, closeArgs.ID)
 	data, _ := json.Marshal(issue)
 	return Response{
 		Success: true,
@@ -356,6 +385,14 @@ func (s *Server) handleList(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	filter := types.IssueFilter{
 		Limit: listArgs.Limit,
 	}
@@ -375,7 +412,7 @@ func (s *Server) handleList(req *Request) Response {
 	}

 	ctx := s.reqCtx(req)
-	issues, err := s.storage.SearchIssues(ctx, listArgs.Query, filter)
+	issues, err := store.SearchIssues(ctx, listArgs.Query, filter)
 	if err != nil {
 		return Response{
 			Success: false,
@@ -399,8 +436,16 @@ func (s *Server) handleShow(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	ctx := s.reqCtx(req)
-	issue, err := s.storage.GetIssue(ctx, showArgs.ID)
+	issue, err := store.GetIssue(ctx, showArgs.ID)
 	if err != nil {
 		return Response{
 			Success: false,
@@ -424,6 +469,14 @@ func (s *Server) handleReady(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	wf := types.WorkFilter{
 		Status:   types.StatusOpen,
 		Priority: readyArgs.Priority,
@@ -434,7 +487,7 @@ func (s *Server) handleReady(req *Request) Response {
 	}

 	ctx := s.reqCtx(req)
-	issues, err := s.storage.GetReadyWork(ctx, wf)
+	issues, err := store.GetReadyWork(ctx, wf)
 	if err != nil {
 		return Response{
 			Success: false,
@@ -450,8 +503,16 @@ func (s *Server) handleReady(req *Request) Response {
 	}
 }

 func (s *Server) handleStats(req *Request) Response {
+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	ctx := s.reqCtx(req)
-	stats, err := s.storage.GetStatistics(ctx)
+	stats, err := store.GetStatistics(ctx)
 	if err != nil {
 		return Response{
 			Success: false,
@@ -475,6 +536,14 @@ func (s *Server) handleDepAdd(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	dep := &types.Dependency{
 		IssueID:     depArgs.FromID,
 		DependsOnID: depArgs.ToID,
@@ -482,7 +551,7 @@ func (s *Server) handleDepAdd(req *Request) Response {
 	}

 	ctx := s.reqCtx(req)
-	if err := s.storage.AddDependency(ctx, dep, s.reqActor(req)); err != nil {
+	if err := store.AddDependency(ctx, dep, s.reqActor(req)); err != nil {
 		return Response{
 			Success: false,
 			Error:   fmt.Sprintf("failed to add dependency: %v", err),
@@ -501,8 +570,16 @@ func (s *Server) handleDepRemove(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	ctx := s.reqCtx(req)
-	if err := s.storage.RemoveDependency(ctx, depArgs.FromID, depArgs.ToID, s.reqActor(req)); err != nil {
+	if err := store.RemoveDependency(ctx, depArgs.FromID, depArgs.ToID, s.reqActor(req)); err != nil {
 		return Response{
 			Success: false,
 			Error:   fmt.Sprintf("failed to remove dependency: %v", err),
@@ -521,8 +598,16 @@ func (s *Server) handleLabelAdd(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	ctx := s.reqCtx(req)
-	if err := s.storage.AddLabel(ctx, labelArgs.ID, labelArgs.Label, s.reqActor(req)); err != nil {
+	if err := store.AddLabel(ctx, labelArgs.ID, labelArgs.Label, s.reqActor(req)); err != nil {
 		return Response{
 			Success: false,
 			Error:   fmt.Sprintf("failed to add label: %v", err),
@@ -541,8 +626,16 @@ func (s *Server) handleLabelRemove(req *Request) Response {
 		}
 	}

+	store, err := s.getStorageForRequest(req)
+	if err != nil {
+		return Response{
+			Success: false,
+			Error:   fmt.Sprintf("storage error: %v", err),
+		}
+	}
+
 	ctx := s.reqCtx(req)
-	if err := s.storage.RemoveLabel(ctx, labelArgs.ID, labelArgs.Label, s.reqActor(req)); err != nil {
+	if err := store.RemoveLabel(ctx, labelArgs.ID, labelArgs.Label, s.reqActor(req)); err != nil {
 		return Response{
 			Success: false,
 			Error:   fmt.Sprintf("failed to remove label: %v", err),
@@ -569,6 +662,7 @@ func (s *Server) handleBatch(req *Request) Response {
 			Args:      op.Args,
 			Actor:     req.Actor,
 			RequestID: req.RequestID,
+			Cwd:       req.Cwd, // Pass through context
 		}

 		resp := s.handleRequest(subReq)
@@ -593,6 +687,73 @@ func (s *Server) handleBatch(req *Request) Response {
 	}
 }

+// getStorageForRequest returns the appropriate storage for the request.
+// If req.Cwd is set, it finds the database for that directory;
+// otherwise, it uses the default storage.
+func (s *Server) getStorageForRequest(req *Request) (storage.Storage, error) {
+	// If no cwd specified, use default storage
+	if req.Cwd == "" {
+		return s.storage, nil
+	}
+
+	// Check cache first
+	s.cacheMu.RLock()
+	cached, ok := s.storageCache[req.Cwd]
+	s.cacheMu.RUnlock()
+	if ok {
+		return cached, nil
+	}
+
+	// Find database for this cwd
+	dbPath := s.findDatabaseForCwd(req.Cwd)
+	if dbPath == "" {
+		return nil, fmt.Errorf("no .beads database found for path: %s", req.Cwd)
+	}
+
+	// Open storage
+	store, err := sqlite.New(dbPath)
+	if err != nil {
+		return nil, fmt.Errorf("failed to open database at %s: %w", dbPath, err)
+	}
+
+	// Cache it
+	s.cacheMu.Lock()
+	s.storageCache[req.Cwd] = store
+	s.cacheMu.Unlock()
+
+	return store, nil
+}
+
+// findDatabaseForCwd walks up from cwd to find .beads/*.db
+func (s *Server) findDatabaseForCwd(cwd string) string {
+	dir, err := filepath.Abs(cwd)
+	if err != nil {
+		return ""
+	}
+
+	// Walk up directory tree
+	for {
+		beadsDir := filepath.Join(dir, ".beads")
+		if info, err := os.Stat(beadsDir); err == nil && info.IsDir() {
+			// Found .beads/ directory, look for *.db files
+			matches, err := filepath.Glob(filepath.Join(beadsDir, "*.db"))
+			if err == nil && len(matches) > 0 {
+				return matches[0]
+			}
+		}
+
+		// Move up one directory
+		parent := filepath.Dir(dir)
+		if parent == dir {
+			// Reached filesystem root
+			break
+		}
+		dir = parent
+	}
+
+	return ""
+}
+
 func (s *Server) writeResponse(writer *bufio.Writer, resp Response) {
 	data, _ := json.Marshal(resp)
 	writer.Write(data)