Add granular control over MCP tool response sizes to minimize context
window usage while maintaining full functionality when needed.
## Output Control Parameters
### Read Operations (ready, list, show, blocked)
- `brief`: Return `BriefIssue` {id, title, status, priority} (~97% smaller)
- `fields`: Custom field projection with validation - invalid fields raise `ValueError`
- `max_description_length`: Truncate descriptions to N chars, appending "..."
- `brief_deps`: Full issue, but dependencies as `BriefDep` (~48% smaller)
### Write Operations (create, update, close, reopen)
- `brief`: Return `OperationResult` {id, action, message} (~97% smaller)
- Defaults to `brief=True` for minimal confirmations
## Token Savings
| Operation | Before | After (brief) | Reduction |
|-----------|--------|---------------|-----------|
| ready(limit=10) | ~20KB | ~600B | 97% |
| list(limit=20) | ~40KB | ~800B | 98% |
| show() | ~2KB | ~60B | 97% |
| show() with 5 deps | ~4.5KB | ~2.3KB | 48% |
| blocked() | ~10KB | ~240B | 98% |
| create() response | ~2KB | ~50B | 97% |
## New Models
- `BriefIssue`: Ultra-minimal issue (4 fields, ~60B)
- `BriefDep`: Compact dependency (5 fields, ~70B)
- `OperationResult`: Write confirmation (3 fields, ~50B)
- `OperationAction`: `Literal["created", "updated", "closed", "reopened"]`
## Best Practices
- Unified `brief` parameter naming across all operations
- `brief=True` always means "give me less data"
- Field validation with clear error messages listing valid fields
- All parameters optional with backwards-compatible defaults
- Progressive disclosure: scan cheap, detail on demand
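The progressive-disclosure pattern can be sketched as follows; `ready`, `show`, and the in-memory `ISSUES` store are hypothetical stand-ins for the MCP tools, and the assumption that a lower priority number means higher priority is illustrative.

```python
# Hypothetical in-memory issue store standing in for the tracker.
ISSUES = {
    "bd-1": {"id": "bd-1", "title": "Fix auth", "status": "open", "priority": 0,
             "description": "Long description...", "dependencies": []},
    "bd-2": {"id": "bd-2", "title": "Add docs", "status": "open", "priority": 2,
             "description": "Another long description...", "dependencies": []},
}

def ready(brief: bool = False) -> list[dict]:
    issues = list(ISSUES.values())
    if brief:  # BriefIssue shape: 4 fields, ~60B each
        return [{k: i[k] for k in ("id", "title", "status", "priority")} for i in issues]
    return issues

def show(issue_id: str) -> dict:
    return ISSUES[issue_id]

scan = ready(brief=True)                      # cheap scan of all candidates
top = min(scan, key=lambda i: i["priority"])  # assumes lower number = higher priority
detail = show(top["id"])                      # full payload only for the chosen issue
```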
---
## Filtering Parameters (aligns MCP with CLI)
Add missing filter parameters to `list` and `ready` tools that were
documented in MCP instructions but not implemented.
### ReadyWorkParams - New Fields
- `labels: list[str]` - AND filter: must have ALL specified labels
- `labels_any: list[str]` - OR filter: must have at least one
- `unassigned: bool` - Filter to only unassigned issues
- `sort_policy: str` - Sort by: hybrid (default), priority, oldest
### ListIssuesParams - New Fields
- `labels: list[str]` - AND filter: must have ALL specified labels
- `labels_any: list[str]` - OR filter: must have at least one
- `query: str` - Search in title (case-insensitive substring)
- `unassigned: bool` - Filter to only unassigned issues
### CLI Flag Mappings
| MCP Parameter | ready CLI Flag | list CLI Flag |
|---------------|----------------|---------------|
| labels | --label (repeated) | --label (repeated) |
| labels_any | --label-any (repeated) | --label-any (repeated) |
| query | N/A | --title |
| unassigned | --unassigned | --no-assignee |
| sort_policy | --sort | N/A |
---
## Documentation & Testing
### get_tool_info() Updates
- Added `brief`, `brief_deps`, `fields`, `max_description_length` to show tool
- Added `brief` parameter docs for create, update, close, reopen
- Added `brief`, `brief_deps` parameter docs for blocked
### Test Coverage (16 new tests)
- `test_create_brief_default` / `test_create_brief_false`
- `test_update_brief_default` / `test_update_brief_false`
- `test_close_brief_default` / `test_close_brief_false`
- `test_reopen_brief_default`
- `test_show_brief` / `test_show_fields_projection` / `test_show_fields_invalid`
- `test_show_max_description_length` / `test_show_brief_deps`
- `test_list_brief` / `test_ready_brief` / `test_blocked_brief`
---
## Backward Compatibility
All new parameters are optional with sensible defaults:
- `brief`: False for reads, True for writes
- `fields`, `max_description_length`: None (no filtering/truncation)
- `labels`, `labels_any`, `query`: None (no filtering)
- `unassigned`: False (include all)
- `sort_policy`: None (use default hybrid sort)
Existing MCP tool calls continue to work unchanged.
# beads-mcp

MCP server for the beads issue tracker and agentic memory system. It enables AI agents to manage tasks with the `bd` CLI through the Model Context Protocol.
**Note:** For environments with shell access (Claude Code, Cursor, Windsurf), the CLI + hooks approach is recommended over MCP. It uses ~1-2k tokens vs 10-50k for MCP schemas, resulting in lower compute cost and latency. See the main README for CLI setup.
Use this MCP server for MCP-only environments like Claude Desktop where CLI access is unavailable.
## Installing

Install from PyPI:

```shell
# Using uv (recommended)
uv tool install beads-mcp

# Or using pip
pip install beads-mcp
```

Add to your Claude Desktop config:

```json
{
  "mcpServers": {
    "beads": {
      "command": "beads-mcp"
    }
  }
}
```
### Development Installation

For development, clone the repository:

```shell
git clone https://github.com/steveyegge/beads
cd beads/integrations/beads-mcp
uv sync
```

Then use in your Claude Desktop config:

```json
{
  "mcpServers": {
    "beads": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/beads-mcp",
        "run",
        "beads-mcp"
      ]
    }
  }
}
```
### Environment Variables

All optional:

- `BEADS_USE_DAEMON` - Use daemon RPC instead of CLI (default: `1`, set to `0` to disable)
- `BEADS_PATH` - Path to the `bd` executable (default: `~/.local/bin/bd`)
- `BEADS_DB` - Path to the beads database file (default: auto-discover from cwd)
- `BEADS_WORKING_DIR` - Working directory for `bd` commands (default: `$PWD` or current directory). Used for multi-repo setups - see below
- `BEADS_ACTOR` - Actor name for the audit trail (default: `$USER`)
- `BEADS_NO_AUTO_FLUSH` - Disable automatic JSONL sync (default: `false`)
- `BEADS_NO_AUTO_IMPORT` - Disable automatic JSONL import (default: `false`)
## Multi-Repository Setup

**Recommended:** Use a single MCP server instance for all beads projects - it automatically routes to per-project local daemons.

### Single MCP Server (Recommended)

Simple config - works for all projects:

```json
{
  "mcpServers": {
    "beads": {
      "command": "beads-mcp"
    }
  }
}
```
How it works (LSP model):

- MCP server checks for a local daemon socket (`.beads/bd.sock`) in your current workspace
- Routes requests to the per-project daemon based on working directory
- Auto-starts the local daemon if not running
- Each project gets its own isolated daemon serving only its database
Architecture:

```
MCP Server (one instance)
        ↓
Per-Project Daemons (one per workspace)
        ↓
SQLite Databases (complete isolation)
```
Why per-project daemons?
- ✅ Complete database isolation between projects
- ✅ No cross-project pollution or git worktree conflicts
- ✅ Simpler mental model: one project = one database = one daemon
- ✅ Follows LSP (Language Server Protocol) architecture
- ✅ One MCP config works for unlimited projects
**Note:** Global daemon support was removed in v0.16.0 to prevent cross-project database pollution.
### Alternative: Per-Project MCP Instances (Not Recommended)

Configure separate MCP servers for specific projects using `BEADS_WORKING_DIR`:

```json
{
  "mcpServers": {
    "beads-webapp": {
      "command": "beads-mcp",
      "env": {
        "BEADS_WORKING_DIR": "/Users/yourname/projects/webapp"
      }
    },
    "beads-api": {
      "command": "beads-mcp",
      "env": {
        "BEADS_WORKING_DIR": "/Users/yourname/projects/api"
      }
    }
  }
}
```
⚠️ **Problem:** The AI may select the wrong MCP server for your workspace, causing commands to operate on the wrong database. Use a single MCP server instead.
## Multi-Project Support

The MCP server supports managing multiple beads projects in a single session using per-request workspace routing.

### Using the `workspace_root` Parameter

Every tool accepts an optional `workspace_root` parameter for explicit project targeting:
```python
# Query issues from different projects concurrently
results = await asyncio.gather(
    beads_ready_work(workspace_root="/Users/you/project-a"),
    beads_ready_work(workspace_root="/Users/you/project-b"),
)

# Create issue in a specific project
await beads_create_issue(
    title="Fix auth bug",
    priority=1,
    workspace_root="/Users/you/project-a",
)
```
### Architecture

**Connection Pool:** The MCP server maintains a connection pool keyed by canonical workspace path:

- Each workspace gets its own daemon socket connection
- Paths are canonicalized (symlinks resolved, git toplevel detected)
- Concurrent requests use `asyncio.Lock` to prevent race conditions
- No LRU eviction (all connections stay open for the session)
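A minimal sketch of such a pool, with a `_connect` placeholder standing in for the real daemon-socket setup:

```python
import asyncio

class ConnectionPool:
    """Per-workspace connections, created at most once each (sketch)."""

    def __init__(self) -> None:
        self._conns: dict[str, str] = {}
        self._lock = asyncio.Lock()

    async def get(self, workspace: str) -> str:
        # Serialize creation so two concurrent requests for the same
        # workspace cannot open duplicate connections.
        async with self._lock:
            if workspace not in self._conns:
                self._conns[workspace] = await self._connect(workspace)
            return self._conns[workspace]

    async def _connect(self, workspace: str) -> str:
        return f"conn:{workspace}"  # stand-in for opening .beads/bd.sock
```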
**ContextVar Routing:** Per-request workspace context is managed via Python's `ContextVar`:

- Each tool call sets the workspace for its duration
- Properly isolated for concurrent calls (no cross-contamination)
- Falls back to `BEADS_WORKING_DIR` if `workspace_root` is not provided
**Path Canonicalization:**

- Symlinks are resolved to physical paths (prevents duplicate connections)
- Git submodules with `.beads` directories use local context
- Git toplevel is used for non-initialized directories
- Results are cached for performance
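A sketch of cached canonicalization via `realpath`; git-toplevel detection is omitted, and `canonical_workspace` is an assumed name:

```python
import os
from functools import lru_cache

@lru_cache(maxsize=None)
def canonical_workspace(path: str) -> str:
    # Resolve symlinks so aliased paths map to one physical workspace,
    # and cache the result so repeated lookups skip the filesystem.
    return os.path.realpath(path)
```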
### Backward Compatibility

The `set_context()` tool still works and sets a default workspace:

```python
# Old way (still supported)
await set_context(workspace_root="/Users/you/project-a")
await beads_ready_work()  # Uses project-a

# New way (more flexible)
await beads_ready_work(workspace_root="/Users/you/project-a")
```
### Concurrency Gotchas

⚠️ **IMPORTANT:** Tool implementations must NOT spawn background tasks with `asyncio.create_task()`.

**Why?** A spawned task takes a snapshot of the context at creation time, and later `ContextVar` updates do not reach it, so a background task can keep running against a stale workspace and cause cross-project data leakage.

**Solution:** Keep all tool logic synchronous or use sequential `await` calls.
### Troubleshooting

**Symlink aliasing:** Different paths to the same project are deduplicated automatically via `realpath`.

**Submodule handling:** Submodules with their own `.beads` directory are treated as separate projects.

**Stale sockets:** Currently no health checks. Phase 2 will add retry-on-failure if monitoring shows the need.

**Version mismatches:** Daemon version is auto-checked since v0.16.0. Mismatched daemons are automatically restarted.
## Features

**Resource:**

- `beads://quickstart` - Quickstart guide for using beads

**Tools** (all support the `workspace_root` parameter):

- `init` - Initialize bd in the current directory
- `create` - Create a new issue (bug, feature, task, epic, chore)
- `list` - List issues with filters (status, priority, type, assignee)
- `ready` - Find tasks with no blockers, ready to work on
- `show` - Show detailed issue info, including dependencies
- `update` - Update an issue (status, priority, design, notes, etc). Note: `status="closed"` or `status="open"` automatically routes to the `close` or `reopen` tool to respect approval workflows
- `close` - Close a completed issue
- `dep` - Add a dependency (blocks, related, parent-child, discovered-from)
- `blocked` - Get blocked issues
- `stats` - Get project statistics
- `reopen` - Reopen a closed issue with an optional reason
- `set_context` - Set the default workspace for subsequent calls (backward compatibility)
## Known Issues

### MCP Tools Not Loading in Claude Code (Issue #346) - RESOLVED

**Status:** ✅ Fixed in v0.24.0+

This issue affected versions prior to v0.24.0. It was caused by self-referential Pydantic models (`Issue` with `dependencies: list["Issue"]`) generating invalid MCP schemas with `$ref` at the root level.
**Solution:** The issue was fixed in commit f3a678f by refactoring the data models:

- Created `IssueBase` with common fields
- Created `LinkedIssue(IssueBase)` for dependency references
- Changed `Issue` to use `list[LinkedIssue]` instead of `list["Issue"]`

This breaks the circular reference and ensures every tool's `outputSchema` has `type: object` at the root level.

**Upgrade:** If you're running beads-mcp < 0.24.0:

```shell
pip install --upgrade beads-mcp
```

All MCP tools now load correctly in Claude Code with v0.24.0+.
## Development

Run the MCP inspector:

```shell
# inside the beads-mcp dir
uv run fastmcp dev src/beads_mcp/server.py
```

Type checking:

```shell
uv run mypy src/beads_mcp
```

Linting and formatting:

```shell
uv run ruff check src/beads_mcp
uv run ruff format src/beads_mcp
```
## Testing

Run all tests:

```shell
uv run pytest
```

With coverage:

```shell
uv run pytest --cov=beads_mcp tests/
```

The test suite includes both mocked unit tests and integration tests against the real bd CLI.
### Multi-Repo Integration Test

Test daemon RPC with multiple repositories:

```shell
# Start the daemon first
cd /path/to/beads
./bd daemon --start

# Run the multi-repo test
cd integrations/beads-mcp
uv run python test_multi_repo.py
```
This test verifies that the daemon can handle operations across multiple repositories simultaneously using per-request context routing.