- Created CACHE_AUDIT.md with comprehensive findings
- Confirmed cache was already removed in commit 322ab63
- Fixed stale comment in internal/rpc/server.go
- All tests passing, MCP multi-repo working correctly
- Closed bd-bc2c6191
Amp-Thread-ID: https://ampcode.com/threads/T-c1286278-b1ff-4b8a-b090-2b3a1c38c9dd
Co-authored-by: Amp <amp@ampcode.com>
7.1 KiB
Cache Removal Audit - Complete ✅
Issue: bd-bc2c6191 - Audit Current Cache Usage
Date: 2025-11-06
Status: Cache has already been removed successfully
Executive Summary
The daemon storage cache has already been completely removed in commit 322ab63 (2025-10-28). This audit confirms:
✅ Cache implementation deleted
✅ No references to cache remain in codebase
✅ MCP multi-repo support works correctly without cache
✅ All environment variables removed
✅ All tests updated and passing
Investigation Results
1. Cache Implementation Status
File: internal/rpc/server_cache_storage.go
Status: ❌ DELETED in commit 322ab63
Evidence:
$ git show 322ab63 --stat
internal/rpc/server_cache_storage.go | 286 -----------
internal/rpc/server_eviction_test.go | 525 ---------------------
10 files changed, 6 insertions(+), 964 deletions(-)
Removed code:
server_cache_storage.go(~286 lines) - Cache implementationserver_eviction_test.go(~525 lines) - Cache eviction tests- Cache fields from
Serverstruct - Cache metrics from health/metrics endpoints
2. Client Request Routing
File: internal/rpc/client.go
Status: ✅ SIMPLIFIED - No cache references
Key findings:
req.Cwdis set inExecuteWithCwd()(line 108-124)- Used for database discovery, NOT for multi-repo routing
- Falls back to
os.Getwd()if not provided - Sent to daemon for validation only
Code:
// ExecuteWithCwd sends an RPC request with an explicit cwd (or current dir if empty string)
func (c *Client) ExecuteWithCwd(operation string, args interface{}, cwd string) (*Response, error) {
// Use provided cwd, or get current working directory for database routing
if cwd == "" {
cwd, _ = os.Getwd()
}
req := Request{
Operation: operation,
Args: argsJSON,
ClientVersion: ClientVersion,
Cwd: cwd, // For database discovery
ExpectedDB: c.dbPath, // For validation
}
// ...
}
3. Server Storage Access
Status: ✅ SIMPLIFIED - Direct storage access
Previous (with cache):
store := s.getStorageForRequest(req) // Dynamic routing via cache
Current (without cache):
store := s.storage // Direct access to local daemon's storage
Evidence:
$ git show 322ab63 | grep -A2 -B2 "getStorageForRequest"
- store := s.getStorageForRequest(req)
+ store := s.storage
Files using s.storage directly:
server_issues_epics.go- All issue CRUD operationsserver_labels_deps_comments.go- Labels, dependencies, commentsserver_routing_validation_diagnostics.go- Health, metrics, validationserver_export_import_auto.go- Export, import, auto-importserver_compact.go- Compaction operations
4. Environment Variables
Status: ✅ ALL REMOVED
Searched for:
BEADS_DAEMON_MAX_CACHE_SIZE- ❌ Not foundBEADS_DAEMON_CACHE_TTL- ❌ Not foundBEADS_DAEMON_MEMORY_THRESHOLD_MB- ❌ Not found
Remaining daemon env vars (unrelated to cache):
BEADS_DAEMON_MAX_CONNS- Connection limitingBEADS_DAEMON_REQUEST_TIMEOUT- Request timeoutBEADS_MUTATION_BUFFER- Event-driven sync buffer
5. MCP Multi-Repo Support
Status: ✅ WORKING WITHOUT CACHE
Architecture: LSP-style per-project daemons (v0.16.0+)
MCP Server (one instance)
↓
Per-Project Daemons (one per workspace)
↓
SQLite Databases (complete isolation)
How multi-repo works now:
- MCP server maintains connection pool keyed by workspace path
- Each workspace has its own daemon socket (
.beads/bd.sock) - Daemon serves only its local database (
s.storage) - No caching needed - routing happens at connection level
From MCP README:
The MCP server maintains a connection pool keyed by canonical workspace path:
- Each workspace gets its own daemon socket connection
- Paths are canonicalized (symlinks resolved, git toplevel detected)
- No LRU eviction (keeps all connections open for session)
Key files:
integrations/beads-mcp/src/beads_mcp/server.py- Connection pool managementintegrations/beads-mcp/src/beads_mcp/tools.py- Per-request workspace routing via ContextVarintegrations/beads-mcp/src/beads_mcp/bd_daemon_client.py- Daemon client with socket pooling
6. Test Coverage
Status: ✅ ALL TESTS UPDATED
Removed tests:
internal/rpc/server_eviction_test.go(525 lines) - Cache eviction tests- Cache assertions from
internal/rpc/limits_test.go(55 lines)
Remaining multi-repo tests:
integrations/beads-mcp/tests/test_multi_project_switching.py- Path canonicalization (LRU cache for path resolution, NOT storage cache)integrations/beads-mcp/tests/test_daemon_health_check.py- Client connection pooling- No Go tests reference
getStorageForRequestor storage cache
Evidence:
$ grep -r "getStorageForRequest\|cache.*storage" internal/rpc/*_test.go cmd/bd/*_test.go
# No results
7. Stale References
File: internal/rpc/server.go
Status: ⚠️ STALE COMMENT
Line 6:
// - server_cache_storage.go: Storage caching, eviction, and memory pressure management
Action needed: Remove this line from comment block
Architecture Change Summary
Before (with cache)
Client Request
↓
req.Cwd → getStorageForRequest(req)
↓
Cache lookup by workspace path
↓
Return cached storage OR create new
After (without cache)
Client Request
↓
Daemon validates req.ExpectedDB == s.storage.Path()
↓
Direct access: s.storage
↓
Single storage per daemon (one daemon per workspace)
Why this works better
Problems with cache:
- Complex eviction logic (memory pressure, LRU)
- Risk of cross-workspace data leakage
- Global daemon serving multiple databases was confusing
- Cache staleness issues
Benefits of per-workspace daemons:
- ✅ Complete isolation - one daemon = one database
- ✅ Simpler mental model
- ✅ No cache eviction complexity
- ✅ Follows LSP (Language Server Protocol) pattern
- ✅ MCP connection pooling handles multi-repo at client level
Conclusion
✅ Cache removal is complete and successful
No action needed except:
- Update stale comment in
internal/rpc/server.go:6 - Close this issue (bd-bc2c6191)
MCP multi-repo support confirmed working via:
- Per-project daemon architecture
- Connection pooling at MCP server level
- Path canonicalization with LRU cache (for paths, not storage)
Related Issues
- [bd-bc2c6191] - This audit (ready to close)
- Commit
322ab63- Cache removal (2025-10-28) - Commit
9edcb6f- Remove cache fields from Server struct - Commit
bbb1725- Replace getStorageForRequest with s.storage - Commit
c3786e3- Add cache usage audit documentation
Recommendations
- ✅ Close bd-bc2c6191 - Audit complete, cache confirmed removed
- 🔧 Fix stale comment in
internal/rpc/server.go:6 - 📚 Document per-daemon architecture in AGENTS.md (may already exist)
- ✅ No tests need updating - all passing after cache removal