feat: Add bd repos multi-repo commands and fix bd ready for in_progress issues

- Add 'bd repos' command for multi-repository management (bd-123)
  - bd repos list: show all cached repositories
  - bd repos ready: aggregate ready work across repos
  - bd repos stats: combined statistics across repos
  - bd repos clear-cache: clear repository cache
  - Requires global daemon (bd daemon --global)
- Fix bd ready to show in_progress issues (bd-165)
  - bd ready now shows both 'open' and 'in_progress' issues with no blockers
  - Allows epics/tasks that are ready to close to appear in ready work
  - Critical P0 bug fix for workflow
- Apply code review improvements to repos implementation
  - Use strongly typed RPC responses (remove interface{})
  - Fix clear-cache lock handling (close connections outside lock)
  - Add error collection for per-repo failures
  - Add context timeouts (1-2s) to prevent hangs
  - Add lock strategy comments
- Update documentation (README.md, AGENTS.md)
- Add comprehensive tests for both features

Amp-Thread-ID: https://ampcode.com/threads/T-1de989a1-1890-492c-9847-a34144259e0f
Co-authored-by: Amp <amp@ampcode.com>
@@ -25,7 +25,7 @@
{"id":"bd-120","title":"Fix nil pointer crash in bd export command","description":"When running `bd export -o .beads/issues.jsonl`, the command crashes with a nil pointer dereference.\n\n## Error\n```\npanic: runtime error: invalid memory address or nil pointer dereference\n[signal SIGSEGV: segmentation violation code=0x2 addr=0x108 pc=0x1034456fc]\n\ngoroutine 1 [running]:\nmain.init.func14(0x103c24380, {0x1034a9695?, 0x4?, 0x1034a95c9?})\n /Users/stevey/src/vc/adar/beads/cmd/bd/export.go:74 +0x15c\n```\n\n## Context\n- This happened after closing bd-105, bd-114, bd-115\n- Auto-export from daemon still works fine\n- Only the manual `bd export` command crashes\n- Data was already synced via auto-export, so no data loss\n\n## Location\nFile: `cmd/bd/export.go` line 74","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-10-17T17:34:05.014619-07:00","updated_at":"2025-10-17T17:35:41.414218-07:00","closed_at":"2025-10-17T17:35:41.414218-07:00"}
{"id":"bd-121","title":"Add --global flag to daemon for multi-repo support","description":"Currently daemon creates socket at .beads/bd.sock in each repo. For multi-repo support, add --global flag to create socket in ~/.beads/bd.sock that can serve requests from any repository.\n\nImplementation:\n- Add --global flag to daemon command\n- When --global is set, use ~/.beads/bd.sock instead of ./.beads/bd.sock \n- Don't require being in a git repo when --global is used\n- Update daemon discovery logic to check ~/.beads/bd.sock as fallback\n- Document that global daemon can serve multiple repos simultaneously\n\nBenefits:\n- Single daemon serves all repos on the system\n- No need to start daemon per-repo\n- Better resource usage\n- Enables system-wide task tracking\n\nContext: Per-request context routing (bd-115) already implemented - daemon can handle multiple repos. This issue is about making the UX better.\n\nRelated: bd-73 (parent issue for multi-repo support)","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-10-17T20:43:47.080685-07:00","updated_at":"2025-10-17T22:45:42.411986-07:00","closed_at":"2025-10-17T22:45:42.411986-07:00","dependencies":[{"issue_id":"bd-121","depends_on_id":"bd-73","type":"parent-child","created_at":"2025-10-17T20:44:02.2335-07:00","created_by":"daemon"}]}
{"id":"bd-122","title":"Document multi-repo workflow with daemon","description":"The daemon already supports multi-repo via per-request context routing (bd-115), but this isn't documented. Users need to know how to use beads across multiple projects.\n\nAdd documentation for:\n1. How daemon serves multiple repos simultaneously\n2. Starting daemon in one repo, using from others\n3. MCP server multi-repo configuration\n4. Example: tracking work across a dozen projects\n5. Comparison to workspace/global instance approaches\n\nDocumentation locations:\n- README.md (Multi-repo section)\n- AGENTS.md (MCP multi-repo config)\n- integrations/beads-mcp/README.md (working_dir parameter)\n\nInclude:\n- Architecture diagram showing one daemon, many repos\n- Example MCP config with BEADS_WORKING_DIR\n- CLI workflow example\n- Reference to test_multi_repo.py as proof of concept\n\nContext: Feature already works (proven by test_multi_repo.py), just needs user-facing docs.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-17T20:43:48.91315-07:00","updated_at":"2025-10-17T22:49:32.514372-07:00","closed_at":"2025-10-17T22:49:32.514372-07:00","dependencies":[{"issue_id":"bd-122","depends_on_id":"bd-73","type":"parent-child","created_at":"2025-10-17T20:44:03.261924-07:00","created_by":"daemon"}]}
{"id":"bd-123","title":"Add 'bd repos' command for multi-repo aggregation","description":"When using daemon in multi-repo mode, users need commands to view/manage work across all active repositories.\n\nAdd 'bd repos' subcommand with:\n\n1. bd repos list\n - Show all repositories daemon has cached\n - Display: path, prefix, issue count, last activity\n - Example output:\n ~/src/project1 [p1-] 45 issues (active)\n ~/src/project2 [p2-] 12 issues (2m ago)\n\n2. bd repos ready --all \n - Aggregate ready work across all repos\n - Group by repo or show combined list\n - Support priority/assignee filters\n\n3. bd repos stats\n - Combined statistics across all repos\n - Total issues, breakdown by status/priority\n - Per-repo breakdown\n\n4. bd repos clear-cache\n - Close all cached storage connections\n - Useful for freeing resources\n\nImplementation notes:\n- Requires daemon to track active storage instances\n- May need RPC protocol additions for multi-repo queries\n- Should gracefully handle repos that no longer exist\n\nDepends on: Global daemon flag (makes this more useful)\n\nContext: This provides the UX layer on top of existing multi-repo support. The daemon can already serve multiple repos - this makes it easy to work with them.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-17T20:43:49.816998-07:00","updated_at":"2025-10-17T20:43:49.816998-07:00","dependencies":[{"issue_id":"bd-123","depends_on_id":"bd-73","type":"parent-child","created_at":"2025-10-17T20:44:04.407138-07:00","created_by":"daemon"},{"issue_id":"bd-123","depends_on_id":"bd-121","type":"blocks","created_at":"2025-10-17T20:44:13.681626-07:00","created_by":"daemon"}]}
{"id":"bd-123","title":"Add 'bd repos' command for multi-repo aggregation","description":"When using daemon in multi-repo mode, users need commands to view/manage work across all active repositories.\n\nAdd 'bd repos' subcommand with:\n\n1. bd repos list\n - Show all repositories daemon has cached\n - Display: path, prefix, issue count, last activity\n - Example output:\n ~/src/project1 [p1-] 45 issues (active)\n ~/src/project2 [p2-] 12 issues (2m ago)\n\n2. bd repos ready --all \n - Aggregate ready work across all repos\n - Group by repo or show combined list\n - Support priority/assignee filters\n\n3. bd repos stats\n - Combined statistics across all repos\n - Total issues, breakdown by status/priority\n - Per-repo breakdown\n\n4. bd repos clear-cache\n - Close all cached storage connections\n - Useful for freeing resources\n\nImplementation notes:\n- Requires daemon to track active storage instances\n- May need RPC protocol additions for multi-repo queries\n- Should gracefully handle repos that no longer exist\n\nDepends on: Global daemon flag (makes this more useful)\n\nContext: This provides the UX layer on top of existing multi-repo support. The daemon can already serve multiple repos - this makes it easy to work with them.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-17T20:43:49.816998-07:00","updated_at":"2025-10-18T00:04:42.197247-07:00","closed_at":"2025-10-18T00:04:42.197247-07:00","dependencies":[{"issue_id":"bd-123","depends_on_id":"bd-73","type":"parent-child","created_at":"2025-10-17T20:44:04.407138-07:00","created_by":"daemon"},{"issue_id":"bd-123","depends_on_id":"bd-121","type":"blocks","created_at":"2025-10-17T20:44:13.681626-07:00","created_by":"daemon"}]}
{"id":"bd-124","title":"Add daemon auto-start on first use","description":"Currently users must manually start daemon with 'bd daemon'. For better UX, auto-start daemon when first bd command is run.\n\nImplementation:\n\n1. In PersistentPreRun, check if daemon is running\n2. If not, check if auto-start is enabled (default: true)\n3. Start daemon with appropriate flags (--global if configured)\n4. Wait for socket to be ready (with timeout)\n5. Retry connection to newly-started daemon\n6. Silently fail back to direct mode if daemon won't start\n\nConfiguration:\n- BEADS_AUTO_START_DAEMON env var (default: true)\n- --no-auto-daemon flag to disable\n- Config file option: auto_start_daemon = true\n\nSafety considerations:\n- Don't auto-start if daemon failed recently (exponential backoff)\n- Log auto-start to daemon.log\n- Clear error messages if auto-start fails\n- Never auto-start if --no-daemon flag is set\n\nBenefits:\n- Zero-configuration experience\n- Daemon benefits (speed, multi-repo) automatic\n- Still supports direct mode as fallback\n\nDepends on: Global daemon flag would make this more useful","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-17T20:43:50.961453-07:00","updated_at":"2025-10-17T23:33:57.173903-07:00","closed_at":"2025-10-17T23:33:57.173903-07:00","dependencies":[{"issue_id":"bd-124","depends_on_id":"bd-73","type":"parent-child","created_at":"2025-10-17T20:44:05.502634-07:00","created_by":"daemon"},{"issue_id":"bd-124","depends_on_id":"bd-121","type":"blocks","created_at":"2025-10-17T20:44:14.987308-07:00","created_by":"daemon"}]}
{"id":"bd-125","title":"Add workspace config file for multi-repo management (optional enhancement)","description":"For users who want explicit control over multi-repo setup without daemon, add optional workspace config file.\n\nConfig file: ~/.beads/workspaces.toml\n\nExample:\n[workspaces]\ncurrent = \"global\"\n\n[workspace.global]\ndb = \"~/.beads/global.db\"\ndescription = \"System-wide tasks\"\n\n[workspace.project1] \ndb = \"~/src/project1/.beads/db.sqlite\"\ndescription = \"Main product\"\n\n[workspace.project2]\ndb = \"~/src/project2/.beads/db.sqlite\"\ndescription = \"Internal tools\"\n\nCommands:\nbd workspace list # Show all workspaces\nbd workspace add NAME PATH # Add workspace\nbd workspace remove NAME # Remove workspace \nbd workspace use NAME # Switch active workspace\nbd workspace current # Show current workspace\nbd --workspace NAME \u003ccommand\u003e # Override for single command\n\nImplementation:\n- Load config in PersistentPreRun\n- Override dbPath based on current workspace\n- Store workspace state in config file\n- Support both workspace config AND auto-discovery\n- Workspace config takes precedence over auto-discovery\n\nPriority rationale:\n- Priority 3 (low) because daemon approach already solves this\n- Only implement if users request explicit workspace management\n- Adds complexity vs daemon's automatic discovery\n\nAlternative: Users can use BEADS_DB env var for manual workspace switching today.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-10-17T20:43:52.348572-07:00","updated_at":"2025-10-17T20:43:52.348572-07:00","dependencies":[{"issue_id":"bd-125","depends_on_id":"bd-73","type":"parent-child","created_at":"2025-10-17T20:44:06.411344-07:00","created_by":"daemon"}]}
{"id":"bd-126","title":"Add cross-repo issue references (future enhancement)","description":"Support referencing issues across different beads repositories. Useful for tracking dependencies between separate projects.\n\nProposed syntax:\n- Local reference: bd-123 (current behavior)\n- Cross-repo by path: ~/src/other-project#bd-456\n- Cross-repo by workspace name: @project2:bd-789\n\nUse cases:\n1. Frontend project depends on backend API issue\n2. Shared library changes blocking multiple projects\n3. System administrator tracking work across machines\n4. Monorepo with separate beads databases per component\n\nImplementation challenges:\n- Storage layer needs to query external databases\n- Dependency resolution across repos\n- What if external repo not available?\n- How to handle in JSONL export/import?\n- Security: should repos be able to read others?\n\nDesign questions to resolve first:\n1. Read-only references vs full cross-repo dependencies?\n2. How to handle repo renames/moves?\n3. Absolute paths vs workspace names vs git remotes?\n4. Should bd-73 auto-discover related repos?\n\nRecommendation: \n- Gather user feedback first\n- Start with read-only references\n- Implement as plugin/extension?\n\nContext: This is mentioned in bd-73 as approach #2. Much more complex than daemon multi-repo approach. Only implement if there's strong user demand.\n\nPriority: Backlog (4) - wait for user feedback before designing","status":"open","priority":4,"issue_type":"feature","created_at":"2025-10-17T20:43:54.04594-07:00","updated_at":"2025-10-17T20:43:54.04594-07:00","dependencies":[{"issue_id":"bd-126","depends_on_id":"bd-73","type":"parent-child","created_at":"2025-10-17T20:44:07.576103-07:00","created_by":"daemon"}]}
@@ -51,6 +51,10 @@
{"id":"bd-160","title":"Global daemon should warn/reject --auto-commit and --auto-push","description":"When user runs 'bd daemon --global --auto-commit', it's unclear which repo the daemon will commit to (especially after fixing bd-122 where global daemon won't open a DB).\n\nOptions:\n1. Warn and ignore the flags in global mode\n2. Error out with clear message\n\nLine 87-91 already checks autoPush, but should skip check entirely for global mode. Add user-friendly messaging about flag incompatibility.","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-10-17T22:58:02.137987-07:00","updated_at":"2025-10-17T23:04:30.223432-07:00","closed_at":"2025-10-17T23:04:30.223432-07:00"}
{"id":"bd-163","title":"Test A","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-17T23:06:59.59343-07:00","updated_at":"2025-10-17T23:06:59.740704-07:00","closed_at":"2025-10-17T23:06:59.740704-07:00","dependencies":[{"issue_id":"bd-163","depends_on_id":"bd-164","type":"blocks","created_at":"2025-10-17T23:06:59.668292-07:00","created_by":"daemon"}]}
{"id":"bd-164","title":"Test B","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-17T23:06:59.626612-07:00","updated_at":"2025-10-17T23:06:59.744519-07:00","closed_at":"2025-10-17T23:06:59.744519-07:00"}
{"id":"bd-165","title":"bd ready doesn't show epics/tasks ready to close when all children complete","description":"The 'bd ready' command doesn't show epics that have all children complete as ready work. Example: vc-30 (epic) blocks 4 children - 3 are closed, 1 is in_progress. The epic itself should be reviewable/closable but doesn't show in ready work. Similarly, vc-61 (epic, in_progress) depends on vc-48 (closed) but doesn't show as ready. Expected: epics with all dependencies satisfied should show as ready to review/close. Actual: 'bd ready' returns 'no ready work' even though multiple epics are completable.","acceptance_criteria":"bd ready shows epics/tasks that have all dependencies satisfied (even if status is in_progress), allowing user to review and close them","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-10-18T00:04:41.811991-07:00","updated_at":"2025-10-18T00:20:36.188211-07:00","closed_at":"2025-10-18T00:20:36.188211-07:00"}
{"id":"bd-167","title":"CleanupStaleInstances() never called in production - orphaned claims accumulate","description":"The CleanupStaleInstances() method exists in storage layer but is never called in production code. This means dead executors leave orphaned claims that block work forever. Example: vc-106 claimed by executor that died 2 hours ago, still shows in_progress with execution_state record. Need to: 1) Add periodic cleanup to executor main loop (every 5 min?), 2) Make cleanup also release claimed issues (delete execution_state AND reset status to open), 3) Add comment explaining why released.","design":"Add background goroutine in executor that calls CleanupStaleInstances() every 5 minutes. When marking instance stopped, also query for all issues claimed by that instance and release them (delete execution_state, set status=open, add event comment).","acceptance_criteria":"Dead executors automatically release their claims within 5-10 minutes of going stale, issues return to open status and become available for re-execution","status":"open","priority":0,"issue_type":"bug","created_at":"2025-10-18T00:24:57.920072-07:00","updated_at":"2025-10-18T00:24:57.920072-07:00"}
{"id":"bd-168","title":"releaseIssueWithError() deletes execution_state but leaves status as in_progress","description":"When an executor hits an error and releases an issue via releaseIssueWithError(), it deletes the execution_state but leaves the issue status as in_progress. This means the issue drops out of ready work but has no active executor. Expected: releasing should reset status to open so the issue becomes available again. Current code in conversation.go just calls ReleaseIssue() which only deletes execution_state.","design":"Update releaseIssueWithError() to also update issue status back to open. Or create a new ReleaseAndReopen() method that does both atomically in a transaction.","acceptance_criteria":"Issues released due to errors automatically return to open status and show in bd ready","status":"open","priority":1,"issue_type":"bug","created_at":"2025-10-18T00:25:06.798843-07:00","updated_at":"2025-10-18T00:25:06.798843-07:00"}
{"id":"bd-169","title":"Add 'bd stale' command to show orphaned claims and dead executors","description":"Need visibility into orphaned claims - issues stuck in_progress with execution_state but executor is dead/stopped. Add command to show: 1) All issues with execution_state where executor status=stopped or last_heartbeat \u003e threshold, 2) Executor instance details (when died, how long claimed), 3) Option to auto-release them. Makes manual recovery easier until auto-cleanup (bd-167) is implemented.","design":"Query: SELECT i.*, ei.status, ei.last_heartbeat FROM issues i JOIN issue_execution_state ies ON i.id = ies.issue_id JOIN executor_instances ei ON ies.executor_instance_id = ei.instance_id WHERE ei.status='stopped' OR ei.last_heartbeat \u003c NOW() - threshold. Add --release flag to auto-release all found issues.","acceptance_criteria":"bd stale shows orphaned claims, bd stale --release cleans them up","status":"open","priority":1,"issue_type":"feature","created_at":"2025-10-18T00:25:16.530937-07:00","updated_at":"2025-10-18T00:25:16.530937-07:00"}
{"id":"bd-17","title":"bd should auto-detect .beads/*.db in current directory","description":"When bd is run without --db flag, it defaults to beads' own database instead of looking for a .beads/*.db file in the current working directory. This causes confusion when working on other projects that use beads for issue tracking (like vc).\n\nExpected behavior: bd should search for .beads/*.db in cwd and use that if found, before falling back to default beads database.\n\nExample: Running 'bd ready' in /Users/stevey/src/vc/vc/ should automatically find and use .beads/vc.db without requiring --db flag every time.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-17T01:32:00.650584-07:00","closed_at":"2025-10-16T10:07:34.046944-07:00"}
{"id":"bd-18","title":"Document or automate JSONL sync workflow for git collaboration","description":"When using beads across multiple machines/environments via git, there's a workflow gap:\n\n1. Machine A: Create issues → stored in .beads/project.db\n2. Machine A: bd export -o .beads/issues.jsonl\n3. Machine A: git add .beads/issues.jsonl \u0026\u0026 git commit \u0026\u0026 git push\n4. Machine B: git pull\n5. Machine B: ??? issues.jsonl exists but project.db is empty/stale\n\nThe missing step is: bd import --db .beads/project.db -i .beads/issues.jsonl\n\nThis needs to be either:\na) Documented clearly in workflow docs\nb) Automated (e.g., git hook, or bd auto-imports if jsonl is newer than db)\nc) Both\n\nReal-world impact: User had Claude Code on GCP VM create vc issues from BOOTSTRAP.md. They were exported to issues.jsonl and committed. But on local machine, vc.db was empty until manual import was run.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-17T01:32:00.651438-07:00","closed_at":"2025-10-14T02:51:52.199766-07:00"}
{"id":"bd-19","title":"Root issue for dep tree test","description":"","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-17T01:32:00.652067-07:00","closed_at":"2025-10-16T10:07:34.1266-07:00"}
@@ -154,6 +154,13 @@ bd restore <id>           # View full history at time of compaction

# Import with collision detection
bd import -i .beads/issues.jsonl --dry-run             # Preview only
bd import -i .beads/issues.jsonl --resolve-collisions  # Auto-resolve

# Multi-repo management (requires global daemon)
bd repos list           # List all cached repositories
bd repos ready          # View ready work across all repos
bd repos ready --group  # Group by repository
bd repos stats          # Combined statistics
bd repos clear-cache    # Clear repository cache
```

### Workflow
101
README.md
@@ -1034,6 +1034,107 @@ cd ~/projects/api && bd ready  # Uses global daemon

**Note:** Global daemon doesn't require git repos, making it suitable for non-git projects or multi-repo setups.

#### Multi-Repository Commands with `bd repos`

**New in v0.9.12:** When using a global daemon, use `bd repos` to view and manage work across all cached repositories.

```bash
# List all cached repositories
bd repos list

# View ready work across all repos
bd repos ready

# Group ready work by repository
bd repos ready --group

# Filter by priority
bd repos ready --priority 1

# Filter by assignee
bd repos ready --assignee alice

# View combined statistics
bd repos stats

# Clear repository cache (free resources)
bd repos clear-cache
```

**Example output:**

```bash
$ bd repos list

📁 Cached Repositories (3):

/Users/alice/projects/webapp
  Prefix: webapp-
  Issue Count: 45
  Status: active

/Users/alice/projects/api
  Prefix: api-
  Issue Count: 12
  Status: active

/Users/alice/projects/docs
  Prefix: docs-
  Issue Count: 8
  Status: active

$ bd repos ready --group

📋 Ready work across 3 repositories:

/Users/alice/projects/webapp (4 issues):
  1. [P1] webapp-23: Fix navigation bug
     Estimate: 30 min
  2. [P2] webapp-45: Add loading spinner
     Estimate: 15 min
  ...

/Users/alice/projects/api (2 issues):
  1. [P0] api-10: Fix critical auth bug
     Estimate: 60 min
  2. [P1] api-12: Add rate limiting
     Estimate: 45 min

$ bd repos stats

📊 Combined Statistics Across All Repositories:

Total Issues: 65
Open: 23
In Progress: 5
Closed: 37
Blocked: 3
Ready: 15

📁 Per-Repository Breakdown:

/Users/alice/projects/webapp:
  Total: 45  Ready: 10  Blocked: 2

/Users/alice/projects/api:
  Total: 12  Ready: 3  Blocked: 1

/Users/alice/projects/docs:
  Total: 8  Ready: 2  Blocked: 0
```

**Requirements:**
- Global daemon must be running (`bd daemon --global`)
- At least one command has been run in each repository (to cache it)
- `--json` flag available for programmatic use

**Use cases:**
- Get an overview of all active projects
- Find highest-priority work across all repos
- Balance workload across multiple projects
- Track overall progress and statistics
- Identify which repos need attention

### Optional: Git Hooks for Immediate Sync

Create `.git/hooks/pre-commit`:
@@ -15,13 +15,13 @@ import (

 var readyCmd = &cobra.Command{
 	Use:   "ready",
-	Short: "Show ready work (no blockers)",
+	Short: "Show ready work (no blockers, open or in-progress)",
 	Run: func(cmd *cobra.Command, args []string) {
 		limit, _ := cmd.Flags().GetInt("limit")
 		assignee, _ := cmd.Flags().GetString("assignee")

 		filter := types.WorkFilter{
-			Status: types.StatusOpen,
+			// Leave Status empty to get both 'open' and 'in_progress' (bd-165)
 			Limit:  limit,
 		}
 		// Use Changed() to properly handle P0 (priority=0)
293
cmd/bd/repos.go
Normal file
@@ -0,0 +1,293 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/fatih/color"
	"github.com/spf13/cobra"
	"github.com/steveyegge/beads/internal/rpc"
)

var reposCmd = &cobra.Command{
	Use:   "repos",
	Short: "Multi-repository management (requires global daemon)",
	Long: `Manage work across multiple repositories when using a global daemon.

This command requires a running global daemon (bd daemon --global).
It allows you to view and aggregate work across all cached repositories.`,
	Run: func(cmd *cobra.Command, args []string) {
		cmd.Help()
	},
}

var reposListCmd = &cobra.Command{
	Use:   "list",
	Short: "List all cached repositories",
	Long: `Show all repositories that the daemon has cached.

The daemon caches a repository after any command is run from that directory.
This command shows all active caches with their paths, prefixes, and issue counts.`,
	Run: func(cmd *cobra.Command, args []string) {
		if daemonClient == nil {
			fmt.Fprintf(os.Stderr, "Error: This command requires a running daemon\n")
			fmt.Fprintf(os.Stderr, "Start one with: bd daemon --global\n")
			os.Exit(1)
		}

		resp, err := daemonClient.ReposList()
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}

		var repos []rpc.RepoInfo
		if err := json.Unmarshal(resp.Data, &repos); err != nil {
			fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
			os.Exit(1)
		}

		if jsonOutput {
			outputJSON(repos)
			return
		}

		if len(repos) == 0 {
			yellow := color.New(color.FgYellow).SprintFunc()
			fmt.Printf("\n%s No repositories cached yet\n", yellow("📁"))
			fmt.Printf("Repositories are cached when you run commands from their directories.\n\n")
			return
		}

		cyan := color.New(color.FgCyan).SprintFunc()
		green := color.New(color.FgGreen).SprintFunc()
		fmt.Printf("\n%s Cached Repositories (%d):\n\n", cyan("📁"), len(repos))

		for _, repo := range repos {
			prefix := repo.Prefix
			if prefix == "" {
				prefix = "(no prefix)"
			}
			fmt.Printf("%s\n", repo.Path)
			fmt.Printf("  Prefix: %s\n", prefix)
			fmt.Printf("  Issue Count: %s\n", green(fmt.Sprintf("%d", repo.IssueCount)))
			fmt.Printf("  Status: %s\n", repo.LastAccess)
			fmt.Println()
		}
	},
}

var reposReadyCmd = &cobra.Command{
	Use:   "ready",
	Short: "Show ready work across all repositories",
	Long: `Display ready work (issues with no blockers) from all cached repositories.

By default, shows a flat list of all ready work. Use --group to organize by repository.`,
	Run: func(cmd *cobra.Command, args []string) {
		if daemonClient == nil {
			fmt.Fprintf(os.Stderr, "Error: This command requires a running daemon\n")
			fmt.Fprintf(os.Stderr, "Start one with: bd daemon --global\n")
			os.Exit(1)
		}

		limit, _ := cmd.Flags().GetInt("limit")
		assignee, _ := cmd.Flags().GetString("assignee")
		groupByRepo, _ := cmd.Flags().GetBool("group")

		readyArgs := &rpc.ReposReadyArgs{
			Assignee:    assignee,
			Limit:       limit,
			GroupByRepo: groupByRepo,
		}

		if cmd.Flags().Changed("priority") {
			priority, _ := cmd.Flags().GetInt("priority")
			readyArgs.Priority = &priority
		}

		resp, err := daemonClient.ReposReady(readyArgs)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}

		if groupByRepo {
			var grouped []rpc.RepoReadyWork
			if err := json.Unmarshal(resp.Data, &grouped); err != nil {
				fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
				os.Exit(1)
			}

			if jsonOutput {
				outputJSON(grouped)
				return
			}

			if len(grouped) == 0 {
				yellow := color.New(color.FgYellow).SprintFunc()
				fmt.Printf("\n%s No ready work found across any repositories\n\n", yellow("✨"))
				return
			}

			cyan := color.New(color.FgCyan).SprintFunc()
			fmt.Printf("\n%s Ready work across %d repositories:\n\n", cyan("📋"), len(grouped))

			for _, repo := range grouped {
				fmt.Printf("%s (%d issues):\n", repo.RepoPath, len(repo.Issues))
				for i, issue := range repo.Issues {
					fmt.Printf("  %d. [P%d] %s: %s\n", i+1, issue.Priority, issue.ID, issue.Title)
					if issue.EstimatedMinutes != nil {
						fmt.Printf("     Estimate: %d min\n", *issue.EstimatedMinutes)
					}
					if issue.Assignee != "" {
						fmt.Printf("     Assignee: %s\n", issue.Assignee)
					}
				}
				fmt.Println()
			}
		} else {
			var issues []rpc.ReposReadyIssue
			if err := json.Unmarshal(resp.Data, &issues); err != nil {
				fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
				os.Exit(1)
			}

			if jsonOutput {
				outputJSON(issues)
				return
			}

			if len(issues) == 0 {
				yellow := color.New(color.FgYellow).SprintFunc()
				fmt.Printf("\n%s No ready work found across any repositories\n\n", yellow("✨"))
				return
			}

			cyan := color.New(color.FgCyan).SprintFunc()
			fmt.Printf("\n%s Ready work across all repositories (%d issues):\n\n", cyan("📋"), len(issues))

			for i, item := range issues {
				issue := item.Issue
				fmt.Printf("%d. [P%d] %s: %s\n", i+1, issue.Priority, issue.ID, issue.Title)
				fmt.Printf("   Repo: %s\n", item.RepoPath)

				if issue.EstimatedMinutes != nil {
					fmt.Printf("   Estimate: %d min\n", *issue.EstimatedMinutes)
				}
				if issue.Assignee != "" {
					fmt.Printf("   Assignee: %s\n", issue.Assignee)
				}
			}
			fmt.Println()
		}
	},
}

var reposStatsCmd = &cobra.Command{
	Use:   "stats",
	Short: "Show combined statistics across all repositories",
	Long: `Display aggregated statistics from all cached repositories.

Shows both total combined statistics and per-repository breakdowns.`,
	Run: func(cmd *cobra.Command, args []string) {
		if daemonClient == nil {
			fmt.Fprintf(os.Stderr, "Error: This command requires a running daemon\n")
			fmt.Fprintf(os.Stderr, "Start one with: bd daemon --global\n")
			os.Exit(1)
		}

		resp, err := daemonClient.ReposStats()
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}

		var statsResp rpc.ReposStatsResponse
		if err := json.Unmarshal(resp.Data, &statsResp); err != nil {
			fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
			os.Exit(1)
		}

		if jsonOutput {
			outputJSON(statsResp)
			return
		}

		cyan := color.New(color.FgCyan).SprintFunc()
		green := color.New(color.FgGreen).SprintFunc()
		yellow := color.New(color.FgYellow).SprintFunc()

		fmt.Printf("\n%s Combined Statistics Across All Repositories:\n\n", cyan("📊"))
		fmt.Printf("Total Issues: %d\n", statsResp.Total.TotalIssues)
		fmt.Printf("Open: %s\n", green(fmt.Sprintf("%d", statsResp.Total.OpenIssues)))
		fmt.Printf("In Progress: %s\n", yellow(fmt.Sprintf("%d", statsResp.Total.InProgressIssues)))
		fmt.Printf("Closed: %d\n", statsResp.Total.ClosedIssues)
		fmt.Printf("Blocked: %d\n", statsResp.Total.BlockedIssues)
		fmt.Printf("Ready: %s\n", green(fmt.Sprintf("%d", statsResp.Total.ReadyIssues)))
		fmt.Println()

		if len(statsResp.PerRepo) > 0 {
			fmt.Printf("%s Per-Repository Breakdown:\n\n", cyan("📁"))
			for path, stats := range statsResp.PerRepo {
				fmt.Printf("%s:\n", path)
				fmt.Printf("  Total: %d  Ready: %s  Blocked: %d\n",
					stats.TotalIssues, green(fmt.Sprintf("%d", stats.ReadyIssues)), stats.BlockedIssues)
				fmt.Println()
			}
		}

		if len(statsResp.Errors) > 0 {
			red := color.New(color.FgRed).SprintFunc()
			fmt.Printf("%s Errors (%d repositories):\n", red("⚠"), len(statsResp.Errors))
			for path, errMsg := range statsResp.Errors {
|
||||
fmt.Printf(" %s: %s\n", path, errMsg)
|
||||
}
|
||||
fmt.Println()
|
||||
}
|
||||
},
|
||||
}
|
||||
|
||||
var reposClearCacheCmd = &cobra.Command{
|
||||
Use: "clear-cache",
|
||||
Short: "Clear all cached repository connections",
|
||||
Long: `Close all cached storage connections and clear the daemon's repository cache.
|
||||
|
||||
Useful for freeing resources or forcing the daemon to reload repository databases.
|
||||
The cache will be rebuilt automatically as commands are run from different directories.`,
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
if daemonClient == nil {
|
||||
fmt.Fprintf(os.Stderr, "Error: This command requires a running daemon\n")
|
||||
fmt.Fprintf(os.Stderr, "Start one with: bd daemon --global\n")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
resp, err := daemonClient.ReposClearCache()
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
if jsonOutput {
|
||||
fmt.Println(string(resp.Data))
|
||||
return
|
||||
}
|
||||
|
||||
green := color.New(color.FgGreen).SprintFunc()
|
||||
fmt.Printf("\n%s Repository cache cleared successfully\n\n", green("✅"))
|
||||
},
|
||||
}
|
||||
|
||||
func init() {
|
||||
reposReadyCmd.Flags().IntP("limit", "n", 10, "Maximum issues to show per repository")
|
||||
reposReadyCmd.Flags().IntP("priority", "p", -1, "Filter by priority (0-4)")
|
||||
reposReadyCmd.Flags().StringP("assignee", "a", "", "Filter by assignee")
|
||||
reposReadyCmd.Flags().BoolP("group", "g", false, "Group issues by repository")
|
||||
|
||||
reposCmd.AddCommand(reposListCmd)
|
||||
reposCmd.AddCommand(reposReadyCmd)
|
||||
reposCmd.AddCommand(reposStatsCmd)
|
||||
reposCmd.AddCommand(reposClearCacheCmd)
|
||||
|
||||
rootCmd.AddCommand(reposCmd)
|
||||
}
|
||||
@@ -190,3 +190,23 @@ func (c *Client) RemoveLabel(args *LabelRemoveArgs) (*Response, error) {
func (c *Client) Batch(args *BatchArgs) (*Response, error) {
	return c.Execute(OpBatch, args)
}

// ReposList lists all cached repositories
func (c *Client) ReposList() (*Response, error) {
	return c.Execute(OpReposList, struct{}{})
}

// ReposReady gets ready work across all repositories
func (c *Client) ReposReady(args *ReposReadyArgs) (*Response, error) {
	return c.Execute(OpReposReady, args)
}

// ReposStats gets combined statistics across all repositories
func (c *Client) ReposStats() (*Response, error) {
	return c.Execute(OpReposStats, struct{}{})
}

// ReposClearCache clears the repository cache
func (c *Client) ReposClearCache() (*Response, error) {
	return c.Execute(OpReposClearCache, struct{}{})
}

@@ -1,23 +1,31 @@
package rpc

import (
	"encoding/json"

	"github.com/steveyegge/beads/internal/types"
)

// Operation constants for all bd commands
const (
	OpPing            = "ping"
	OpCreate          = "create"
	OpUpdate          = "update"
	OpClose           = "close"
	OpList            = "list"
	OpShow            = "show"
	OpReady           = "ready"
	OpStats           = "stats"
	OpDepAdd          = "dep_add"
	OpDepRemove       = "dep_remove"
	OpDepTree         = "dep_tree"
	OpLabelAdd        = "label_add"
	OpLabelRemove     = "label_remove"
	OpBatch           = "batch"
	OpReposList       = "repos_list"
	OpReposReady      = "repos_ready"
	OpReposStats      = "repos_stats"
	OpReposClearCache = "repos_clear_cache"
)

// Request represents an RPC request from client to daemon

@@ -151,3 +159,38 @@ type BatchResult struct {
	Data  json.RawMessage `json:"data,omitempty"`
	Error string          `json:"error,omitempty"`
}

// ReposReadyArgs represents arguments for the repos ready operation
type ReposReadyArgs struct {
	Assignee    string `json:"assignee,omitempty"`
	Priority    *int   `json:"priority,omitempty"`
	Limit       int    `json:"limit,omitempty"`
	GroupByRepo bool   `json:"group_by_repo,omitempty"`
}

// RepoInfo represents information about a cached repository
type RepoInfo struct {
	Path       string `json:"path"`
	Prefix     string `json:"prefix"`
	IssueCount int    `json:"issue_count"`
	LastAccess string `json:"last_access"`
}

// RepoReadyWork represents ready work for a single repository
type RepoReadyWork struct {
	RepoPath string         `json:"repo_path"`
	Issues   []*types.Issue `json:"issues"`
}

// ReposReadyIssue represents an issue with its repository context
type ReposReadyIssue struct {
	RepoPath string       `json:"repo_path"`
	Issue    *types.Issue `json:"issue"`
}

// ReposStatsResponse contains combined statistics across repos
type ReposStatsResponse struct {
	Total   types.Statistics            `json:"total"`
	PerRepo map[string]types.Statistics `json:"per_repo"`
	Errors  map[string]string           `json:"errors,omitempty"`
}

@@ -11,6 +11,7 @@ import (
	"path/filepath"
	"sync"
	"syscall"
	"time"

	"github.com/steveyegge/beads/internal/storage"
	"github.com/steveyegge/beads/internal/storage/sqlite"

@@ -176,6 +177,14 @@ func (s *Server) handleRequest(req *Request) Response {
		return s.handleLabelRemove(req)
	case OpBatch:
		return s.handleBatch(req)
	case OpReposList:
		return s.handleReposList(req)
	case OpReposReady:
		return s.handleReposReady(req)
	case OpReposStats:
		return s.handleReposStats(req)
	case OpReposClearCache:
		return s.handleReposClearCache(req)
	default:
		return Response{
			Success: false,

@@ -766,3 +775,203 @@ func (s *Server) writeResponse(writer *bufio.Writer, resp Response) {
	writer.WriteByte('\n')
	writer.Flush()
}

// Multi-repo handlers

func (s *Server) handleReposList(_ *Request) Response {
	// Keep read lock during iteration to prevent stores from being closed mid-query
	s.cacheMu.RLock()
	defer s.cacheMu.RUnlock()

	repos := make([]RepoInfo, 0, len(s.storageCache))
	for path, store := range s.storageCache {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		stats, err := store.GetStatistics(ctx)
		cancel()
		if err != nil {
			continue
		}

		// Extract prefix from a sample issue
		filter := types.IssueFilter{Limit: 1}
		ctx2, cancel2 := context.WithTimeout(context.Background(), 1*time.Second)
		issues, err := store.SearchIssues(ctx2, "", filter)
		cancel2()
		prefix := ""
		if err == nil && len(issues) > 0 && len(issues[0].ID) > 0 {
			// Extract prefix (everything before the last hyphen and number)
			id := issues[0].ID
			for i := len(id) - 1; i >= 0; i-- {
				if id[i] == '-' {
					prefix = id[:i+1]
					break
				}
			}
		}

		repos = append(repos, RepoInfo{
			Path:       path,
			Prefix:     prefix,
			IssueCount: stats.TotalIssues,
			LastAccess: "active",
		})
	}

	data, _ := json.Marshal(repos)
	return Response{
		Success: true,
		Data:    data,
	}
}

func (s *Server) handleReposReady(req *Request) Response {
	var args ReposReadyArgs
	if err := json.Unmarshal(req.Args, &args); err != nil {
		return Response{
			Success: false,
			Error:   fmt.Sprintf("invalid args: %v", err),
		}
	}

	// Keep read lock during iteration to prevent stores from being closed mid-query
	s.cacheMu.RLock()
	defer s.cacheMu.RUnlock()

	if args.GroupByRepo {
		result := make([]RepoReadyWork, 0, len(s.storageCache))
		for path, store := range s.storageCache {
			filter := types.WorkFilter{
				Status: types.StatusOpen,
				Limit:  args.Limit,
			}
			if args.Priority != nil {
				filter.Priority = args.Priority
			}
			if args.Assignee != "" {
				filter.Assignee = &args.Assignee
			}

			ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
			issues, err := store.GetReadyWork(ctx, filter)
			cancel()
			if err != nil || len(issues) == 0 {
				continue
			}

			result = append(result, RepoReadyWork{
				RepoPath: path,
				Issues:   issues,
			})
		}

		data, _ := json.Marshal(result)
		return Response{
			Success: true,
			Data:    data,
		}
	}

	// Flat list of all ready issues across all repos
	allIssues := make([]ReposReadyIssue, 0)
	for path, store := range s.storageCache {
		filter := types.WorkFilter{
			Status: types.StatusOpen,
			Limit:  args.Limit,
		}
		if args.Priority != nil {
			filter.Priority = args.Priority
		}
		if args.Assignee != "" {
			filter.Assignee = &args.Assignee
		}

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		issues, err := store.GetReadyWork(ctx, filter)
		cancel()
		if err != nil {
			continue
		}

		for _, issue := range issues {
			allIssues = append(allIssues, ReposReadyIssue{
				RepoPath: path,
				Issue:    issue,
			})
		}
	}

	data, _ := json.Marshal(allIssues)
	return Response{
		Success: true,
		Data:    data,
	}
}

func (s *Server) handleReposStats(_ *Request) Response {
	// Keep read lock during iteration to prevent stores from being closed mid-query
	s.cacheMu.RLock()
	defer s.cacheMu.RUnlock()

	total := types.Statistics{}
	perRepo := make(map[string]types.Statistics)
	errors := make(map[string]string)

	for path, store := range s.storageCache {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		stats, err := store.GetStatistics(ctx)
		cancel()
		if err != nil {
			errors[path] = err.Error()
			continue
		}

		perRepo[path] = *stats

		// Aggregate totals
		total.TotalIssues += stats.TotalIssues
		total.OpenIssues += stats.OpenIssues
		total.InProgressIssues += stats.InProgressIssues
		total.ClosedIssues += stats.ClosedIssues
		total.BlockedIssues += stats.BlockedIssues
		total.ReadyIssues += stats.ReadyIssues
		total.EpicsEligibleForClosure += stats.EpicsEligibleForClosure
	}

	result := ReposStatsResponse{
		Total:   total,
		PerRepo: perRepo,
	}
	if len(errors) > 0 {
		result.Errors = errors
	}

	data, _ := json.Marshal(result)
	return Response{
		Success: true,
		Data:    data,
	}
}

func (s *Server) handleReposClearCache(_ *Request) Response {
	// Copy stores under write lock, clear cache, then close outside lock
	// to avoid holding lock during potentially slow Close() operations
	s.cacheMu.Lock()
	stores := make([]storage.Storage, 0, len(s.storageCache))
	for _, store := range s.storageCache {
		stores = append(stores, store)
	}
	s.storageCache = make(map[string]storage.Storage)
	s.cacheMu.Unlock()

	// Close all storage connections without holding lock
	for _, store := range stores {
		if err := store.Close(); err != nil {
			fmt.Fprintf(os.Stderr, "Warning: failed to close storage: %v\n", err)
		}
	}

	return Response{
		Success: true,
		Data:    json.RawMessage(`{"message":"Cache cleared successfully"}`),
	}
}

@@ -10,18 +10,20 @@ import (
)

// GetReadyWork returns issues with no open blockers.
// By default, shows both 'open' and 'in_progress' issues so epics/tasks
// ready to close are visible (bd-165)
func (s *SQLiteStorage) GetReadyWork(ctx context.Context, filter types.WorkFilter) ([]*types.Issue, error) {
	whereClauses := []string{}
	args := []interface{}{}

	// Default to open OR in_progress if not specified (bd-165)
	if filter.Status == "" {
		whereClauses = append(whereClauses, "i.status IN ('open', 'in_progress')")
	} else {
		whereClauses = append(whereClauses, "i.status = ?")
		args = append(args, filter.Status)
	}

	if filter.Priority != nil {
		whereClauses = append(whereClauses, "i.priority = ?")
		args = append(args, *filter.Priority)

@@ -751,3 +751,74 @@ func TestDeepHierarchyBlocking(t *testing.T) {
		}
	}
}

func TestGetReadyWorkIncludesInProgress(t *testing.T) {
	store, cleanup := setupTestDB(t)
	defer cleanup()

	ctx := context.Background()

	// Create issues:
	//   issue1: open, no dependencies          → READY
	//   issue2: in_progress, no dependencies   → READY (bd-165)
	//   issue3: in_progress, blocked by issue4 → BLOCKED
	//   issue4: open, no dependencies          → READY (blocks issue3, itself unblocked)
	//   issue5: closed, no dependencies        → NOT READY (closed)

	issue1 := &types.Issue{Title: "Open Ready", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask}
	issue2 := &types.Issue{Title: "In Progress Ready", Status: types.StatusInProgress, Priority: 2, IssueType: types.TypeEpic}
	issue3 := &types.Issue{Title: "In Progress Blocked", Status: types.StatusInProgress, Priority: 1, IssueType: types.TypeTask}
	issue4 := &types.Issue{Title: "Blocker", Status: types.StatusOpen, Priority: 1, IssueType: types.TypeTask}
	issue5 := &types.Issue{Title: "Closed", Status: types.StatusClosed, Priority: 1, IssueType: types.TypeTask}

	store.CreateIssue(ctx, issue1, "test-user")
	store.CreateIssue(ctx, issue2, "test-user")
	store.UpdateIssue(ctx, issue2.ID, map[string]interface{}{"status": types.StatusInProgress}, "test-user")
	store.CreateIssue(ctx, issue3, "test-user")
	store.UpdateIssue(ctx, issue3.ID, map[string]interface{}{"status": types.StatusInProgress}, "test-user")
	store.CreateIssue(ctx, issue4, "test-user")
	store.CreateIssue(ctx, issue5, "test-user")
	store.CloseIssue(ctx, issue5.ID, "Done", "test-user")

	// Add dependency: issue3 blocks on issue4
	store.AddDependency(ctx, &types.Dependency{IssueID: issue3.ID, DependsOnID: issue4.ID, Type: types.DepBlocks}, "test-user")

	// Get ready work (default filter - no status specified)
	ready, err := store.GetReadyWork(ctx, types.WorkFilter{})
	if err != nil {
		t.Fatalf("GetReadyWork failed: %v", err)
	}

	// Should have 3 ready issues:
	//   - issue1 (open, no blockers)
	//   - issue2 (in_progress, no blockers) ← this is the key test case for bd-165
	//   - issue4 (open blocker, but itself has no blockers so it's ready to work on)
	if len(ready) != 3 {
		t.Logf("Ready issues:")
		for _, r := range ready {
			t.Logf("  - %s: %s (status: %s)", r.ID, r.Title, r.Status)
		}
		t.Fatalf("Expected 3 ready issues, got %d", len(ready))
	}

	// Verify ready issues
	readyIDs := make(map[string]bool)
	for _, issue := range ready {
		readyIDs[issue.ID] = true
	}

	if !readyIDs[issue1.ID] {
		t.Errorf("Expected %s (open, no blockers) to be ready", issue1.ID)
	}
	if !readyIDs[issue2.ID] {
		t.Errorf("Expected %s (in_progress, no blockers) to be ready - this is bd-165!", issue2.ID)
	}
	if !readyIDs[issue4.ID] {
		t.Errorf("Expected %s (open blocker, but itself unblocked) to be ready", issue4.ID)
	}
	if readyIDs[issue3.ID] {
		t.Errorf("Expected %s (in_progress, blocked) to NOT be ready", issue3.ID)
	}
	if readyIDs[issue5.ID] {
		t.Errorf("Expected %s (closed) to NOT be ready", issue5.ID)
	}
}