feat(close): Add --suggest-next flag to show newly unblocked issues (GH#679)

When closing an issue, the new --suggest-next flag returns a list of
issues that became unblocked (ready to work on) as a result of the close.

This helps agents and users quickly identify what work is now available
after completing a blocker.

Example:
  $ bd close bd-5 --suggest-next
  ✓ Closed bd-5: Completed

  Newly unblocked:
    • bd-7 "Implement feature X" (P1)
    • bd-8 "Write tests for X" (P2)

Implementation:
- Added GetNewlyUnblockedByClose to storage interface
- Implemented an efficient single-query path for SQLite using blocked_issues_cache
- Added SuggestNext field to CloseArgs in RPC protocol
- Added CloseResult type for structured response
- CLI handles both daemon and direct modes

Thanks to @kraitsura for the detailed feature request and design.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: Steve Yegge
Date:   2025-12-25 20:05:04 -08:00
commit f3a5e02a35 (parent 35ab0d7a7f)
16 changed files with 306 additions and 140 deletions
+1 -1
@@ -450,7 +450,7 @@
{"id":"bd-qqc.7","title":"Push release v{{version}} to remote","description":"Push the commit and tag:\n\n```bash\ngit push \u0026\u0026 git push --tags\n```\n\nVerify on GitHub that the tag appears in releases.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-18T13:00:26.933082-08:00","updated_at":"2025-12-24T16:25:30.689807-08:00","dependencies":[{"issue_id":"bd-qqc.7","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T13:00:26.933687-08:00","created_by":"stevey"},{"issue_id":"bd-qqc.7","depends_on_id":"bd-qqc.6","type":"blocks","created_at":"2025-12-18T13:01:12.711161-08:00","created_by":"stevey"}],"deleted_at":"2025-12-24T16:25:30.689807-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"}
{"id":"bd-qqc.8","title":"Create and push git tag v{{version}}","description":"Create the release tag and push it:\n\n```bash\ngit tag v{{version}}\ngit push origin v{{version}}\n```\n\nThis triggers the GoReleaser GitHub Action to build release binaries.","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:34.659927-08:00","updated_at":"2025-12-24T16:25:30.608841-08:00","dependencies":[{"issue_id":"bd-qqc.8","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:42:34.660248-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.8","depends_on_id":"bd-vgi5","type":"blocks","created_at":"2025-12-18T22:43:21.209529-08:00","created_by":"daemon"}],"deleted_at":"2025-12-24T16:25:30.608841-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"}
{"id":"bd-qqc.9","title":"Update Homebrew formula","description":"Update the Homebrew tap with new version:\n\n```bash\n./scripts/update-homebrew.sh {{version}}\n```\n\nThis script waits for GitHub Actions to complete (~5 min), then updates the formula with new SHA256 hashes.\n\nAfter running, verify the formula with:\n\n```bash\nbrew info steveyegge/beads/bd\n```","status":"tombstone","priority":1,"issue_type":"task","created_at":"2025-12-18T22:42:35.815096-08:00","updated_at":"2025-12-24T16:25:30.525596-08:00","dependencies":[{"issue_id":"bd-qqc.9","depends_on_id":"bd-qqc","type":"parent-child","created_at":"2025-12-18T22:42:35.816752-08:00","created_by":"daemon"},{"issue_id":"bd-qqc.9","depends_on_id":"bd-qqc.8","type":"blocks","created_at":"2025-12-18T22:43:21.332955-08:00","created_by":"daemon"}],"deleted_at":"2025-12-24T16:25:30.525596-08:00","deleted_by":"daemon","delete_reason":"delete","original_type":"task"}
{"id":"bd-qy37","title":"Work on gt-8tmz.36: Validate expanded step IDs are unique...","description":"Work on gt-8tmz.36: Validate expanded step IDs are unique. After ApplyExpansions in internal/formula/expand.go, new steps can be created that might have duplicate IDs. Add validation in ApplyExpansions to check for duplicate step IDs after expansion. If duplicates found, return an error with the duplicate IDs. Add test in expand_test.go. When done, commit and push to main.","status":"in_progress","priority":2,"issue_type":"task","assignee":"beads/slit","created_at":"2025-12-25T20:01:27.048018-08:00","updated_at":"2025-12-25T20:01:27.135376-08:00"}
{"id":"bd-qy37","title":"Work on gt-8tmz.36: Validate expanded step IDs are unique...","description":"Work on gt-8tmz.36: Validate expanded step IDs are unique. After ApplyExpansions in internal/formula/expand.go, new steps can be created that might have duplicate IDs. Add validation in ApplyExpansions to check for duplicate step IDs after expansion. If duplicates found, return an error with the duplicate IDs. Add test in expand_test.go. When done, commit and push to main.","status":"closed","priority":2,"issue_type":"task","assignee":"beads/slit","created_at":"2025-12-25T20:01:27.048018-08:00","updated_at":"2025-12-25T20:04:42.594254-08:00","closed_at":"2025-12-25T20:04:42.594254-08:00","close_reason":"Implemented step ID validation in ApplyExpansions with tests"}
{"id":"bd-r06v","title":"Merge: bd-phtv","description":"branch: polecat/Pinner\ntarget: main\nsource_issue: bd-phtv\nrig: beads","status":"closed","priority":1,"issue_type":"merge-request","created_at":"2025-12-23T13:48:16.853715-08:00","updated_at":"2025-12-23T19:12:08.342414-08:00","closed_at":"2025-12-23T19:12:08.342414-08:00","close_reason":"Stale merge-requests from orphaned polecat branches - refinery not processing"}
{"id":"bd-r2n1","title":"Add integration tests for RPC server and event loops","description":"After adding basic unit tests for daemon utilities, the complex daemon functions still need integration tests:\n\nCore daemon lifecycle:\n- startRPCServer: Initializes and starts RPC server with proper error handling\n- runEventLoop: Polling-based sync loop with parent monitoring and signal handling\n- runDaemonLoop: Main daemon initialization and setup\n\nHealth checking:\n- isDaemonHealthy: Checks daemon responsiveness and health metrics\n- checkDaemonHealth: Periodic health verification\n\nThese require more complex test infrastructure (mock RPC, test contexts, signal handling) and should be tackled after the unit test foundation is in place.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-18T12:28:56.022996362-07:00","updated_at":"2025-12-18T12:44:32.167862713-07:00","closed_at":"2025-12-18T12:44:32.167862713-07:00","dependencies":[{"issue_id":"bd-r2n1","depends_on_id":"bd-4or","type":"discovered-from","created_at":"2025-12-18T12:28:56.045893852-07:00","created_by":"mhwilkie"}]}
{"id":"bd-r36u","title":"gt mq list shows empty when MRs exist","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-12-20T01:13:07.561256-08:00","updated_at":"2025-12-21T17:51:25.891037-08:00","closed_at":"2025-12-21T17:51:25.891037-08:00","close_reason":"Moved to gastown: gt-uhc3"}
-10
@@ -83,16 +83,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Skips interactions.jsonl and molecules.jsonl in sync checks
- These files are runtime state, not sync targets
- **Windows npm postinstall file locking** (GH#670) - Windows install fix
- Fixed file handle not being released before extraction on Windows
- Moved write stream creation after redirect handling to avoid orphan streams
- Added delay after file close to ensure Windows releases file handle
- **Windows MCP daemon mode crash** (GH#387) - Windows compatibility
- beads-mcp now gracefully falls back to CLI mode on Windows
- Avoids `asyncio.open_unix_connection` which doesn't exist on Windows
- Daemon mode still works on Unix/macOS
- **FatalErrorRespectJSON** (bd-28sq) - Consistent error output
- All commands respect `--json` flag for error output
- Errors return proper JSON structure when flag is set
+2 -2
@@ -18,8 +18,8 @@ var formulaCmd = &cobra.Command{
Short: "Manage workflow formulas",
Long: `Manage workflow formulas - the source layer for molecule templates.
Formulas are JSON files (.formula.json) that define workflows with composition rules.
They are "cooked" into ephemeral protos which can then be poured or wisped.
Formulas are YAML/JSON files that define workflows with composition rules.
They are "cooked" into proto beads which can then be poured or wisped.
The Rig Cook Run lifecycle:
- Rig: Compose formulas (extends, compose)
+30 -34
@@ -15,22 +15,21 @@ import (
)
var molDistillCmd = &cobra.Command{
Use: "distill <id> [formula-name]",
Short: "Extract a formula from a mol, wisp, or epic",
Long: `Extract a reusable formula from completed work.
Use: "distill <epic-id> [formula-name]",
Short: "Extract a formula from an existing epic",
Long: `Distill a molecule by extracting a reusable formula from an existing epic.
This is the reverse of pour: instead of formula → mol, it's mol → formula.
Works with any hierarchical work: mols, wisps, or plain epics.
This is the reverse of pour: instead of formula → molecule, it's molecule → formula.
The distill command:
1. Loads the work item and all its children
1. Loads the existing epic and all its children
2. Converts the structure to a .formula.json file
3. Replaces concrete values with {{variable}} placeholders (via --var flags)
Use cases:
- Emergent patterns: structured work manually, want to templatize
- Modified execution: poured formula, added steps, want to capture
- Learning from success: extract what made a workflow succeed
- Team develops good workflow organically, wants to reuse it
- Capture tribal knowledge as executable templates
- Create starting point for similar future work
Variable syntax (both work - we detect which side is the concrete value):
--var branch=feature-auth Spawn-style: variable=value (recommended)
@@ -41,10 +40,8 @@ Output locations (first writable wins):
2. ~/.beads/formulas/ (user-level, if project not writable)
Examples:
bd mol distill bd-mol-xyz my-workflow
bd mol distill bd-wisp-abc patrol-template
bd mol distill bd-epic-123 release-workflow --var version=1.2.3
bd mol distill bd-xyz workflow -o ./formulas/`,
bd mol distill bd-o5xe my-workflow
bd mol distill bd-abc release-workflow --var feature_name=auth-refactor`,
Args: cobra.RangeArgs(1, 2),
Run: runMolDistill,
}
@@ -105,9 +102,14 @@ func parseDistillVar(varFlag, searchableText string) (string, string, error) {
func runMolDistill(cmd *cobra.Command, args []string) {
ctx := rootCtx
// Check we have some database access
if store == nil && daemonClient == nil {
fmt.Fprintf(os.Stderr, "Error: no database connection\n")
// mol distill requires direct store access for reading the epic
if store == nil {
if daemonClient != nil {
fmt.Fprintf(os.Stderr, "Error: mol distill requires direct database access\n")
fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag: bd --no-daemon mol distill %s ...\n", args[0])
} else {
fmt.Fprintf(os.Stderr, "Error: no database connection\n")
}
os.Exit(1)
}
@@ -115,23 +117,17 @@ func runMolDistill(cmd *cobra.Command, args []string) {
dryRun, _ := cmd.Flags().GetBool("dry-run")
outputDir, _ := cmd.Flags().GetString("output")
// Load the subgraph (works with daemon or direct)
// Show/GetIssue handle partial ID resolution
var subgraph *TemplateSubgraph
var err error
if daemonClient != nil {
subgraph, err = loadTemplateSubgraphViaDaemon(daemonClient, args[0])
} else {
// Resolve ID for direct access
issueID, resolveErr := utils.ResolvePartialID(ctx, store, args[0])
if resolveErr != nil {
fmt.Fprintf(os.Stderr, "Error: '%s' not found\n", args[0])
os.Exit(1)
}
subgraph, err = loadTemplateSubgraph(ctx, store, issueID)
}
// Resolve epic ID
epicID, err := utils.ResolvePartialID(ctx, store, args[0])
if err != nil {
fmt.Fprintf(os.Stderr, "Error loading issue: %v\n", err)
fmt.Fprintf(os.Stderr, "Error: '%s' not found\n", args[0])
os.Exit(1)
}
// Load the epic subgraph
subgraph, err := loadTemplateSubgraph(ctx, store, epicID)
if err != nil {
fmt.Fprintf(os.Stderr, "Error loading epic: %v\n", err)
os.Exit(1)
}
@@ -176,7 +172,7 @@ func runMolDistill(cmd *cobra.Command, args []string) {
}
if dryRun {
fmt.Printf("\nDry run: would distill %d steps from %s into formula\n\n", countSteps(f.Steps), subgraph.Root.ID)
fmt.Printf("\nDry run: would distill %d steps from %s into formula\n\n", countSteps(f.Steps), epicID)
fmt.Printf("Formula: %s\n", formulaName)
fmt.Printf("Output: %s\n", outputPath)
if len(replacements) > 0 {
@@ -369,7 +365,7 @@ func subgraphToFormula(subgraph *TemplateSubgraph, name string, replacements map
func init() {
molDistillCmd.Flags().StringSlice("var", []string{}, "Replace value with {{variable}} placeholder (variable=value)")
molDistillCmd.Flags().Bool("dry-run", false, "Preview what would be created")
molDistillCmd.Flags().StringP("output", "o", "", "Output directory for formula file")
molDistillCmd.Flags().String("output", "", "Output directory for formula file")
molCmd.AddCommand(molDistillCmd)
}
+64 -12
@@ -952,6 +952,7 @@ var closeCmd = &cobra.Command{
force, _ := cmd.Flags().GetBool("force")
continueFlag, _ := cmd.Flags().GetBool("continue")
noAuto, _ := cmd.Flags().GetBool("no-auto")
suggestNext, _ := cmd.Flags().GetBool("suggest-next")
ctx := rootCtx
@@ -960,6 +961,11 @@ var closeCmd = &cobra.Command{
FatalErrorRespectJSON("--continue only works when closing a single issue")
}
// --suggest-next only works with a single issue
if suggestNext && len(args) > 1 {
FatalErrorRespectJSON("--suggest-next only works when closing a single issue")
}
// Resolve partial IDs first
var resolvedIDs []string
if daemonClient != nil {
@@ -1007,8 +1013,9 @@ var closeCmd = &cobra.Command{
}
closeArgs := &rpc.CloseArgs{
ID: id,
Reason: reason,
ID: id,
Reason: reason,
SuggestNext: suggestNext,
}
resp, err := daemonClient.CloseIssue(closeArgs)
if err != nil {
@@ -1016,18 +1023,44 @@ var closeCmd = &cobra.Command{
continue
}
var issue types.Issue
if err := json.Unmarshal(resp.Data, &issue); err == nil {
// Run close hook (bd-kwro.8)
if hookRunner != nil {
hookRunner.Run(hooks.EventClose, &issue)
// Handle response based on whether SuggestNext was requested (GH#679)
if suggestNext {
var result rpc.CloseResult
if err := json.Unmarshal(resp.Data, &result); err == nil {
if result.Closed != nil {
// Run close hook (bd-kwro.8)
if hookRunner != nil {
hookRunner.Run(hooks.EventClose, result.Closed)
}
if jsonOutput {
closedIssues = append(closedIssues, result.Closed)
}
}
if !jsonOutput {
fmt.Printf("%s Closed %s: %s\n", ui.RenderPass("✓"), id, reason)
// Display newly unblocked issues (GH#679)
if len(result.Unblocked) > 0 {
fmt.Printf("\nNewly unblocked:\n")
for _, issue := range result.Unblocked {
fmt.Printf(" • %s %q (P%d)\n", issue.ID, issue.Title, issue.Priority)
}
}
}
}
if jsonOutput {
closedIssues = append(closedIssues, &issue)
} else {
var issue types.Issue
if err := json.Unmarshal(resp.Data, &issue); err == nil {
// Run close hook (bd-kwro.8)
if hookRunner != nil {
hookRunner.Run(hooks.EventClose, &issue)
}
if jsonOutput {
closedIssues = append(closedIssues, &issue)
}
}
if !jsonOutput {
fmt.Printf("%s Closed %s: %s\n", ui.RenderPass("✓"), id, reason)
}
}
if !jsonOutput {
fmt.Printf("%s Closed %s: %s\n", ui.RenderPass("✓"), id, reason)
}
}
@@ -1087,6 +1120,24 @@ var closeCmd = &cobra.Command{
}
}
// Handle --suggest-next flag in direct mode (GH#679)
if suggestNext && len(resolvedIDs) == 1 && closedCount > 0 {
unblocked, err := store.GetNewlyUnblockedByClose(ctx, resolvedIDs[0])
if err == nil && len(unblocked) > 0 {
if jsonOutput {
outputJSON(map[string]interface{}{
"closed": closedIssues,
"unblocked": unblocked,
})
return
}
fmt.Printf("\nNewly unblocked:\n")
for _, issue := range unblocked {
fmt.Printf(" • %s %q (P%d)\n", issue.ID, issue.Title, issue.Priority)
}
}
}
// Schedule auto-flush if any issues were closed
if len(args) > 0 {
markDirtyAndScheduleFlush()
@@ -1380,5 +1431,6 @@ func init() {
closeCmd.Flags().BoolP("force", "f", false, "Force close pinned issues")
closeCmd.Flags().Bool("continue", false, "Auto-advance to next step in molecule")
closeCmd.Flags().Bool("no-auto", false, "With --continue, show next step but don't claim it")
closeCmd.Flags().Bool("suggest-next", false, "Show newly unblocked issues after closing (GH#679)")
rootCmd.AddCommand(closeCmd)
}
+6 -37
@@ -238,49 +238,18 @@ bd wisp gc # Garbage collect old wisps
For reference, here's how the layers stack:
```
Formulas (.formula.json) ← SOURCE: shareable workflow definitions
cook (ephemeral)
[Protos] ← TRANSIENT: compiled templates (auto-deleted)
pour/wisp
Molecules (bond, squash, burn) ← EXECUTION: workflow operations
Formulas (JSON compile-time macros) ← optional, for complex composition
Protos (template issues) ← optional, for reusable patterns
Molecules (bond, squash, burn) ← workflow operations
Epics (parent-child, dependencies) ← DATA PLANE (the core)
Issues (JSONL, git-backed) ← STORAGE
```
**Protos are ephemeral**: When you `bd pour formula-name` or `bd wisp create formula-name`, the formula is cooked into a temporary proto, used to spawn the mol/wisp, then automatically deleted. Protos are an implementation detail, not something users manage directly.
**Most users only need the bottom two layers.** Formulas are for sharing reusable patterns.
## Distillation: Extracting Patterns
The lifecycle is circular - you can extract formulas from completed work:
```
Formulas ──cook──→ Mols/Wisps ──distill──→ Formulas
```
**Use cases for distillation:**
- **Emergent patterns**: Manually structured an epic that worked well
- **Modified execution**: Poured a formula but added custom steps
- **Learning from success**: Extract what made a complex mol succeed
```bash
bd distill <mol-id> -o my-workflow.formula.json # Extract formula from mol
```
## Sharing: The Mol Mall
All workflow sharing happens via formulas:
```bash
bd mol publish my-workflow.formula.json # Share to GitHub repo
bd mol install github.com/org/mol-code-review # Install from GitHub
bd pour mol-code-review --var repo=myproject # Use installed formula
```
Formulas are clean source code: composable, versioned, parameterized. Mols contain execution-specific context and aren't shared directly.
**Most users only need the bottom two layers.** Protos and formulas are for reusable patterns and complex composition.
## Commands Quick Reference
@@ -831,11 +831,6 @@ def create_bd_client(
If prefer_daemon is True and daemon is not running, falls back to CLI client.
To check if daemon is running without falling back, use BdDaemonClient directly.
"""
# Windows doesn't support Unix domain sockets (GH#387)
# Skip daemon mode entirely on Windows
if prefer_daemon and sys.platform == 'win32':
prefer_daemon = False
if prefer_daemon:
try:
from .bd_daemon_client import BdDaemonClient
+9 -13
@@ -249,20 +249,16 @@ type LoopSpec struct {
// OnCompleteSpec defines actions triggered when a step completes (gt-8tmz.8).
// Used for runtime expansion over step output (the for-each construct).
//
// Example JSON:
// Example YAML:
//
// {
// "id": "survey-workers",
// "on_complete": {
// "for_each": "output.polecats",
// "bond": "mol-polecat-arm",
// "vars": {
// "polecat_name": "{item.name}",
// "rig": "{item.rig}"
// },
// "parallel": true
// }
// }
// step: survey-workers
// on_complete:
// for_each: output.polecats
// bond: mol-polecat-arm
// vars:
// polecat_name: "{item.name}"
// rig: "{item.rig}"
// parallel: true
type OnCompleteSpec struct {
// ForEach is the path to the iterable collection in step output.
// Format: "output.<field>" or "output.<field>.<nested>"
+12 -2
@@ -3,6 +3,8 @@ package rpc
import (
"encoding/json"
"time"
"github.com/steveyegge/beads/internal/types"
)
// Operation constants for all bd commands
@@ -124,8 +126,16 @@ type UpdateArgs struct {
// CloseArgs represents arguments for the close operation
type CloseArgs struct {
ID string `json:"id"`
Reason string `json:"reason,omitempty"`
ID string `json:"id"`
Reason string `json:"reason,omitempty"`
SuggestNext bool `json:"suggest_next,omitempty"` // Return newly unblocked issues (GH#679)
}
// CloseResult is returned when SuggestNext is true (GH#679)
// When SuggestNext is false, just the closed issue is returned for backward compatibility
type CloseResult struct {
Closed *types.Issue `json:"closed"` // The issue that was closed
Unblocked []*types.Issue `json:"unblocked,omitempty"` // Issues newly unblocked by closing
}
// DeleteArgs represents arguments for the delete operation
+20
@@ -555,6 +555,26 @@ func (s *Server) handleClose(req *Request) Response {
})
closedIssue, _ := store.GetIssue(ctx, closeArgs.ID)
// If SuggestNext is requested, find newly unblocked issues (GH#679)
if closeArgs.SuggestNext {
unblocked, err := store.GetNewlyUnblockedByClose(ctx, closeArgs.ID)
if err != nil {
// Non-fatal: still return the closed issue
unblocked = nil
}
result := CloseResult{
Closed: closedIssue,
Unblocked: unblocked,
}
data, _ := json.Marshal(result)
return Response{
Success: true,
Data: data,
}
}
// Backward compatible: just return the closed issue
data, _ := json.Marshal(closedIssue)
return Response{
Success: true,
+52
@@ -1184,6 +1184,58 @@ func (m *MemoryStorage) GetStaleIssues(ctx context.Context, filter types.StaleFi
return stale, nil
}
// GetNewlyUnblockedByClose returns issues that became unblocked when the given issue was closed.
// This is used by the --suggest-next flag on bd close (GH#679).
func (m *MemoryStorage) GetNewlyUnblockedByClose(ctx context.Context, closedIssueID string) ([]*types.Issue, error) {
m.mu.RLock()
defer m.mu.RUnlock()
var unblocked []*types.Issue
// Find issues that depend on the closed issue
for issueID, deps := range m.dependencies {
issue, exists := m.issues[issueID]
if !exists {
continue
}
// Only consider open/in_progress, non-pinned issues
if issue.Status != types.StatusOpen && issue.Status != types.StatusInProgress {
continue
}
if issue.Pinned {
continue
}
// Check if this issue depended on the closed issue
dependedOnClosed := false
for _, dep := range deps {
if dep.DependsOnID == closedIssueID && dep.Type == types.DepBlocks {
dependedOnClosed = true
break
}
}
if !dependedOnClosed {
continue
}
// Check if now unblocked (no remaining open blockers)
blockers := m.getOpenBlockers(issueID)
if len(blockers) == 0 {
issueCopy := *issue
unblocked = append(unblocked, &issueCopy)
}
}
// Sort by priority ascending
sort.Slice(unblocked, func(i, j int) bool {
return unblocked[i].Priority < unblocked[j].Priority
})
return unblocked, nil
}
func (m *MemoryStorage) AddComment(ctx context.Context, issueID, actor, comment string) error {
return nil
}
+43
@@ -596,6 +596,49 @@ func filterBlockedByExternalDeps(ctx context.Context, blocked []*types.BlockedIs
return result
}
// GetNewlyUnblockedByClose returns issues that became unblocked when the given issue was closed.
// This is used by the --suggest-next flag on bd close to show what work is now available.
// An issue is "newly unblocked" if:
// - It had a 'blocks' dependency on the closed issue
// - It is now unblocked (not in blocked_issues_cache)
// - It has status open or in_progress (ready to work on)
//
// The cache is already rebuilt by CloseIssue before this is called, so we just need to
// find dependents that are no longer blocked.
func (s *SQLiteStorage) GetNewlyUnblockedByClose(ctx context.Context, closedIssueID string) ([]*types.Issue, error) {
// Find issues that:
// 1. Had a 'blocks' dependency on the closed issue
// 2. Are now NOT in blocked_issues_cache (unblocked)
// 3. Have status open or in_progress
// 4. Are not pinned
query := `
SELECT i.id, i.content_hash, i.title, i.description, i.design, i.acceptance_criteria, i.notes,
i.status, i.priority, i.issue_type, i.assignee, i.estimated_minutes,
i.created_at, i.updated_at, i.closed_at, i.external_ref, i.source_repo, i.close_reason,
i.deleted_at, i.deleted_by, i.delete_reason, i.original_type,
i.sender, i.ephemeral, i.pinned, i.is_template,
i.await_type, i.await_id, i.timeout_ns, i.waiters
FROM issues i
JOIN dependencies d ON i.id = d.issue_id
WHERE d.depends_on_id = ?
AND d.type = 'blocks'
AND i.status IN ('open', 'in_progress')
AND i.pinned = 0
AND NOT EXISTS (
SELECT 1 FROM blocked_issues_cache WHERE issue_id = i.id
)
ORDER BY i.priority ASC
`
rows, err := s.db.QueryContext(ctx, query, closedIssueID)
if err != nil {
return nil, fmt.Errorf("failed to get newly unblocked issues: %w", err)
}
defer func() { _ = rows.Close() }()
return s.scanIssues(ctx, rows)
}
// buildOrderByClause generates the ORDER BY clause based on sort policy
func buildOrderByClause(policy types.SortPolicy) string {
switch policy {
+53
@@ -1512,3 +1512,56 @@ func TestCheckExternalDepInvalidFormats(t *testing.T) {
})
}
}
// TestGetNewlyUnblockedByClose tests the --suggest-next functionality (GH#679)
func TestGetNewlyUnblockedByClose(t *testing.T) {
env := newTestEnv(t)
// Create a blocker issue
blocker := env.CreateIssueWith("Blocker", types.StatusOpen, 1, types.TypeTask)
// Create two issues blocked by the blocker
blocked1 := env.CreateIssueWith("Blocked 1", types.StatusOpen, 2, types.TypeTask)
blocked2 := env.CreateIssueWith("Blocked 2", types.StatusOpen, 3, types.TypeTask)
// Create one issue blocked by multiple issues (blocker + another)
otherBlocker := env.CreateIssueWith("Other Blocker", types.StatusOpen, 1, types.TypeTask)
multiBlocked := env.CreateIssueWith("Multi Blocked", types.StatusOpen, 2, types.TypeTask)
// Add dependencies (issue depends on blocker)
env.AddDep(blocked1, blocker)
env.AddDep(blocked2, blocker)
env.AddDep(multiBlocked, blocker)
env.AddDep(multiBlocked, otherBlocker)
// Close the blocker
env.Close(blocker, "Done")
// Get newly unblocked issues
ctx := context.Background()
unblocked, err := env.Store.GetNewlyUnblockedByClose(ctx, blocker.ID)
if err != nil {
t.Fatalf("GetNewlyUnblockedByClose failed: %v", err)
}
// Should return blocked1 and blocked2 (but not multiBlocked, which is still blocked by otherBlocker)
if len(unblocked) != 2 {
t.Errorf("Expected 2 unblocked issues, got %d", len(unblocked))
}
// Check that the right issues are unblocked
unblockedIDs := make(map[string]bool)
for _, issue := range unblocked {
unblockedIDs[issue.ID] = true
}
if !unblockedIDs[blocked1.ID] {
t.Errorf("Expected %s to be unblocked", blocked1.ID)
}
if !unblockedIDs[blocked2.ID] {
t.Errorf("Expected %s to be unblocked", blocked2.ID)
}
if unblockedIDs[multiBlocked.ID] {
t.Errorf("Expected %s to still be blocked (has another blocker)", multiBlocked.ID)
}
}
+1
@@ -110,6 +110,7 @@ type Storage interface {
GetBlockedIssues(ctx context.Context) ([]*types.BlockedIssue, error)
GetEpicsEligibleForClosure(ctx context.Context) ([]*types.EpicStatus, error)
GetStaleIssues(ctx context.Context, filter types.StaleFilter) ([]*types.Issue, error)
GetNewlyUnblockedByClose(ctx context.Context, closedIssueID string) ([]*types.Issue, error) // GH#679
// Events
AddComment(ctx context.Context, issueID, actor, comment string) error
+3
@@ -98,6 +98,9 @@ func (m *mockStorage) GetEpicsEligibleForClosure(ctx context.Context) ([]*types.
func (m *mockStorage) GetStaleIssues(ctx context.Context, filter types.StaleFilter) ([]*types.Issue, error) {
return nil, nil
}
func (m *mockStorage) GetNewlyUnblockedByClose(ctx context.Context, closedIssueID string) ([]*types.Issue, error) {
return nil, nil
}
func (m *mockStorage) AddComment(ctx context.Context, issueID, actor, comment string) error {
return nil
}
+10 -24
@@ -50,18 +50,14 @@ function getPlatformInfo() {
return { platformName, archName, binaryName };
}
// Small delay helper for Windows file handle release
function delay(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
// Download file from URL
function downloadFile(url, dest) {
return new Promise((resolve, reject) => {
console.log(`Downloading from: ${url}`);
const file = fs.createWriteStream(dest);
const request = https.get(url, (response) => {
// Handle redirects - must happen BEFORE creating write stream
// Handle redirects
if (response.statusCode === 301 || response.statusCode === 302) {
const redirectUrl = response.headers.location;
console.log(`Following redirect to: ${redirectUrl}`);
@@ -74,37 +70,27 @@ function downloadFile(url, dest) {
return;
}
// Only create write stream after we know we have the final URL
const file = fs.createWriteStream(dest);
response.pipe(file);
file.on('finish', () => {
// Wait for file.close() to complete before resolving
// This is critical on Windows where the file may still be locked
file.close(async (err) => {
if (err) {
reject(err);
return;
}
// On Windows, add a small delay to ensure file handle is fully released
if (os.platform() === 'win32') {
await delay(100);
}
resolve();
file.close((err) => {
if (err) reject(err);
else resolve();
});
});
file.on('error', (err) => {
fs.unlink(dest, () => {});
reject(err);
});
});
request.on('error', (err) => {
fs.unlink(dest, () => {});
reject(err);
});
file.on('error', (err) => {
fs.unlink(dest, () => {});
reject(err);
});
});
}