Add epic closure management commands (fixes #62)
- Add 'bd epic status' to show epic completion with child progress
- Add 'bd epic close-eligible' to bulk-close completed epics
- Add GetEpicsEligibleForClosure() storage method
- Update 'bd stats' to show count of epics ready to close
- Add EpicStatus type for tracking epic/child relationships
- Support --eligible-only, --dry-run, and --json flags
- Fix golangci-lint config version requirement

Addresses GitHub issue #62 - epics now have visibility and management tools
for closure when all children are complete.

Amp-Thread-ID: https://ampcode.com/threads/T-e8ac3f48-f0cf-4858-8e8f-aace2481c30d
Co-authored-by: Amp <amp@ampcode.com>
@@ -5,7 +5,17 @@
{"id":"bd-102","title":"Test daemon auto-detection","description":"","status":"closed","priority":3,"issue_type":"task","created_at":"2025-10-16T23:04:51.334824-07:00","updated_at":"2025-10-17T01:32:00.965331-07:00","closed_at":"2025-10-16T23:04:55.769268-07:00"}
{"id":"bd-103","title":"Test daemon RPC","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-16T23:18:41.845364-07:00","updated_at":"2025-10-17T01:32:00.981005-07:00","closed_at":"2025-10-16T23:19:11.402442-07:00"}
{"id":"bd-104","title":"Add comprehensive daemon tests for RPC integration","description":"Add tests for:\n- RPC server integration (daemon accepts connections)\n- Concurrent client operations\n- Socket cleanup on shutdown\n- Server start failures (socket already exists)\n- Graceful shutdown verification\n\nThese tests were identified in bd-100 code review but not implemented yet.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T23:28:30.552132-07:00","updated_at":"2025-10-17T01:32:01.028003-07:00","closed_at":"2025-10-16T23:57:54.583646-07:00"}
{"id":"bd-105","title":"Investigate CWD propagation from Claude Code/Amp to MCP server","description":"","design":"## Problem\n\nMCP servers don't know which directory the user is working in within Claude Code or Amp. This causes database routing issues for beads because:\n\n1. MCP server process starts with its own CWD (wherever it was launched from)\n2. `bd` binary uses tree-walking to discover databases based on CWD\n3. Without correct CWD, `bd` discovers the wrong database or falls back to ~/.beads\n\n## Current Workaround\n\nWe're using explicit `BEADS_DB` environment variables in MCP server configuration:\n- One MCP server per repo with explicit database path\n- Works but doesn't scale (30+ repos with .beads/ directories)\n\n## Desired Solution\n\nMCP server should receive CWD context either:\n\n### Option A: Startup Time\n- Claude Code/Amp passes working directory when launching MCP server\n- MCP server uses that directory for all tool calls\n- **Question:** Is this supported in MCP protocol/implementations?\n\n### Option B: Tool Call Time \n- Each MCP tool call includes a `cwd` parameter\n- Tools use that CWD for subprocess execution\n- **Question:** Does MCP protocol support per-call context?\n\n### Option C: Hybrid\n- MCP server detects directory from Claude Code workspace/project\n- Tools accept optional `cwd` override parameter\n\n## Investigation Steps\n\n1. Review MCP protocol specification for context passing\n2. Check Claude Code MCP implementation for CWD handling\n3. Check Amp MCP implementation for CWD handling \n4. Test if PWD environment variable is set correctly by Claude Code\n5. Prototype dynamic CWD detection in beads-mcp\n6. Document findings and recommend approach\n\n## References\n\n- beads-mcp already has `BEADS_WORKING_DIR` config support\n- bd_client.py uses `cwd` parameter for subprocess calls\n- Current implementation: `os.environ.get('PWD', os.getcwd())`\n","acceptance_criteria":"- Documented investigation findings\n- Tested CWD propagation in both Claude Code and Amp\n- Recommended approach for solving multi-repo MCP database routing\n- Prototype or proof-of-concept if feasible","notes":"## Context\n\nThis investigation was created after debugging severe database routing issues:\n- 1,034 issues were misrouted to ~/.beads/ (elephant graveyard)\n- Data loss, corruption, and duplication across repos\n- Root cause: MCP server CWD != Claude Code working directory\n\n## Current Temporary Fix\n\n1. Removed ~/.beads fallback from bd code (fail fast instead)\n2. Configured explicit BEADS_DB per repo in amp/settings.json\n3. Added debug logging to MCP client\n4. Deleted ~/.beads graveyard\n\n## This Blocks\n\n- Scalable use of beads MCP across 30+ repos\n- Dynamic workspace/project switching in Claude Code\n- Clean developer experience (requires manual MCP config per repo)\n\n## Priority\n\nP1 because current workaround is fragile and doesn't scale.","status":"open","priority":1,"issue_type":"feature","created_at":"2025-10-17T02:06:09.737832-07:00","updated_at":"2025-10-17T02:06:21.85233-07:00"}
{"id":"bd-106","title":"Test Epic for epic commands","description":"","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-17T12:07:19.224482-07:00","updated_at":"2025-10-17T12:07:59.213044-07:00","closed_at":"2025-10-17T12:07:59.213044-07:00"}
{"id":"bd-107","title":"Child task 1","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-17T12:07:24.27717-07:00","updated_at":"2025-10-17T12:07:38.659749-07:00","closed_at":"2025-10-17T12:07:38.659749-07:00","dependencies":[{"issue_id":"bd-107","depends_on_id":"bd-106","type":"parent-child","created_at":"2025-10-17T12:07:29.09999-07:00","created_by":"stevey"}]}
{"id":"bd-108","title":"Child task 2","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-17T12:07:24.316272-07:00","updated_at":"2025-10-17T12:07:45.496304-07:00","closed_at":"2025-10-17T12:07:45.496304-07:00","dependencies":[{"issue_id":"bd-108","depends_on_id":"bd-106","type":"parent-child","created_at":"2025-10-17T12:07:29.133888-07:00","created_by":"stevey"}]}
{"id":"bd-109","title":"Child task 3","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-17T12:07:24.33996-07:00","updated_at":"2025-10-17T12:07:45.624974-07:00","closed_at":"2025-10-17T12:07:45.624974-07:00","dependencies":[{"issue_id":"bd-109","depends_on_id":"bd-106","type":"parent-child","created_at":"2025-10-17T12:07:29.169846-07:00","created_by":"stevey"}]}
{"id":"bd-11","title":"Implement collision detection in import","description":"Create collision.go with detectCollisions() function. Compare incoming JSONL issues against DB state. Distinguish between: (1) exact match (idempotent), (2) ID match but different content (collision), (3) new issue. Return list of colliding issues.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-17T01:32:00.633538-07:00","closed_at":"2025-10-16T10:07:22.461107-07:00","dependencies":[{"issue_id":"bd-11","depends_on_id":"bd-48","type":"parent-child","created_at":"2025-10-16T21:51:08.920845-07:00","created_by":"renumber"}]}
{"id":"bd-110","title":"Another epic","description":"","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-17T12:08:10.396072-07:00","updated_at":"2025-10-17T12:10:06.062102-07:00","closed_at":"2025-10-17T12:10:06.062102-07:00"}
{"id":"bd-111","title":"Test epic 2","description":"","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-17T12:09:59.880202-07:00","updated_at":"2025-10-17T12:10:06.063293-07:00","closed_at":"2025-10-17T12:10:06.063293-07:00"}
{"id":"bd-112","title":"Child A","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-17T12:09:59.923718-07:00","updated_at":"2025-10-17T12:10:00.087913-07:00","closed_at":"2025-10-17T12:10:00.087913-07:00","dependencies":[{"issue_id":"bd-112","depends_on_id":"bd-111","type":"parent-child","created_at":"2025-10-17T12:09:59.965897-07:00","created_by":"stevey"}]}
{"id":"bd-113","title":"Auto-close or warn about epics when all children complete","description":"","design":"See epic.go for implementation. Commands: bd epic status, bd epic close-eligible. Stats integration added.","acceptance_criteria":"Commands work, tests pass, addresses GitHub issue #62","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-10-17T13:47:42.9642-07:00","updated_at":"2025-10-17T13:47:48.136662-07:00","closed_at":"2025-10-17T13:47:48.136662-07:00","external_ref":"gh-62"}
{"id":"bd-114","title":"Agents confused by multiple MCP beads servers - use wrong database","description":"When multiple beads MCP servers are configured (e.g., beads-wyvern, beads-adar), agents may use the wrong server and create issues in wrong database. In this session, created wy-22 (wyvern) when working in beads repo. Root cause: All MCP servers available simultaneously with different BEADS_WORKING_DIR/BEADS_DB env vars. Agent must manually choose correct server or use direct bd commands.","design":"Possible solutions: 1) Context-aware MCP routing based on pwd, 2) Single MCP server that auto-detects context, 3) Better agent instructions about which server to use, 4) Naming convention that makes server purpose obvious (beads-wyvern vs beads-current)","status":"open","priority":0,"issue_type":"bug","created_at":"2025-10-17T13:47:58.092565-07:00","updated_at":"2025-10-17T13:47:58.092565-07:00"}
{"id":"bd-12","title":"Implement reference scoring algorithm","description":"Count references for each colliding issue: text mentions in descriptions/notes/design fields + dependency references. Sort collisions by score ascending (fewest refs first). This minimizes total updates during renumbering.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-17T01:32:00.634423-07:00","closed_at":"2025-10-14T02:51:52.198288-07:00","dependencies":[{"issue_id":"bd-12","depends_on_id":"bd-48","type":"parent-child","created_at":"2025-10-16T21:51:08.913972-07:00","created_by":"renumber"}]}
{"id":"bd-13","title":"Implement ID remapping with reference updates","description":"Allocate new IDs for colliding issues. Update all text field references using word-boundary regex (\\bbd-10\\b). Update dependency records. Build id_mapping for reporting. Handle chain dependencies properly.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-17T01:32:00.643252-07:00","closed_at":"2025-10-14T02:51:52.198356-07:00","dependencies":[{"issue_id":"bd-13","depends_on_id":"bd-48","type":"parent-child","created_at":"2025-10-16T21:51:08.92251-07:00","created_by":"renumber"}]}
{"id":"bd-14","title":"Add --resolve-collisions flag and user reporting","description":"Add import flags: --resolve-collisions (auto-fix) and --dry-run (preview). Display clear report: collisions detected, remappings applied (old→new with scores), reference counts updated. Default behavior: fail on collision (safe).","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-17T01:32:00.645323-07:00","closed_at":"2025-10-16T10:07:34.003238-07:00","dependencies":[{"issue_id":"bd-14","depends_on_id":"bd-48","type":"parent-child","created_at":"2025-10-16T21:51:08.923374-07:00","created_by":"renumber"}]}
@@ -1,6 +1,8 @@
# golangci-lint configuration for beads
# See https://golangci-lint.run/usage/configuration/

version: 2

run:
  timeout: 5m
  tests: true
cmd/bd/epic.go (new file, 195 lines)
@@ -0,0 +1,195 @@
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/fatih/color"
	"github.com/spf13/cobra"
	"github.com/steveyegge/beads/internal/types"
)

var epicCmd = &cobra.Command{
	Use:   "epic",
	Short: "Epic management commands",
}

var epicStatusCmd = &cobra.Command{
	Use:   "status",
	Short: "Show epic completion status",
	Run: func(cmd *cobra.Command, args []string) {
		eligibleOnly, _ := cmd.Flags().GetBool("eligible-only")
		jsonOutput, _ := cmd.Flags().GetBool("json")

		// TODO: Add RPC support when daemon is running
		if daemonClient != nil {
			fmt.Fprintf(os.Stderr, "Error: epic commands not yet supported in daemon mode\n")
			fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag for direct mode\n")
			os.Exit(1)
		}

		ctx := context.Background()
		epics, err := store.GetEpicsEligibleForClosure(ctx)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error getting epic status: %v\n", err)
			os.Exit(1)
		}

		// Filter if eligible-only flag is set
		if eligibleOnly {
			filtered := []*types.EpicStatus{}
			for _, epic := range epics {
				if epic.EligibleForClose {
					filtered = append(filtered, epic)
				}
			}
			epics = filtered
		}

		if jsonOutput {
			enc := json.NewEncoder(os.Stdout)
			enc.SetIndent("", "  ")
			if err := enc.Encode(epics); err != nil {
				fmt.Fprintf(os.Stderr, "Error encoding JSON: %v\n", err)
				os.Exit(1)
			}
			return
		}

		// Human-readable output
		if len(epics) == 0 {
			fmt.Println("No open epics found")
			return
		}

		cyan := color.New(color.FgCyan).SprintFunc()
		yellow := color.New(color.FgYellow).SprintFunc()
		green := color.New(color.FgGreen).SprintFunc()
		bold := color.New(color.Bold).SprintFunc()

		for _, epicStatus := range epics {
			epic := epicStatus.Epic
			percentage := 0
			if epicStatus.TotalChildren > 0 {
				percentage = (epicStatus.ClosedChildren * 100) / epicStatus.TotalChildren
			}

			statusIcon := ""
			if epicStatus.EligibleForClose {
				statusIcon = green("✓")
			} else if percentage > 0 {
				statusIcon = yellow("○")
			} else {
				statusIcon = "○"
			}

			fmt.Printf("%s %s %s\n", statusIcon, cyan(epic.ID), bold(epic.Title))
			fmt.Printf("  Progress: %d/%d children closed (%d%%)\n",
				epicStatus.ClosedChildren, epicStatus.TotalChildren, percentage)
			if epicStatus.EligibleForClose {
				fmt.Printf("  %s\n", green("Eligible for closure"))
			}
			fmt.Println()
		}
	},
}

var closeEligibleEpicsCmd = &cobra.Command{
	Use:   "close-eligible",
	Short: "Close epics where all children are complete",
	Run: func(cmd *cobra.Command, args []string) {
		dryRun, _ := cmd.Flags().GetBool("dry-run")
		jsonOutput, _ := cmd.Flags().GetBool("json")

		// TODO: Add RPC support when daemon is running
		if daemonClient != nil {
			fmt.Fprintf(os.Stderr, "Error: epic commands not yet supported in daemon mode\n")
			fmt.Fprintf(os.Stderr, "Hint: use --no-daemon flag for direct mode\n")
			os.Exit(1)
		}

		ctx := context.Background()
		epics, err := store.GetEpicsEligibleForClosure(ctx)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error getting eligible epics: %v\n", err)
			os.Exit(1)
		}

		// Filter to only eligible ones
		eligibleEpics := []*types.EpicStatus{}
		for _, epic := range epics {
			if epic.EligibleForClose {
				eligibleEpics = append(eligibleEpics, epic)
			}
		}

		if len(eligibleEpics) == 0 {
			if !jsonOutput {
				fmt.Println("No epics eligible for closure")
			} else {
				fmt.Println("[]")
			}
			return
		}

		if dryRun {
			if jsonOutput {
				enc := json.NewEncoder(os.Stdout)
				enc.SetIndent("", "  ")
				if err := enc.Encode(eligibleEpics); err != nil {
					fmt.Fprintf(os.Stderr, "Error encoding JSON: %v\n", err)
					os.Exit(1)
				}
			} else {
				fmt.Printf("Would close %d epic(s):\n", len(eligibleEpics))
				for _, epicStatus := range eligibleEpics {
					fmt.Printf("  - %s: %s\n", epicStatus.Epic.ID, epicStatus.Epic.Title)
				}
			}
			return
		}

		// Actually close the epics
		closedIDs := []string{}
		for _, epicStatus := range eligibleEpics {
			err := store.CloseIssue(ctx, epicStatus.Epic.ID, "All children completed", "system")
			if err != nil {
				fmt.Fprintf(os.Stderr, "Error closing %s: %v\n", epicStatus.Epic.ID, err)
				continue
			}
			closedIDs = append(closedIDs, epicStatus.Epic.ID)
		}

		if jsonOutput {
			enc := json.NewEncoder(os.Stdout)
			enc.SetIndent("", "  ")
			if err := enc.Encode(map[string]interface{}{
				"closed": closedIDs,
				"count":  len(closedIDs),
			}); err != nil {
				fmt.Fprintf(os.Stderr, "Error encoding JSON: %v\n", err)
				os.Exit(1)
			}
		} else {
			fmt.Printf("✓ Closed %d epic(s)\n", len(closedIDs))
			for _, id := range closedIDs {
				fmt.Printf("  - %s\n", id)
			}
		}
	},
}

func init() {
	epicCmd.AddCommand(epicStatusCmd)
	epicCmd.AddCommand(closeEligibleEpicsCmd)

	epicStatusCmd.Flags().Bool("eligible-only", false, "Show only epics eligible for closure")
	epicStatusCmd.Flags().Bool("json", false, "Output in JSON format")

	closeEligibleEpicsCmd.Flags().Bool("dry-run", false, "Preview what would be closed without making changes")
	closeEligibleEpicsCmd.Flags().Bool("json", false, "Output in JSON format")

	rootCmd.AddCommand(epicCmd)
}
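The progress figure printed by `bd epic status` comes from integer arithmetic, so it always rounds down. A minimal standalone sketch of that eligibility/percentage logic (the struct here is simplified; the real `types.EpicStatus` also embeds the full issue):

```go
package main

import "fmt"

// EpicStatus mirrors the counting fields of types.EpicStatus used by
// `bd epic status` (simplified: the real type also carries *types.Issue).
type EpicStatus struct {
	TotalChildren    int
	ClosedChildren   int
	EligibleForClose bool
}

// progress returns the integer percentage shown in the status line;
// integer division rounds down, so 2 of 3 children reports 66%.
func progress(e EpicStatus) int {
	if e.TotalChildren == 0 {
		return 0
	}
	return (e.ClosedChildren * 100) / e.TotalChildren
}

func main() {
	partial := EpicStatus{TotalChildren: 3, ClosedChildren: 2}
	done := EpicStatus{TotalChildren: 3, ClosedChildren: 3, EligibleForClose: true}
	fmt.Println(progress(partial)) // 66
	fmt.Println(progress(done))    // 100
}
```

Guarding on `TotalChildren == 0` matters because the storage query returns childless epics with zero counts, and a bare division would panic.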
@@ -185,16 +185,19 @@ var statsCmd = &cobra.Command{
	yellow := color.New(color.FgYellow).SprintFunc()

	fmt.Printf("\n%s Beads Statistics:\n\n", cyan("📊"))
	fmt.Printf("Total Issues: %d\n", stats.TotalIssues)
	fmt.Printf("Open: %s\n", green(fmt.Sprintf("%d", stats.OpenIssues)))
	fmt.Printf("In Progress: %s\n", yellow(fmt.Sprintf("%d", stats.InProgressIssues)))
	fmt.Printf("Closed: %d\n", stats.ClosedIssues)
	fmt.Printf("Blocked: %d\n", stats.BlockedIssues)
	fmt.Printf("Ready: %s\n", green(fmt.Sprintf("%d", stats.ReadyIssues)))
	if stats.EpicsEligibleForClosure > 0 {
		fmt.Printf("Epics Ready to Close: %s\n", green(fmt.Sprintf("%d", stats.EpicsEligibleForClosure)))
	}
	fmt.Println()
	if stats.AverageLeadTime > 0 {
		fmt.Printf("Avg Lead Time: %.1f hours\n", stats.AverageLeadTime)
	}
	fmt.Println()
},
}
internal/storage/sqlite/epics.go (new file, 79 lines)
@@ -0,0 +1,79 @@
package sqlite

import (
	"context"

	"github.com/steveyegge/beads/internal/types"
)

// GetEpicsEligibleForClosure returns all epics with their completion status
func (s *SQLiteStorage) GetEpicsEligibleForClosure(ctx context.Context) ([]*types.EpicStatus, error) {
	query := `
		WITH epic_children AS (
			SELECT
				d.depends_on_id AS epic_id,
				i.id AS child_id,
				i.status AS child_status
			FROM dependencies d
			JOIN issues i ON i.id = d.issue_id
			WHERE d.type = 'parent-child'
		),
		epic_stats AS (
			SELECT
				epic_id,
				COUNT(*) AS total_children,
				SUM(CASE WHEN child_status = 'closed' THEN 1 ELSE 0 END) AS closed_children
			FROM epic_children
			GROUP BY epic_id
		)
		SELECT
			i.id, i.title, i.description, i.design, i.acceptance_criteria, i.notes,
			i.status, i.priority, i.issue_type, i.assignee, i.estimated_minutes,
			i.created_at, i.updated_at, i.closed_at, i.external_ref,
			COALESCE(es.total_children, 0) AS total_children,
			COALESCE(es.closed_children, 0) AS closed_children
		FROM issues i
		LEFT JOIN epic_stats es ON es.epic_id = i.id
		WHERE i.issue_type = 'epic'
		  AND i.status != 'closed'
		ORDER BY i.priority ASC, i.created_at ASC
	`

	rows, err := s.db.QueryContext(ctx, query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var results []*types.EpicStatus
	for rows.Next() {
		var epic types.Issue
		var totalChildren, closedChildren int

		err := rows.Scan(
			&epic.ID, &epic.Title, &epic.Description, &epic.Design,
			&epic.AcceptanceCriteria, &epic.Notes, &epic.Status,
			&epic.Priority, &epic.IssueType, &epic.Assignee,
			&epic.EstimatedMinutes, &epic.CreatedAt, &epic.UpdatedAt,
			&epic.ClosedAt, &epic.ExternalRef,
			&totalChildren, &closedChildren,
		)
		if err != nil {
			return nil, err
		}

		eligibleForClose := false
		if totalChildren > 0 && closedChildren == totalChildren {
			eligibleForClose = true
		}

		results = append(results, &types.EpicStatus{
			Epic:             &epic,
			TotalChildren:    totalChildren,
			ClosedChildren:   closedChildren,
			EligibleForClose: eligibleForClose,
		})
	}

	return results, rows.Err()
}
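The `epic_children`/`epic_stats` CTE pair groups `parent-child` dependency rows per epic and compares the closed count against the total. The same aggregation can be sketched in memory, without a database, using IDs from this commit's test data (a simplified model, not the real storage layer):

```go
package main

import "fmt"

// dep mirrors one 'parent-child' row of the dependencies table:
// IssueID is the child, DependsOnID is the epic it belongs to.
type dep struct {
	IssueID     string
	DependsOnID string
}

// eligibleEpics reproduces the epic_children/epic_stats CTE logic: an epic
// is eligible when it is still open, has at least one child, and every
// child is closed. statuses maps issue ID -> status string.
func eligibleEpics(deps []dep, statuses map[string]string) []string {
	total := map[string]int{}
	closed := map[string]int{}
	for _, d := range deps {
		total[d.DependsOnID]++
		if statuses[d.IssueID] == "closed" {
			closed[d.DependsOnID]++
		}
	}
	var out []string
	for epic, n := range total {
		if n > 0 && closed[epic] == n && statuses[epic] != "closed" {
			out = append(out, epic)
		}
	}
	return out
}

func main() {
	// bd-106 with its three closed children, as in the JSONL above.
	deps := []dep{
		{"bd-107", "bd-106"}, {"bd-108", "bd-106"}, {"bd-109", "bd-106"},
	}
	statuses := map[string]string{
		"bd-106": "open", "bd-107": "closed", "bd-108": "closed", "bd-109": "closed",
	}
	fmt.Println(eligibleEpics(deps, statuses)) // [bd-106]
}
```

Note the SQL version additionally uses a LEFT JOIN so that childless epics still appear in `bd epic status` with zero counts; only the stats counter restricts to `total_children > 0`.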
@@ -165,5 +165,35 @@ func (s *SQLiteStorage) GetStatistics(ctx context.Context) (*types.Statistics, e
		stats.AverageLeadTime = avgLeadTime.Float64
	}

	// Get epics eligible for closure count
	err = s.db.QueryRowContext(ctx, `
		WITH epic_children AS (
			SELECT
				d.depends_on_id AS epic_id,
				i.status AS child_status
			FROM dependencies d
			JOIN issues i ON i.id = d.issue_id
			WHERE d.type = 'parent-child'
		),
		epic_stats AS (
			SELECT
				epic_id,
				COUNT(*) AS total_children,
				SUM(CASE WHEN child_status = 'closed' THEN 1 ELSE 0 END) AS closed_children
			FROM epic_children
			GROUP BY epic_id
		)
		SELECT COUNT(*)
		FROM issues i
		JOIN epic_stats es ON es.epic_id = i.id
		WHERE i.issue_type = 'epic'
		  AND i.status != 'closed'
		  AND es.total_children > 0
		  AND es.closed_children = es.total_children
	`).Scan(&stats.EpicsEligibleForClosure)
	if err != nil {
		return nil, fmt.Errorf("failed to get eligible epics count: %w", err)
	}

	return &stats, nil
}
@@ -36,6 +36,7 @@ type Storage interface {
	// Ready Work & Blocking
	GetReadyWork(ctx context.Context, filter types.WorkFilter) ([]*types.Issue, error)
	GetBlockedIssues(ctx context.Context) ([]*types.BlockedIssue, error)
	GetEpicsEligibleForClosure(ctx context.Context) ([]*types.EpicStatus, error)

	// Events
	AddComment(ctx context.Context, issueID, actor, comment string) error
@@ -183,13 +183,14 @@ type TreeNode struct {
// Statistics provides aggregate metrics
type Statistics struct {
	TotalIssues             int     `json:"total_issues"`
	OpenIssues              int     `json:"open_issues"`
	InProgressIssues        int     `json:"in_progress_issues"`
	ClosedIssues            int     `json:"closed_issues"`
	BlockedIssues           int     `json:"blocked_issues"`
	ReadyIssues             int     `json:"ready_issues"`
	EpicsEligibleForClosure int     `json:"epics_eligible_for_closure"`
	AverageLeadTime         float64 `json:"average_lead_time_hours"`
}

// IssueFilter is used to filter issue queries
@@ -210,3 +211,11 @@ type WorkFilter struct {
	Assignee *string
	Limit    int
}

// EpicStatus represents an epic with its completion status
type EpicStatus struct {
	Epic             *Issue `json:"epic"`
	TotalChildren    int    `json:"total_children"`
	ClosedChildren   int    `json:"closed_children"`
	EligibleForClose bool   `json:"eligible_for_close"`
}