Implement bd stale command (bd-c01f, closes #184)

- Add bd stale command to find abandoned/forgotten issues
- Support --days (default 30), --status, --limit, --json flags
- Implement GetStaleIssues in SQLite and Memory storage
- Add full RPC/daemon support
- Comprehensive test suite (6 tests, all passing)
- Update AGENTS.md documentation

Resolves GitHub issue #184

Amp-Thread-ID: https://ampcode.com/threads/T-f021ddb8-54e3-41bf-ba7a-071749663c1d
Co-authored-by: Amp <amp@ampcode.com>
Steve Yegge
2025-10-31 23:03:48 -07:00
parent 645af0b72d
commit cc7918daf4
12 changed files with 721 additions and 1 deletions


@@ -117,6 +117,7 @@
{"id":"bd-b6b2","content_hash":"c7a641bdb4c98b14816e2d85ec09db6def89aa4918ad59f4c1f8f71c6a42c6d4","title":"Feature with design","description":"This is a description","design":"Use MVC pattern","acceptance_criteria":"All tests pass","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-31T21:40:34.612465-07:00","updated_at":"2025-10-31T21:40:34.612465-07:00"}
{"id":"bd-bc2c6191","content_hash":"533e56b8628e24229a4beb52f8683355f6ca699e34a73650bf092003d73c2957","title":"Audit Current Cache Usage","description":"Understand exactly what code depends on the storage cache","acceptance_criteria":"- Document showing all cache dependencies\n- Confirmation that removing cache won't break MCP\n- List of tests that need updating\n\nFiles to examine:\n- internal/rpc/server_cache_storage.go (cache implementation)\n- internal/rpc/client.go (how req.Cwd is set)\n- internal/rpc/server_*.go (all getStorageForRequest calls)\n- integrations/beads-mcp/ (MCP multi-repo logic)\n\nTasks:\n- Document all callers of getStorageForRequest()\n- Verify req.Cwd is only set by RPC client for database discovery\n- Confirm MCP server doesn't rely on multi-repo cache behavior\n- Check if any tests assume multi-repo routing\n- Review environment variables: BEADS_DAEMON_MAX_CACHE_SIZE, BEADS_DAEMON_CACHE_TTL, BEADS_DAEMON_MEMORY_THRESHOLD_MB","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-27T23:02:43.506373-07:00","updated_at":"2025-10-31T20:36:49.334214-07:00"}
{"id":"bd-bdaf24d5","content_hash":"6ccdbf2362d22fbbe854fdc666695a7488353799e1a5c49e6095b34178c9bcb4","title":"Final validation test","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-27T18:27:28.310533-07:00","updated_at":"2025-10-31T12:00:43.184995-07:00","closed_at":"2025-10-31T12:00:43.184995-07:00"}
{"id":"bd-c01f","content_hash":"b183cc4d99f74e9314f67d927f1bd1255608632b47ce0352af97af5872275fec","title":"Implement bd stale command to find abandoned/forgotten issues","description":"Add bd stale command to surface issues that haven't been updated recently and may need attention.\n\nUse cases:\n- In-progress issues with no recent activity (may be abandoned)\n- Open issues that have been forgotten\n- Issues that might be outdated or no longer relevant\n\nQuery logic should find non-closed issues where updated_at exceeds a time threshold.\n\nShould support:\n- --days N flag (default 30-90 days)\n- --status filter (e.g., only in_progress)\n- --json output for automation\n\nReferences GitHub issue #184 where user expected this command to exist.","design":"Implementation approach:\n1. Add new command in cmd/bd/stale.go\n2. Query issues with: status != 'closed' AND updated_at \u003c (now - N days)\n3. Support filtering by status (open, in_progress, blocked)\n4. Default threshold: 30 days (configurable via --days)\n5. JSON output for agent consumption\n6. Order by updated_at ASC (oldest first)","status":"closed","priority":2,"issue_type":"epic","created_at":"2025-10-31T22:48:46.85435-07:00","updated_at":"2025-10-31T22:54:33.704492-07:00","closed_at":"2025-10-31T22:54:33.704492-07:00"}
{"id":"bd-c825f867","content_hash":"27cecaa2dc6cdabb2ae77fd65fbf8dca8f4c536bdf140a13b25cdd16376c9845","title":"Add docs/architecture/event_driven.md","description":"Copy event_driven_daemon.md into docs/ folder. Add to documentation index.","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-28T16:20:02.431399-07:00","updated_at":"2025-10-30T17:12:58.221939-07:00"}
{"id":"bd-c947dd1b","content_hash":"79bd51b46b28bc16cfc19cd19a4dd4f57f45cd1e902b682788d355b03ec00b2a","title":"Remove Daemon Storage Cache","description":"The daemon's multi-repo storage cache is the root cause of stale data bugs. Since global daemon is deprecated, we only ever serve one repository, making the cache unnecessary complexity. This epic removes the cache entirely for simpler, more reliable direct storage access.","design":"For local daemon (single repository), eliminate the cache entirely:\n- Use s.storage field directly (opened at daemon startup)\n- Remove getStorageForRequest() routing logic\n- Remove server_cache_storage.go entirely (~300 lines)\n- Remove cache-related tests\n- Simplify Server struct\n\nBenefits:\n✅ No staleness bugs: Always using live SQLite connection\n✅ Simpler code: Remove ~300 lines of cache management\n✅ Easier debugging: Direct storage access, no cache indirection\n✅ Same performance: Cache was always 1 entry for local daemon anyway","acceptance_criteria":"- Daemon has no storage cache code\n- All tests pass\n- MCP integration works\n- No stale data bugs\n- Documentation updated\n- Performance validated","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-28T10:50:15.126939-07:00","updated_at":"2025-10-30T17:12:58.21743-07:00","closed_at":"2025-10-28T10:49:53.612049-07:00"}
{"id":"bd-c9a482db","content_hash":"35c1ad124187c21b4e8dae7db46ea5d00173d33234a9b815ded7dcf0ab51078e","title":"Add internal/ai package for AI-assisted repairs","description":"Add AI integration package to support AI-powered repair commands.\n\nProviders:\n- Anthropic (Claude)\n- OpenAI\n- Ollama (local)\n\nFeatures:\n- Conflict resolution analysis\n- Duplicate detection via embeddings\n- Configuration via env vars (BEADS_AI_PROVIDER, BEADS_AI_API_KEY, etc.)\n\nSee repair_commands.md lines 357-425 for design.","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-28T19:37:55.722841-07:00","updated_at":"2025-10-30T17:12:58.180177-07:00"}


@@ -135,6 +135,11 @@ bd info --json
# Find ready work (no blockers)
bd ready --json
# Find stale issues (not updated recently)
bd stale --days 30 --json # Default: 30 days
bd stale --days 90 --status in_progress --json # Filter by status
bd stale --limit 20 --json # Limit results
# Create new issue
bd create "Issue title" -t bug|feature|task -p 0-4 -d "Description" --json
@@ -327,7 +332,7 @@ bd daemons killall # Restart with default (poll) mode
### Workflow
-1. **Check for ready work**: Run `bd ready` to see what's unblocked
+1. **Check for ready work**: Run `bd ready` to see what's unblocked (or `bd stale` to find forgotten issues)
2. **Claim your task**: `bd update <id> --status in_progress`
3. **Work on it**: Implement, test, document
4. **Discover new work**: If you find bugs or TODOs, create issues:

cmd/bd/stale.go Normal file

@@ -0,0 +1,124 @@
package main
import (
"context"
"encoding/json"
"fmt"
"os"
"time"
"github.com/fatih/color"
"github.com/spf13/cobra"
"github.com/steveyegge/beads/internal/rpc"
"github.com/steveyegge/beads/internal/types"
)
var staleCmd = &cobra.Command{
Use: "stale",
Short: "Show stale issues (not updated recently)",
Long: `Show issues that haven't been updated recently and may need attention.
This helps identify:
- In-progress issues with no recent activity (may be abandoned)
- Open issues that have been forgotten
- Issues that might be outdated or no longer relevant`,
Run: func(cmd *cobra.Command, args []string) {
days, _ := cmd.Flags().GetInt("days")
status, _ := cmd.Flags().GetString("status")
limit, _ := cmd.Flags().GetInt("limit")
jsonOutput, _ := cmd.Flags().GetBool("json")
// Validate status if provided
if status != "" && status != "open" && status != "in_progress" && status != "blocked" {
fmt.Fprintf(os.Stderr, "Error: invalid status '%s'. Valid values: open, in_progress, blocked\n", status)
os.Exit(1)
}
filter := types.StaleFilter{
Days: days,
Status: status,
Limit: limit,
}
// If daemon is running, use RPC
if daemonClient != nil {
staleArgs := &rpc.StaleArgs{
Days: days,
Status: status,
Limit: limit,
}
resp, err := daemonClient.Stale(staleArgs)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
os.Exit(1)
}
var issues []*types.Issue
if err := json.Unmarshal(resp.Data, &issues); err != nil {
fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
os.Exit(1)
}
if jsonOutput {
if issues == nil {
issues = []*types.Issue{}
}
outputJSON(issues)
return
}
displayStaleIssues(issues, days)
return
}
// Direct mode
ctx := context.Background()
issues, err := store.GetStaleIssues(ctx, filter)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
os.Exit(1)
}
if jsonOutput {
if issues == nil {
issues = []*types.Issue{}
}
outputJSON(issues)
return
}
displayStaleIssues(issues, days)
},
}
func displayStaleIssues(issues []*types.Issue, days int) {
if len(issues) == 0 {
green := color.New(color.FgGreen).SprintFunc()
fmt.Printf("\n%s No stale issues found (all active)\n\n", green("✨"))
return
}
yellow := color.New(color.FgYellow).SprintFunc()
fmt.Printf("\n%s Stale issues (%d not updated in %d+ days):\n\n", yellow("⏰"), len(issues), days)
now := time.Now()
for i, issue := range issues {
daysStale := int(now.Sub(issue.UpdatedAt).Hours() / 24)
fmt.Printf("%d. [P%d] %s: %s\n", i+1, issue.Priority, issue.ID, issue.Title)
fmt.Printf(" Status: %s, Last updated: %d days ago\n", issue.Status, daysStale)
if issue.Assignee != "" {
fmt.Printf(" Assignee: %s\n", issue.Assignee)
}
fmt.Println()
}
}
func init() {
staleCmd.Flags().IntP("days", "d", 30, "Issues not updated in this many days")
staleCmd.Flags().StringP("status", "s", "", "Filter by status (open|in_progress|blocked)")
staleCmd.Flags().IntP("limit", "n", 50, "Maximum issues to show")
staleCmd.Flags().Bool("json", false, "Output JSON format")
rootCmd.AddCommand(staleCmd)
}

cmd/bd/stale_test.go Normal file

@@ -0,0 +1,408 @@
package main
import (
"context"
"path/filepath"
"testing"
"time"
"github.com/steveyegge/beads/internal/types"
)
func TestStaleIssues(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, ".beads", "beads.db")
sqliteStore := newTestStore(t, dbPath)
ctx := context.Background()
now := time.Now()
oldTime := now.Add(-40 * 24 * time.Hour) // 40 days ago
recentTime := now.Add(-10 * 24 * time.Hour) // 10 days ago
// Create issues with different update times
issues := []*types.Issue{
{
ID: "test-stale-1",
Title: "Very stale issue",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: oldTime,
UpdatedAt: oldTime,
},
{
ID: "test-stale-2",
Title: "Stale in-progress",
Status: types.StatusInProgress,
Priority: 2,
IssueType: types.TypeTask,
CreatedAt: oldTime,
UpdatedAt: oldTime,
},
{
ID: "test-recent",
Title: "Recently updated",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: oldTime,
UpdatedAt: recentTime,
},
{
ID: "test-closed",
Title: "Closed issue",
Status: types.StatusClosed,
Priority: 2,
IssueType: types.TypeTask,
CreatedAt: oldTime,
UpdatedAt: oldTime,
ClosedAt: ptrTime(oldTime),
},
}
for _, issue := range issues {
if err := sqliteStore.CreateIssue(ctx, issue, "test"); err != nil {
t.Fatal(err)
}
}
// Update timestamps directly in DB (CreateIssue sets updated_at to now)
// Use datetime() function to compute old timestamps
db := sqliteStore.UnderlyingDB()
_, err := db.ExecContext(ctx, "UPDATE issues SET updated_at = datetime('now', '-40 days') WHERE id IN (?, ?)", "test-stale-1", "test-stale-2")
if err != nil {
t.Fatal(err)
}
_, err = db.ExecContext(ctx, "UPDATE issues SET updated_at = datetime('now', '-10 days') WHERE id = ?", "test-recent")
if err != nil {
t.Fatal(err)
}
// Test basic stale detection (30 days)
stale, err := sqliteStore.GetStaleIssues(ctx, types.StaleFilter{
Days: 30,
Limit: 50,
})
if err != nil {
t.Fatalf("GetStaleIssues failed: %v", err)
}
// Should have test-stale-1 and test-stale-2 (not test-recent or test-closed)
if len(stale) != 2 {
t.Errorf("Expected 2 stale issues, got %d", len(stale))
}
// Verify closed issues are excluded
for _, issue := range stale {
if issue.Status == types.StatusClosed {
t.Error("Closed issues should not appear in stale results")
}
if issue.ID == "test-closed" {
t.Error("test-closed should not be in stale results")
}
if issue.ID == "test-recent" {
t.Error("test-recent should not be in stale results (updated 10 days ago)")
}
}
// Verify issues are sorted by updated_at (oldest first)
for i := 0; i < len(stale)-1; i++ {
if stale[i].UpdatedAt.After(stale[i+1].UpdatedAt) {
t.Error("Stale issues should be sorted by updated_at ascending (oldest first)")
}
}
}
func TestStaleIssuesWithStatusFilter(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, ".beads", "beads.db")
sqliteStore := newTestStore(t, dbPath)
ctx := context.Background()
oldTime := time.Now().Add(-40 * 24 * time.Hour)
// Create stale issues with different statuses
issues := []*types.Issue{
{
ID: "test-open",
Title: "Stale open",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: oldTime,
UpdatedAt: oldTime,
},
{
ID: "test-in-progress",
Title: "Stale in-progress",
Status: types.StatusInProgress,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: oldTime,
UpdatedAt: oldTime,
},
{
ID: "test-blocked",
Title: "Stale blocked",
Status: types.StatusBlocked,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: oldTime,
UpdatedAt: oldTime,
},
}
for _, issue := range issues {
if err := sqliteStore.CreateIssue(ctx, issue, "test"); err != nil {
t.Fatal(err)
}
}
// Update timestamps directly in DB using datetime() function
db := sqliteStore.UnderlyingDB()
_, err := db.ExecContext(ctx, "UPDATE issues SET updated_at = datetime('now', '-40 days') WHERE id IN (?, ?, ?)",
"test-open", "test-in-progress", "test-blocked")
if err != nil {
t.Fatal(err)
}
// Test status filter: only in_progress
stale, err := sqliteStore.GetStaleIssues(ctx, types.StaleFilter{
Days: 30,
Status: "in_progress",
Limit: 50,
})
if err != nil {
t.Fatalf("GetStaleIssues with status filter failed: %v", err)
}
if len(stale) != 1 {
t.Errorf("Expected 1 in_progress stale issue, got %d", len(stale))
}
if len(stale) > 0 && stale[0].Status != types.StatusInProgress {
t.Errorf("Expected status=in_progress, got %s", stale[0].Status)
}
// Test status filter: only open
staleOpen, err := sqliteStore.GetStaleIssues(ctx, types.StaleFilter{
Days: 30,
Status: "open",
Limit: 50,
})
if err != nil {
t.Fatalf("GetStaleIssues with status=open failed: %v", err)
}
if len(staleOpen) != 1 {
t.Errorf("Expected 1 open stale issue, got %d", len(staleOpen))
}
if len(staleOpen) > 0 && staleOpen[0].Status != types.StatusOpen {
t.Errorf("Expected status=open, got %s", staleOpen[0].Status)
}
}
func TestStaleIssuesWithLimit(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, ".beads", "beads.db")
sqliteStore := newTestStore(t, dbPath)
ctx := context.Background()
oldTime := time.Now().Add(-40 * 24 * time.Hour)
// Create multiple stale issues
for i := 1; i <= 5; i++ {
updatedAt := oldTime.Add(time.Duration(i) * time.Hour) // Slightly different times for sorting
issue := &types.Issue{
ID: "test-stale-" + string(rune('0'+i)),
Title: "Stale issue",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: oldTime,
UpdatedAt: updatedAt,
}
if err := sqliteStore.CreateIssue(ctx, issue, "test"); err != nil {
t.Fatal(err)
}
}
// Update timestamps directly in DB using datetime() function
db := sqliteStore.UnderlyingDB()
for i := 1; i <= 5; i++ {
id := "test-stale-" + string(rune('0'+i))
// Make each slightly different (40 days ago + i hours)
_, err := db.ExecContext(ctx, "UPDATE issues SET updated_at = datetime('now', '-40 days', '+' || ? || ' hours') WHERE id = ?", i, id)
if err != nil {
t.Fatal(err)
}
}
// Test with limit
stale, err := sqliteStore.GetStaleIssues(ctx, types.StaleFilter{
Days: 30,
Limit: 2,
})
if err != nil {
t.Fatalf("GetStaleIssues with limit failed: %v", err)
}
if len(stale) != 2 {
t.Errorf("Expected 2 issues with limit=2, got %d", len(stale))
}
}
func TestStaleIssuesEmpty(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, ".beads", "beads.db")
sqliteStore := newTestStore(t, dbPath)
ctx := context.Background()
recentTime := time.Now().Add(-10 * 24 * time.Hour)
// Create only recent issues
issue := &types.Issue{
ID: "test-recent",
Title: "Recent issue",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: recentTime,
UpdatedAt: recentTime,
}
if err := sqliteStore.CreateIssue(ctx, issue, "test"); err != nil {
t.Fatal(err)
}
// Test stale detection with no stale issues
stale, err := sqliteStore.GetStaleIssues(ctx, types.StaleFilter{
Days: 30,
Limit: 50,
})
if err != nil {
t.Fatalf("GetStaleIssues failed: %v", err)
}
if len(stale) != 0 {
t.Errorf("Expected 0 stale issues, got %d", len(stale))
}
}
func TestStaleIssuesDifferentDaysThreshold(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, ".beads", "beads.db")
sqliteStore := newTestStore(t, dbPath)
ctx := context.Background()
now := time.Now()
time20DaysAgo := now.Add(-20 * 24 * time.Hour)
time50DaysAgo := now.Add(-50 * 24 * time.Hour)
issues := []*types.Issue{
{
ID: "test-20-days",
Title: "20 days stale",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: time20DaysAgo,
UpdatedAt: time20DaysAgo,
},
{
ID: "test-50-days",
Title: "50 days stale",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: time50DaysAgo,
UpdatedAt: time50DaysAgo,
},
}
for _, issue := range issues {
if err := sqliteStore.CreateIssue(ctx, issue, "test"); err != nil {
t.Fatal(err)
}
}
// Update timestamps directly in DB using datetime() function
db := sqliteStore.UnderlyingDB()
_, err := db.ExecContext(ctx, "UPDATE issues SET updated_at = datetime('now', '-20 days') WHERE id = ?", "test-20-days")
if err != nil {
t.Fatal(err)
}
_, err = db.ExecContext(ctx, "UPDATE issues SET updated_at = datetime('now', '-50 days') WHERE id = ?", "test-50-days")
if err != nil {
t.Fatal(err)
}
// Test with 30 days threshold - should get only the 50-day-old issue
stale30, err := sqliteStore.GetStaleIssues(ctx, types.StaleFilter{
Days: 30,
Limit: 50,
})
if err != nil {
t.Fatalf("GetStaleIssues(30 days) failed: %v", err)
}
if len(stale30) != 1 {
t.Errorf("Expected 1 issue stale for 30+ days, got %d", len(stale30))
}
// Test with 10 days threshold - should get both
stale10, err := sqliteStore.GetStaleIssues(ctx, types.StaleFilter{
Days: 10,
Limit: 50,
})
if err != nil {
t.Fatalf("GetStaleIssues(10 days) failed: %v", err)
}
if len(stale10) != 2 {
t.Errorf("Expected 2 issues stale for 10+ days, got %d", len(stale10))
}
// Test with 60 days threshold - should get none (50 days < 60)
stale60, err := sqliteStore.GetStaleIssues(ctx, types.StaleFilter{
Days: 60,
Limit: 50,
})
if err != nil {
t.Fatalf("GetStaleIssues(60 days) failed: %v", err)
}
if len(stale60) != 0 {
t.Errorf("Expected 0 issues stale for 60+ days, got %d", len(stale60))
}
}
func TestStaleCommandInit(t *testing.T) {
if staleCmd == nil {
t.Fatal("staleCmd should be initialized")
}
if staleCmd.Use != "stale" {
t.Errorf("Expected Use='stale', got %q", staleCmd.Use)
}
if len(staleCmd.Short) == 0 {
t.Error("staleCmd should have Short description")
}
// Check flags are defined
flags := staleCmd.Flags()
if flags.Lookup("days") == nil {
t.Error("staleCmd should have --days flag")
}
if flags.Lookup("status") == nil {
t.Error("staleCmd should have --status flag")
}
if flags.Lookup("limit") == nil {
t.Error("staleCmd should have --limit flag")
}
if flags.Lookup("json") == nil {
t.Error("staleCmd should have --json flag")
}
}


@@ -265,6 +265,11 @@ func (c *Client) Ready(args *ReadyArgs) (*Response, error) {
return c.Execute(OpReady, args)
}
// Stale gets stale issues via the daemon
func (c *Client) Stale(args *StaleArgs) (*Response, error) {
return c.Execute(OpStale, args)
}
// Stats gets statistics via the daemon
func (c *Client) Stats() (*Response, error) {
return c.Execute(OpStats, nil)


@@ -16,6 +16,7 @@ const (
OpList = "list"
OpShow = "show"
OpReady = "ready"
OpStale = "stale"
OpStats = "stats"
OpDepAdd = "dep_add"
OpDepRemove = "dep_remove"
@@ -118,6 +119,13 @@ type ReadyArgs struct {
SortPolicy string `json:"sort_policy,omitempty"`
}
// StaleArgs represents arguments for the stale command
type StaleArgs struct {
Days int `json:"days,omitempty"`
Status string `json:"status,omitempty"`
Limit int `json:"limit,omitempty"`
}
// DepAddArgs represents arguments for adding a dependency
type DepAddArgs struct {
FromID string `json:"from_id"`


@@ -429,6 +429,39 @@ func (s *Server) handleReady(req *Request) Response {
}
}
func (s *Server) handleStale(req *Request) Response {
var staleArgs StaleArgs
if err := json.Unmarshal(req.Args, &staleArgs); err != nil {
return Response{
Success: false,
Error: fmt.Sprintf("invalid stale args: %v", err),
}
}
store := s.storage
filter := types.StaleFilter{
Days: staleArgs.Days,
Status: staleArgs.Status,
Limit: staleArgs.Limit,
}
ctx := s.reqCtx(req)
issues, err := store.GetStaleIssues(ctx, filter)
if err != nil {
return Response{
Success: false,
Error: fmt.Sprintf("failed to get stale issues: %v", err),
}
}
data, _ := json.Marshal(issues)
return Response{
Success: true,
Data: data,
}
}
func (s *Server) handleStats(req *Request) Response {
store := s.storage


@@ -172,6 +172,8 @@ func (s *Server) handleRequest(req *Request) Response {
resp = s.handleResolveID(req)
case OpReady:
resp = s.handleReady(req)
case OpStale:
resp = s.handleStale(req)
case OpStats:
resp = s.handleStats(req)
case OpDepAdd:


@@ -731,6 +731,37 @@ func (m *MemoryStorage) GetEpicsEligibleForClosure(ctx context.Context) ([]*type
return nil, nil
}
func (m *MemoryStorage) GetStaleIssues(ctx context.Context, filter types.StaleFilter) ([]*types.Issue, error) {
m.mu.RLock()
defer m.mu.RUnlock()
cutoff := time.Now().AddDate(0, 0, -filter.Days)
var stale []*types.Issue
for _, issue := range m.issues {
if issue.Status == types.StatusClosed {
continue
}
if filter.Status != "" && string(issue.Status) != filter.Status {
continue
}
if issue.UpdatedAt.Before(cutoff) {
stale = append(stale, issue)
}
}
// Sort by updated_at ascending (oldest first)
sort.Slice(stale, func(i, j int) bool {
return stale[i].UpdatedAt.Before(stale[j].UpdatedAt)
})
if filter.Limit > 0 && len(stale) > filter.Limit {
stale = stale[:filter.Limit]
}
return stale, nil
}
func (m *MemoryStorage) AddComment(ctx context.Context, issueID, actor, comment string) error {
return nil
}


@@ -106,6 +106,101 @@ func (s *SQLiteStorage) GetReadyWork(ctx context.Context, filter types.WorkFilte
return s.scanIssues(ctx, rows)
}
// GetStaleIssues returns issues that haven't been updated recently
func (s *SQLiteStorage) GetStaleIssues(ctx context.Context, filter types.StaleFilter) ([]*types.Issue, error) {
// Build query with optional status filter
query := `
SELECT
id, content_hash, title, description, design, acceptance_criteria, notes,
status, priority, issue_type, assignee, estimated_minutes,
created_at, updated_at, closed_at, external_ref,
compaction_level, compacted_at, compacted_at_commit, original_size
FROM issues
WHERE status != 'closed'
AND datetime(updated_at) < datetime('now', '-' || ? || ' days')
`
args := []interface{}{filter.Days}
// Add optional status filter
if filter.Status != "" {
query += " AND status = ?"
args = append(args, filter.Status)
}
query += " ORDER BY updated_at ASC"
// Add limit
if filter.Limit > 0 {
query += " LIMIT ?"
args = append(args, filter.Limit)
}
rows, err := s.db.QueryContext(ctx, query, args...)
if err != nil {
return nil, fmt.Errorf("failed to query stale issues: %w", err)
}
defer func() { _ = rows.Close() }()
var issues []*types.Issue
for rows.Next() {
var issue types.Issue
var closedAt sql.NullTime
var estimatedMinutes sql.NullInt64
var assignee sql.NullString
var externalRef sql.NullString
var contentHash sql.NullString
var compactionLevel sql.NullInt64
var compactedAt sql.NullTime
var compactedAtCommit sql.NullString
var originalSize sql.NullInt64
err := rows.Scan(
&issue.ID, &contentHash, &issue.Title, &issue.Description, &issue.Design,
&issue.AcceptanceCriteria, &issue.Notes, &issue.Status,
&issue.Priority, &issue.IssueType, &assignee, &estimatedMinutes,
&issue.CreatedAt, &issue.UpdatedAt, &closedAt, &externalRef,
&compactionLevel, &compactedAt, &compactedAtCommit, &originalSize,
)
if err != nil {
return nil, fmt.Errorf("failed to scan stale issue: %w", err)
}
if contentHash.Valid {
issue.ContentHash = contentHash.String
}
if closedAt.Valid {
issue.ClosedAt = &closedAt.Time
}
if estimatedMinutes.Valid {
mins := int(estimatedMinutes.Int64)
issue.EstimatedMinutes = &mins
}
if assignee.Valid {
issue.Assignee = assignee.String
}
if externalRef.Valid {
issue.ExternalRef = &externalRef.String
}
if compactionLevel.Valid {
issue.CompactionLevel = int(compactionLevel.Int64)
}
if compactedAt.Valid {
issue.CompactedAt = &compactedAt.Time
}
if compactedAtCommit.Valid {
issue.CompactedAtCommit = &compactedAtCommit.String
}
if originalSize.Valid {
issue.OriginalSize = int(originalSize.Int64)
}
issues = append(issues, &issue)
}
return issues, rows.Err()
}
// GetBlockedIssues returns issues that are blocked by dependencies
func (s *SQLiteStorage) GetBlockedIssues(ctx context.Context) ([]*types.BlockedIssue, error) {
// Use GROUP_CONCAT to get all blocker IDs in a single query (no N+1)


@@ -38,6 +38,7 @@ type Storage interface {
GetReadyWork(ctx context.Context, filter types.WorkFilter) ([]*types.Issue, error)
GetBlockedIssues(ctx context.Context) ([]*types.BlockedIssue, error)
GetEpicsEligibleForClosure(ctx context.Context) ([]*types.EpicStatus, error)
GetStaleIssues(ctx context.Context, filter types.StaleFilter) ([]*types.Issue, error)
// Events
AddComment(ctx context.Context, issueID, actor, comment string) error


@@ -289,6 +289,13 @@ type WorkFilter struct {
SortPolicy SortPolicy
}
// StaleFilter is used to filter stale issue queries
type StaleFilter struct {
Days int // Issues not updated in this many days
Status string // Filter by status (open|in_progress|blocked), empty = all non-closed
Limit int // Maximum issues to return
}
// EpicStatus represents an epic with its completion status
type EpicStatus struct {
Epic *Issue `json:"epic"`