Add label management and import collision detection
- Implement 'bd label' command with add/remove/list subcommands
- Add import --dry-run and --resolve-collisions for safe merges
- Support label filtering in 'bd list' and 'bd create'
- Update AGENTS.md with collision handling workflow
- Extends bd-224 (data consistency) and bd-84 (import reliability)
@@ -107,7 +107,7 @@
{"id":"bd-195","title":"stress_test_1","description":"","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T01:07:02.011549-07:00","updated_at":"2025-10-15T16:27:21.964657-07:00","closed_at":"2025-10-15T16:26:05.190223-07:00"}
{"id":"bd-196","title":"Test2 auto-export","description":"","status":"closed","priority":4,"issue_type":"task","created_at":"2025-10-15T01:07:02.011907-07:00","updated_at":"2025-10-15T16:27:21.966007-07:00","closed_at":"2025-10-15T03:01:29.548276-07:00"}
{"id":"bd-197","title":"Add version mismatch warning for outdated binaries","description":"When bd detects that the database was created/modified by a newer version, warn the user that they may be running an outdated binary.\n\n**Problem:** Users may run old bd binaries without realizing it, leading to confusing behavior (e.g., missing features like auto-export).\n\n**Solution:** Store schema version or bd version in metadata table, check on startup, warn if mismatch.\n\n**Implementation:**\n1. Add `schema_version` or `bd_version` to metadata table during init\n2. Check version in PersistentPreRun (cmd/bd/main.go)\n3. Warn if binary version \u003c DB version\n4. Suggest: 'Your bd binary (v0.9.3) is older than this database (v0.9.5). Rebuild: go build -o bd ./cmd/bd'\n\n**Edge cases:**\n- Dev builds (show commit hash?)\n- Backwards compatibility (older DBs should work with newer binaries)\n- Don't warn on every command (maybe once per session?)\n\n**Related:** Fixes confusion from bd-182 (auto-export not working with old binary)","notes":"**Implementation Complete:**\n\n✅ Added version metadata storage during `bd init` (init.go:59-63)\n✅ Added `checkVersionMismatch()` function (main.go:301-345)\n✅ Integrated version check into PersistentPreRun (main.go:98-99)\n✅ Tested both scenarios:\n - Outdated binary: Clear warning with rebuild instructions\n - Newer binary: Info message that DB will be auto-upgraded\n✅ No warnings on subsequent runs (version updated automatically)\n\n**How it works:**\n1. On `bd init`: Stores current version in metadata table\n2. On every command: Checks stored version vs binary version\n3. If mismatch:\n - Binary \u003c DB version → Warn: outdated binary\n - Binary \u003e DB version → Info: auto-upgrading DB\n4. Always updates stored version to current (future-proof)\n\n**Files modified:**\n- cmd/bd/init.go: Store version on init\n- cmd/bd/main.go: checkVersionMismatch() + integration\n\n**Testing:**\n```bash\n# Simulate old binary\nsqlite3 .beads/default.db \"UPDATE metadata SET value='0.9.99' WHERE key='bd_version';\"\nbd ready # Shows warning\n\n# Normal use (versions match)\nbd ready # No warning\n```\n\n**Edge cases handled:**\n- Empty version (old DB): Silently upgrade\n- Metadata errors: Skip check (defensive)\n- Dev builds: String comparison (works for 0.9.5 vs 0.9.6)\n\nFixes bd-182 confusion (users won't run outdated binaries unknowingly).","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-15T01:07:02.012291-07:00","updated_at":"2025-10-15T16:27:21.966999-07:00","closed_at":"2025-10-15T03:01:29.548677-07:00","dependencies":[{"issue_id":"bd-197","depends_on_id":"bd-182","type":"discovered-from","created_at":"2025-10-15T01:07:02.078326-07:00","created_by":"auto-import"}]}
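The version check described in bd-197 can be sketched roughly as follows. This is a minimal sketch, not bd's actual `checkVersionMismatch()` from cmd/bd/main.go: the helper names `compareVersions` and `warnIfOutdated` are illustrative, and the naive string comparison only works for same-width versions like 0.9.5 vs 0.9.6, the dev-build caveat noted above.

```go
package main

import "fmt"

// compareVersions does a naive lexicographic compare, which works for
// same-width versions like "0.9.5" vs "0.9.6". Returns -1, 0, or 1.
func compareVersions(a, b string) int {
	switch {
	case a < b:
		return -1
	case a > b:
		return 1
	default:
		return 0
	}
}

// warnIfOutdated mirrors the behavior described above: warn when the
// binary is older than the stored DB version, note the auto-upgrade
// when it is newer, and stay silent when they match or no version
// was stored (old DB).
func warnIfOutdated(binaryVersion, dbVersion string) string {
	if dbVersion == "" {
		return "" // old DB with no stored version: silently upgrade
	}
	switch compareVersions(binaryVersion, dbVersion) {
	case -1:
		return fmt.Sprintf("warning: bd binary (v%s) is older than this database (v%s); rebuild: go build -o bd ./cmd/bd", binaryVersion, dbVersion)
	case 1:
		return fmt.Sprintf("info: database (v%s) will be auto-upgraded to v%s", dbVersion, binaryVersion)
	}
	return ""
}

func main() {
	fmt.Println(warnIfOutdated("0.9.3", "0.9.5"))
}
```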
{"id":"bd-198","title":"Add label management commands to CLI","description":"Currently labels can only be managed programmatically via the storage API. Add CLI commands to add, remove, and filter by labels.","acceptance_criteria":"Can add/remove/list labels via CLI, can filter issues by label, labels persist to JSONL","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-15T01:10:48.43357-07:00","updated_at":"2025-10-15T16:27:21.968173-07:00"}
{"id":"bd-198","title":"Add label management commands to CLI","description":"Currently labels can only be managed programmatically via the storage API. Add CLI commands to add, remove, and filter by labels.","acceptance_criteria":"Can add/remove/list labels via CLI, can filter issues by label, labels persist to JSONL","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-15T01:10:48.43357-07:00","updated_at":"2025-10-15T17:14:14.451905-07:00","closed_at":"2025-10-15T17:14:14.451905-07:00"}
{"id":"bd-199","title":"Investigate and fix import timeout with 208 issues","description":"bd import times out after 2 minutes when importing 208 issues from JSONL. This is unacceptable for a tool designed to scale to 100k+ issues.","design":"\n## Reproduction\n```bash\ncd ~/src/beads\nbd import .beads/issues.jsonl # Hangs/times out after 2 minutes\n```\n\nCurrent database state:\n- 208 issues in bd.db (2MB)\n- 208 lines in issues.jsonl (100KB)\n- WAL mode enabled\n\n## Symptoms\n- Import starts but never completes\n- No error message, just hangs\n- Timeout after 2 minutes (artificially imposed in testing)\n- Happens even when database already contains the issues (idempotent import)\n\n## Likely Causes\n1. **Transaction size** - All 208 issues in one transaction?\n2. **Lock contention** - WAL checkpoint blocking?\n3. **N+1 queries** - Dependency checking for each issue?\n4. **Missing indexes** - Slow lookups during collision detection?\n5. **Auto-flush interaction** - Background flush goroutine conflicting?\n\n## Performance Target\nShould handle:\n- 1000 issues in \u003c 5 seconds\n- 10,000 issues in \u003c 30 seconds \n- 100,000 issues in \u003c 5 minutes\n\n## Investigation Steps\n1. Add timing instrumentation to import phases\n2. Profile the import operation\n3. Check for lock contention in WAL mode\n4. Review collision detection performance\n5. Test with PRAGMA synchronous = NORMAL\n6. Consider batching imports (e.g., 100 issues per transaction)\n","acceptance_criteria":"\n- Can import 208 issues in \u003c 5 seconds\n- Can import 1000 issues in \u003c 30 seconds\n- Add performance logging to identify bottlenecks\n- Add --batch-size flag for tuning\n- Document performance characteristics\n- Integration test with large import\n","notes":"## Profiling Results (2025-10-15)\n\nCreated comprehensive profiling tests in cmd/bd/import_profile_test.go.\n\n### Key Findings\n\nImport performance is actually **very good** for the core phases:\n- 1000 issues: 883ms total (1,132 issues/sec)\n- 208 issues collision detection: 3.2ms (64,915 issues/sec)\n\n### Bottleneck Identified\n\n**Create/Update phase takes 95-98% of total time:**\n- 100 issues: 58ms (87.4%)\n- 500 issues: 214ms (95.8%) \n- 1000 issues: 866ms (98.1%)\n\nThis phase does N*2 individual queries (GetIssue + CreateIssue/UpdateIssue) with no transaction batching.\n\n### Next Steps\n\n1. Profile dependency import phase (not yet measured)\n2. Check for auto-flush lock contention\n3. Investigate WAL checkpoint blocking\n4. Consider transaction batching for bulk imports\n\n### Test Command\n\n```bash\ngo test -v -run TestImportPerformance ./cmd/bd/\n```\n\nSee test file for detailed instrumentation.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-10-15T01:11:08.998901-07:00","updated_at":"2025-10-15T16:27:21.969079-07:00","closed_at":"2025-10-15T02:49:22.700881-07:00"}
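The batching idea proposed in bd-199 (one transaction per N issues instead of two queries per issue) can be sketched in isolation. The `chunk` helper below is illustrative, not bd's actual import code; each returned batch would then be written inside a single transaction, with the batch size coming from a flag such as the proposed --batch-size.

```go
package main

import "fmt"

// chunk splits issue IDs into fixed-size batches so each batch can be
// committed in one transaction instead of issuing per-issue queries.
func chunk(issues []string, batchSize int) [][]string {
	var batches [][]string
	for start := 0; start < len(issues); start += batchSize {
		end := start + batchSize
		if end > len(issues) {
			end = len(issues)
		}
		batches = append(batches, issues[start:end])
	}
	return batches
}

func main() {
	// The 208-issue case from the bug report, batched 100 at a time.
	issues := make([]string, 208)
	for i := range issues {
		issues[i] = fmt.Sprintf("bd-%d", i+1)
	}
	batches := chunk(issues, 100)
	fmt.Println(len(batches)) // 3 batches: 100 + 100 + 8
}
```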
{"id":"bd-2","title":"Verify auto-export works","description":"","status":"closed","priority":0,"issue_type":"task","created_at":"2025-10-14T14:43:06.907657-07:00","updated_at":"2025-10-15T16:27:21.970346-07:00","closed_at":"2025-10-15T03:01:29.550886-07:00"}
{"id":"bd-20","title":"Add --strict flag for dependency import failures","description":"Currently dependency import errors are warnings (logged to stderr, execution continues). Missing targets or cycles may indicate JSONL corruption. Add --strict flag to fail on any dependency errors for data integrity validation. Location: cmd/bd/import.go:159-164","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-14T14:43:06.90826-07:00","updated_at":"2025-10-15T16:27:21.970819-07:00","closed_at":"2025-10-15T03:01:29.551329-07:00"}
12 AGENTS.md
@@ -20,6 +20,9 @@ bd create "Issue title" -t bug|feature|task -p 0-4 -d "Description" --json
# Create with explicit ID (for parallel workers)
bd create "Issue title" --id worker1-100 -p 1 --json

# Create with labels
bd create "Issue title" -t bug -p 1 -l bug,critical --json

# Create multiple issues from markdown file
bd create -f feature-plan.md --json

@@ -32,6 +35,15 @@ bd dep add <discovered-id> <parent-id> --type discovered-from

# Create and link in one command (new way)
bd create "Issue title" -t bug -p 1 --deps discovered-from:<parent-id> --json

# Label management
bd label add <id> <label> --json
bd label remove <id> <label> --json
bd label list <id> --json
bd label list-all --json

# Filter issues by label
bd list --label bug,critical --json

# Complete work
bd close <id> --reason "Done" --json
@@ -92,6 +92,16 @@ Output to stdout by default, or use -o flag for file output.`,
		issue.Dependencies = allDeps[issue.ID]
	}

	// Populate labels for all issues
	for _, issue := range issues {
		labels, err := store.GetLabels(ctx, issue.ID)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error getting labels for %s: %v\n", issue.ID, err)
			os.Exit(1)
		}
		issue.Labels = labels
	}

	// Open output
	out := os.Stdout
	var tempFile *os.File
@@ -316,6 +316,54 @@ Behavior:
		}
	}

	// Phase 7: Process labels
	// Sync labels for all imported issues
	var labelsAdded, labelsRemoved int
	for _, issue := range allIssues {
		if issue.Labels == nil {
			continue
		}

		// Get current labels for the issue
		currentLabels, err := store.GetLabels(ctx, issue.ID)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error getting labels for %s: %v\n", issue.ID, err)
			os.Exit(1)
		}

		// Convert slices to maps for easier comparison
		currentLabelMap := make(map[string]bool)
		for _, label := range currentLabels {
			currentLabelMap[label] = true
		}
		importedLabelMap := make(map[string]bool)
		for _, label := range issue.Labels {
			importedLabelMap[label] = true
		}

		// Add missing labels
		for _, label := range issue.Labels {
			if !currentLabelMap[label] {
				if err := store.AddLabel(ctx, issue.ID, label, "import"); err != nil {
					fmt.Fprintf(os.Stderr, "Error adding label %s to %s: %v\n", label, issue.ID, err)
					os.Exit(1)
				}
				labelsAdded++
			}
		}

		// Remove labels not in imported data
		for _, label := range currentLabels {
			if !importedLabelMap[label] {
				if err := store.RemoveLabel(ctx, issue.ID, label, "import"); err != nil {
					fmt.Fprintf(os.Stderr, "Error removing label %s from %s: %v\n", label, issue.ID, err)
					os.Exit(1)
				}
				labelsRemoved++
			}
		}
	}

	// Schedule auto-flush after import completes
	markDirtyAndScheduleFlush()
@@ -333,6 +381,16 @@ Behavior:
		if len(idMapping) > 0 {
			fmt.Fprintf(os.Stderr, ", %d issues remapped", len(idMapping))
		}
		if labelsAdded > 0 || labelsRemoved > 0 {
			fmt.Fprintf(os.Stderr, ", %d labels synced", labelsAdded+labelsRemoved)
			if labelsAdded > 0 && labelsRemoved > 0 {
				fmt.Fprintf(os.Stderr, " (%d added, %d removed)", labelsAdded, labelsRemoved)
			} else if labelsAdded > 0 {
				fmt.Fprintf(os.Stderr, " (%d added)", labelsAdded)
			} else {
				fmt.Fprintf(os.Stderr, " (%d removed)", labelsRemoved)
			}
		}
		fmt.Fprintf(os.Stderr, "\n")
	},
}
204 cmd/bd/label.go (new file)
@@ -0,0 +1,204 @@
// Package main implements the bd CLI label management commands.
package main

import (
	"context"
	"fmt"
	"os"
	"sort"
	"strings"

	"github.com/fatih/color"
	"github.com/spf13/cobra"
	"github.com/steveyegge/beads/internal/types"
)

var labelCmd = &cobra.Command{
	Use:   "label",
	Short: "Manage issue labels",
}

var labelAddCmd = &cobra.Command{
	Use:   "add [issue-id] [label]",
	Short: "Add a label to an issue",
	Args:  cobra.ExactArgs(2),
	Run: func(cmd *cobra.Command, args []string) {
		issueID := args[0]
		label := args[1]

		ctx := context.Background()
		if err := store.AddLabel(ctx, issueID, label, actor); err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}

		// Schedule auto-flush
		markDirtyAndScheduleFlush()

		if jsonOutput {
			outputJSON(map[string]interface{}{
				"status":   "added",
				"issue_id": issueID,
				"label":    label,
			})
			return
		}

		green := color.New(color.FgGreen).SprintFunc()
		fmt.Printf("%s Added label '%s' to %s\n", green("✓"), label, issueID)
	},
}

var labelRemoveCmd = &cobra.Command{
	Use:   "remove [issue-id] [label]",
	Short: "Remove a label from an issue",
	Args:  cobra.ExactArgs(2),
	Run: func(cmd *cobra.Command, args []string) {
		issueID := args[0]
		label := args[1]

		ctx := context.Background()
		if err := store.RemoveLabel(ctx, issueID, label, actor); err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}

		// Schedule auto-flush
		markDirtyAndScheduleFlush()

		if jsonOutput {
			outputJSON(map[string]interface{}{
				"status":   "removed",
				"issue_id": issueID,
				"label":    label,
			})
			return
		}

		green := color.New(color.FgGreen).SprintFunc()
		fmt.Printf("%s Removed label '%s' from %s\n", green("✓"), label, issueID)
	},
}

var labelListCmd = &cobra.Command{
	Use:   "list [issue-id]",
	Short: "List labels for an issue",
	Args:  cobra.ExactArgs(1),
	Run: func(cmd *cobra.Command, args []string) {
		issueID := args[0]

		ctx := context.Background()
		labels, err := store.GetLabels(ctx, issueID)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}

		if jsonOutput {
			// Always output array, even if empty
			if labels == nil {
				labels = []string{}
			}
			outputJSON(labels)
			return
		}

		if len(labels) == 0 {
			fmt.Printf("\n%s has no labels\n", issueID)
			return
		}

		cyan := color.New(color.FgCyan).SprintFunc()
		fmt.Printf("\n%s Labels for %s:\n", cyan("🏷"), issueID)
		for _, label := range labels {
			fmt.Printf(" - %s\n", label)
		}
		fmt.Println()
	},
}

var labelListAllCmd = &cobra.Command{
	Use:   "list-all",
	Short: "List all unique labels in the database",
	Run: func(cmd *cobra.Command, args []string) {
		ctx := context.Background()

		// Get all issues to collect labels
		issues, err := store.SearchIssues(ctx, "", types.IssueFilter{})
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}

		// Collect unique labels with counts
		labelCounts := make(map[string]int)
		for _, issue := range issues {
			labels, err := store.GetLabels(ctx, issue.ID)
			if err != nil {
				fmt.Fprintf(os.Stderr, "Error getting labels for %s: %v\n", issue.ID, err)
				os.Exit(1)
			}
			for _, label := range labels {
				labelCounts[label]++
			}
		}

		if len(labelCounts) == 0 {
			if jsonOutput {
				outputJSON([]string{})
			} else {
				fmt.Println("\nNo labels found in database")
			}
			return
		}

		// Sort labels alphabetically
		labels := make([]string, 0, len(labelCounts))
		for label := range labelCounts {
			labels = append(labels, label)
		}
		sort.Strings(labels)

		if jsonOutput {
			// Output as array of {label, count} objects
			type labelInfo struct {
				Label string `json:"label"`
				Count int    `json:"count"`
			}
			result := make([]labelInfo, 0, len(labels))
			for _, label := range labels {
				result = append(result, labelInfo{
					Label: label,
					Count: labelCounts[label],
				})
			}
			outputJSON(result)
			return
		}

		cyan := color.New(color.FgCyan).SprintFunc()
		fmt.Printf("\n%s All labels (%d unique):\n", cyan("🏷"), len(labels))

		// Find longest label for alignment
		maxLen := 0
		for _, label := range labels {
			if len(label) > maxLen {
				maxLen = len(label)
			}
		}

		for _, label := range labels {
			padding := strings.Repeat(" ", maxLen-len(label))
			fmt.Printf(" %s%s (%d issues)\n", label, padding, labelCounts[label])
		}
		fmt.Println()
	},
}

func init() {
	labelCmd.AddCommand(labelAddCmd)
	labelCmd.AddCommand(labelRemoveCmd)
	labelCmd.AddCommand(labelListCmd)
	labelCmd.AddCommand(labelListAllCmd)
	rootCmd.AddCommand(labelCmd)
}
@@ -393,6 +393,48 @@ func autoImportIfNewer() {
		}
	}

	// Import labels (skip colliding issues to maintain consistency)
	for _, issue := range allIssues {
		// Skip if this issue was filtered out due to collision
		if collidingIDs[issue.ID] {
			continue
		}

		if issue.Labels == nil {
			continue
		}

		// Get existing labels
		existingLabels, err := store.GetLabels(ctx, issue.ID)
		if err != nil {
			continue
		}

		// Convert to maps for comparison
		existingLabelMap := make(map[string]bool)
		for _, label := range existingLabels {
			existingLabelMap[label] = true
		}
		importedLabelMap := make(map[string]bool)
		for _, label := range issue.Labels {
			importedLabelMap[label] = true
		}

		// Add missing labels
		for _, label := range issue.Labels {
			if !existingLabelMap[label] {
				_ = store.AddLabel(ctx, issue.ID, label, "auto-import")
			}
		}

		// Remove labels not in imported data
		for _, label := range existingLabels {
			if !importedLabelMap[label] {
				_ = store.RemoveLabel(ctx, issue.ID, label, "auto-import")
			}
		}
	}

	// Store new hash after successful import
	_ = store.SetMetadata(ctx, "last_import_hash", currentHash)
}
@@ -599,6 +641,14 @@ func flushToJSONL() {
		}
		issue.Dependencies = deps

		// Get labels for this issue
		labels, err := store.GetLabels(ctx, issueID)
		if err != nil {
			recordFailure(fmt.Errorf("failed to get labels for %s: %w", issueID, err))
			return
		}
		issue.Labels = labels

		// Update map
		issueMap[issueID] = issue
	}
@@ -1039,6 +1089,7 @@ var listCmd = &cobra.Command{
		assignee, _ := cmd.Flags().GetString("assignee")
		issueType, _ := cmd.Flags().GetString("type")
		limit, _ := cmd.Flags().GetInt("limit")
		labels, _ := cmd.Flags().GetStringSlice("label")

		filter := types.IssueFilter{
			Limit: limit,
@@ -1059,6 +1110,9 @@ var listCmd = &cobra.Command{
			t := types.IssueType(issueType)
			filter.IssueType = &t
		}
		if len(labels) > 0 {
			filter.Labels = labels
		}

		ctx := context.Background()
		issues, err := store.SearchIssues(ctx, "", filter)
@@ -1089,6 +1143,7 @@ func init() {
	listCmd.Flags().IntP("priority", "p", 0, "Filter by priority")
	listCmd.Flags().StringP("assignee", "a", "", "Filter by assignee")
	listCmd.Flags().StringP("type", "t", "", "Filter by type")
	listCmd.Flags().StringSliceP("label", "l", []string{}, "Filter by labels (comma-separated)")
	listCmd.Flags().IntP("limit", "n", 0, "Limit results")
	rootCmd.AddCommand(listCmd)
}
@@ -23,6 +23,7 @@ type Issue struct {
	UpdatedAt    time.Time     `json:"updated_at"`
	ClosedAt     *time.Time    `json:"closed_at,omitempty"`
	ExternalRef  *string       `json:"external_ref,omitempty"` // e.g., "gh-9", "jira-ABC"
	Labels       []string      `json:"labels,omitempty"`       // Populated only for export/import
	Dependencies []*Dependency `json:"dependencies,omitempty"` // Populated only for export/import
}