Phase 2: Add client auto-detection in bd commands (bd-112)
- Add daemon client infrastructure to main.go with TryConnect logic - Update PersistentPreRun to detect daemon socket and route through RPC - Add --no-daemon flag to force direct storage mode - Update all commands (create, update, close, show, list, ready) to use daemon when available - Maintain backward compatibility with graceful fallback to direct mode - All commands work identically in both daemon and direct modes Part of bd-110 daemon architecture implementation. Amp-Thread-ID: https://ampcode.com/threads/T-bfe2c083-be7c-4064-8673-fa69c22a730e Co-authored-by: Amp <amp@ampcode.com>
This commit is contained in:
@@ -12,9 +12,10 @@
|
|||||||
{"id":"bd-11","title":"Document or automate JSONL sync workflow for git collaboration","description":"When using beads across multiple machines/environments via git, there's a workflow gap:\n\n1. Machine A: Create issues → stored in .beads/project.db\n2. Machine A: bd export -o .beads/issues.jsonl\n3. Machine A: git add .beads/issues.jsonl \u0026\u0026 git commit \u0026\u0026 git push\n4. Machine B: git pull\n5. Machine B: ??? issues.jsonl exists but project.db is empty/stale\n\nThe missing step is: bd import --db .beads/project.db -i .beads/issues.jsonl\n\nThis needs to be either:\na) Documented clearly in workflow docs\nb) Automated (e.g., git hook, or bd auto-imports if jsonl is newer than db)\nc) Both\n\nReal-world impact: User had Claude Code on GCP VM create vc issues from BOOTSTRAP.md. They were exported to issues.jsonl and committed. But on local machine, vc.db was empty until manual import was run.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-16T21:51:08.743025-07:00","closed_at":"2025-10-14T02:51:52.199766-07:00"}
|
{"id":"bd-11","title":"Document or automate JSONL sync workflow for git collaboration","description":"When using beads across multiple machines/environments via git, there's a workflow gap:\n\n1. Machine A: Create issues → stored in .beads/project.db\n2. Machine A: bd export -o .beads/issues.jsonl\n3. Machine A: git add .beads/issues.jsonl \u0026\u0026 git commit \u0026\u0026 git push\n4. Machine B: git pull\n5. Machine B: ??? issues.jsonl exists but project.db is empty/stale\n\nThe missing step is: bd import --db .beads/project.db -i .beads/issues.jsonl\n\nThis needs to be either:\na) Documented clearly in workflow docs\nb) Automated (e.g., git hook, or bd auto-imports if jsonl is newer than db)\nc) Both\n\nReal-world impact: User had Claude Code on GCP VM create vc issues from BOOTSTRAP.md. They were exported to issues.jsonl and committed. But on local machine, vc.db was empty until manual import was run.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-16T21:51:08.743025-07:00","closed_at":"2025-10-14T02:51:52.199766-07:00"}
|
||||||
{"id":"bd-110","title":"Implement daemon architecture for concurrent access","description":"Multiple AI agents running concurrently cause database corruption, git lock contention, and data loss. Implement a daemon-based architecture where bd daemon owns SQLite (single writer) and all bd commands become RPC clients when daemon is running. Batches git operations to prevent index.lock contention. Maintains backward compatibility with graceful fallback to direct mode. See DAEMON_DESIGN.md for full details.","design":"Architecture: Unix socket RPC with JSON payloads. bd commands auto-detect daemon socket, fall back to direct mode if not present. Daemon serializes all SQLite writes and batches git exports every 5 seconds. Per-repo daemon using .beads/bd.sock location.\n\nImplementation phases:\n1. RPC protocol infrastructure (protocol.go, server.go, client.go)\n2. Client auto-detection and fallback\n3. Daemon SQLite ownership and git batching\n4. Atomic operations and transactions","acceptance_criteria":"- 4 concurrent agents can run without errors\n- No UNIQUE constraint failures on ID generation\n- No git index.lock errors \n- SQLite counter stays in sync with actual issues\n- Graceful fallback when daemon not running\n- All existing tests pass\n- Documentation updated","status":"open","priority":0,"issue_type":"epic","created_at":"2025-10-16T21:54:48.794119-07:00","updated_at":"2025-10-16T21:54:48.794119-07:00","dependencies":[{"issue_id":"bd-110","depends_on_id":"bd-111","type":"parent-child","created_at":"2025-10-16T21:54:56.032869-07:00","created_by":"stevey"}]}
|
{"id":"bd-110","title":"Implement daemon architecture for concurrent access","description":"Multiple AI agents running concurrently cause database corruption, git lock contention, and data loss. Implement a daemon-based architecture where bd daemon owns SQLite (single writer) and all bd commands become RPC clients when daemon is running. Batches git operations to prevent index.lock contention. Maintains backward compatibility with graceful fallback to direct mode. See DAEMON_DESIGN.md for full details.","design":"Architecture: Unix socket RPC with JSON payloads. bd commands auto-detect daemon socket, fall back to direct mode if not present. Daemon serializes all SQLite writes and batches git exports every 5 seconds. Per-repo daemon using .beads/bd.sock location.\n\nImplementation phases:\n1. RPC protocol infrastructure (protocol.go, server.go, client.go)\n2. Client auto-detection and fallback\n3. Daemon SQLite ownership and git batching\n4. Atomic operations and transactions","acceptance_criteria":"- 4 concurrent agents can run without errors\n- No UNIQUE constraint failures on ID generation\n- No git index.lock errors \n- SQLite counter stays in sync with actual issues\n- Graceful fallback when daemon not running\n- All existing tests pass\n- Documentation updated","status":"open","priority":0,"issue_type":"epic","created_at":"2025-10-16T21:54:48.794119-07:00","updated_at":"2025-10-16T21:54:48.794119-07:00","dependencies":[{"issue_id":"bd-110","depends_on_id":"bd-111","type":"parent-child","created_at":"2025-10-16T21:54:56.032869-07:00","created_by":"stevey"}]}
|
||||||
{"id":"bd-111","title":"Phase 1: Implement RPC protocol infrastructure","description":"Create the foundation for daemon-client communication using Unix sockets and JSON.\n\nNew files to create:\n- internal/rpc/protocol.go - Request/response types, operations enum\n- internal/rpc/server.go - Unix socket server that daemon runs\n- internal/rpc/client.go - Client library for bd commands to use\n\nSocket location: .beads/bd.sock (per-repo)\n\nOperations to support initially: create, update, list, show, close, ready, stats","design":"protocol.go defines:\n- Request struct with Operation string and Args json.RawMessage\n- Response struct with Success bool, Data json.RawMessage, Error string\n- Operation constants for all bd commands\n\nserver.go implements:\n- Unix socket listener on .beads/bd.sock\n- Request handler that dispatches to storage layer\n- Graceful shutdown on signals\n\nclient.go implements:\n- TryConnect() to detect running daemon\n- Execute(operation, args) to send RPC request\n- Connection pooling/reuse for performance","acceptance_criteria":"- internal/rpc package compiles without errors\n- Server can accept and respond to simple ping request\n- Client can connect to socket and receive response\n- Unit tests for protocol serialization/deserialization\n- Socket cleanup on server shutdown","status":"closed","priority":0,"issue_type":"task","created_at":"2025-10-16T21:54:48.83081-07:00","updated_at":"2025-10-16T22:02:40.675096-07:00","closed_at":"2025-10-16T22:02:40.675096-07:00"}
|
{"id":"bd-111","title":"Phase 1: Implement RPC protocol infrastructure","description":"Create the foundation for daemon-client communication using Unix sockets and JSON.\n\nNew files to create:\n- internal/rpc/protocol.go - Request/response types, operations enum\n- internal/rpc/server.go - Unix socket server that daemon runs\n- internal/rpc/client.go - Client library for bd commands to use\n\nSocket location: .beads/bd.sock (per-repo)\n\nOperations to support initially: create, update, list, show, close, ready, stats","design":"protocol.go defines:\n- Request struct with Operation string and Args json.RawMessage\n- Response struct with Success bool, Data json.RawMessage, Error string\n- Operation constants for all bd commands\n\nserver.go implements:\n- Unix socket listener on .beads/bd.sock\n- Request handler that dispatches to storage layer\n- Graceful shutdown on signals\n\nclient.go implements:\n- TryConnect() to detect running daemon\n- Execute(operation, args) to send RPC request\n- Connection pooling/reuse for performance","acceptance_criteria":"- internal/rpc package compiles without errors\n- Server can accept and respond to simple ping request\n- Client can connect to socket and receive response\n- Unit tests for protocol serialization/deserialization\n- Socket cleanup on server shutdown","status":"closed","priority":0,"issue_type":"task","created_at":"2025-10-16T21:54:48.83081-07:00","updated_at":"2025-10-16T22:02:40.675096-07:00","closed_at":"2025-10-16T22:02:40.675096-07:00"}
|
||||||
{"id":"bd-112","title":"Phase 2: Add client auto-detection in bd commands","description":"Modify all bd commands to detect if daemon is running and route through RPC client if available, otherwise fall back to direct storage access.\n\nChanges needed:\n- Update cmd/bd/main.go to check for daemon socket on startup\n- Wrap storage calls with TryConnect logic\n- Ensure all commands work identically in both modes\n- Add --no-daemon flag to force direct mode\n\nThis maintains backward compatibility while enabling daemon mode.","status":"open","priority":0,"issue_type":"task","created_at":"2025-10-16T22:47:36.185502-07:00","updated_at":"2025-10-16T22:47:36.185502-07:00","dependencies":[{"issue_id":"bd-112","depends_on_id":"bd-110","type":"parent-child","created_at":"2025-10-16T22:47:36.190931-07:00","created_by":"stevey"}]}
|
{"id":"bd-112","title":"Phase 2: Add client auto-detection in bd commands","description":"Modify all bd commands to detect if daemon is running and route through RPC client if available, otherwise fall back to direct storage access.\n\nChanges needed:\n- Update cmd/bd/main.go to check for daemon socket on startup\n- Wrap storage calls with TryConnect logic\n- Ensure all commands work identically in both modes\n- Add --no-daemon flag to force direct mode\n\nThis maintains backward compatibility while enabling daemon mode.","status":"closed","priority":0,"issue_type":"task","created_at":"2025-10-16T22:47:36.185502-07:00","updated_at":"2025-10-16T23:05:11.299018-07:00","closed_at":"2025-10-16T23:05:11.299018-07:00","dependencies":[{"issue_id":"bd-112","depends_on_id":"bd-110","type":"parent-child","created_at":"2025-10-16T22:47:36.190931-07:00","created_by":"stevey"}]}
|
||||||
{"id":"bd-113","title":"Phase 3: Implement daemon command with SQLite ownership","description":"Create 'bd daemon' command that starts the RPC server and owns the SQLite database.\n\nImplementation:\n- Add cmd/bd/daemon.go with start/stop/status subcommands\n- Daemon holds exclusive SQLite connection\n- Integrates git sync loop (batch exports every 5 seconds)\n- PID file management for daemon lifecycle\n- Logging for daemon operations\n\nSocket location: .beads/bd.sock per repository","status":"open","priority":0,"issue_type":"task","created_at":"2025-10-16T22:47:42.86546-07:00","updated_at":"2025-10-16T22:47:42.86546-07:00","dependencies":[{"issue_id":"bd-113","depends_on_id":"bd-110","type":"parent-child","created_at":"2025-10-16T22:47:42.874284-07:00","created_by":"stevey"}]}
|
{"id":"bd-113","title":"Phase 3: Implement daemon command with SQLite ownership","description":"Create 'bd daemon' command that starts the RPC server and owns the SQLite database.\n\nImplementation:\n- Add cmd/bd/daemon.go with start/stop/status subcommands\n- Daemon holds exclusive SQLite connection\n- Integrates git sync loop (batch exports every 5 seconds)\n- PID file management for daemon lifecycle\n- Logging for daemon operations\n\nSocket location: .beads/bd.sock per repository","status":"open","priority":0,"issue_type":"task","created_at":"2025-10-16T22:47:42.86546-07:00","updated_at":"2025-10-16T22:47:42.86546-07:00","dependencies":[{"issue_id":"bd-113","depends_on_id":"bd-110","type":"parent-child","created_at":"2025-10-16T22:47:42.874284-07:00","created_by":"stevey"}]}
|
||||||
{"id":"bd-114","title":"Phase 4: Add atomic operations and stress testing","description":"Implement atomic multi-operation support and test under concurrent load.\n\nFeatures:\n- Batch/transaction API for multi-step operations\n- Request timeout and cancellation support\n- Connection pooling optimization\n- Stress tests with 4+ concurrent agents\n- Performance benchmarks vs direct mode\n- Documentation updates\n\nValidates all acceptance criteria for bd-110.","status":"open","priority":0,"issue_type":"task","created_at":"2025-10-16T22:47:49.785525-07:00","updated_at":"2025-10-16T22:47:49.785525-07:00","dependencies":[{"issue_id":"bd-114","depends_on_id":"bd-110","type":"parent-child","created_at":"2025-10-16T22:47:49.787472-07:00","created_by":"stevey"}]}
|
{"id":"bd-114","title":"Phase 4: Add atomic operations and stress testing","description":"Implement atomic multi-operation support and test under concurrent load.\n\nFeatures:\n- Batch/transaction API for multi-step operations\n- Request timeout and cancellation support\n- Connection pooling optimization\n- Stress tests with 4+ concurrent agents\n- Performance benchmarks vs direct mode\n- Documentation updates\n\nValidates all acceptance criteria for bd-110.","status":"open","priority":0,"issue_type":"task","created_at":"2025-10-16T22:47:49.785525-07:00","updated_at":"2025-10-16T22:47:49.785525-07:00","dependencies":[{"issue_id":"bd-114","depends_on_id":"bd-110","type":"parent-child","created_at":"2025-10-16T22:47:49.787472-07:00","created_by":"stevey"}]}
|
||||||
|
{"id":"bd-115","title":"Test daemon auto-detection","description":"","status":"closed","priority":3,"issue_type":"task","created_at":"2025-10-16T23:04:51.334824-07:00","updated_at":"2025-10-16T23:04:55.769268-07:00","closed_at":"2025-10-16T23:04:55.769268-07:00"}
|
||||||
{"id":"bd-12","title":"Root issue for dep tree test","description":"","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-16T21:51:08.743864-07:00","closed_at":"2025-10-16T10:07:34.1266-07:00"}
|
{"id":"bd-12","title":"Root issue for dep tree test","description":"","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-16T21:51:08.743864-07:00","closed_at":"2025-10-16T10:07:34.1266-07:00"}
|
||||||
{"id":"bd-13","title":"Dependency A","description":"","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-16T21:51:08.74444-07:00","closed_at":"2025-10-16T10:07:34.126732-07:00"}
|
{"id":"bd-13","title":"Dependency A","description":"","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-16T21:51:08.74444-07:00","closed_at":"2025-10-16T10:07:34.126732-07:00"}
|
||||||
{"id":"bd-14","title":"Dependency B","description":"","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-16T21:51:08.745041-07:00","closed_at":"2025-10-16T10:07:34.126858-07:00"}
|
{"id":"bd-14","title":"Dependency B","description":"","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T20:46:08.971822-07:00","updated_at":"2025-10-16T21:51:08.745041-07:00","closed_at":"2025-10-16T10:07:34.126858-07:00"}
|
||||||
|
|||||||
@@ -3,11 +3,13 @@ package main
|
|||||||
import (
|
import (
|
||||||
"bytes"
|
"bytes"
|
||||||
"context"
|
"context"
|
||||||
|
"encoding/json"
|
||||||
"fmt"
|
"fmt"
|
||||||
"os"
|
"os"
|
||||||
"text/template"
|
"text/template"
|
||||||
|
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
|
"github.com/steveyegge/beads/internal/rpc"
|
||||||
"github.com/steveyegge/beads/internal/storage"
|
"github.com/steveyegge/beads/internal/storage"
|
||||||
"github.com/steveyegge/beads/internal/types"
|
"github.com/steveyegge/beads/internal/types"
|
||||||
)
|
)
|
||||||
@@ -50,6 +52,51 @@ var listCmd = &cobra.Command{
|
|||||||
filter.TitleSearch = titleSearch
|
filter.TitleSearch = titleSearch
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// If daemon is running, use RPC
|
||||||
|
if daemonClient != nil {
|
||||||
|
listArgs := &rpc.ListArgs{
|
||||||
|
Status: status,
|
||||||
|
IssueType: issueType,
|
||||||
|
Assignee: assignee,
|
||||||
|
Limit: limit,
|
||||||
|
}
|
||||||
|
if cmd.Flags().Changed("priority") {
|
||||||
|
priority, _ := cmd.Flags().GetInt("priority")
|
||||||
|
listArgs.Priority = &priority
|
||||||
|
}
|
||||||
|
if len(labels) > 0 {
|
||||||
|
listArgs.Label = labels[0] // TODO: daemon protocol needs to support multiple labels
|
||||||
|
}
|
||||||
|
|
||||||
|
resp, err := daemonClient.List(listArgs)
|
||||||
|
if err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
var issues []*types.Issue
|
||||||
|
if err := json.Unmarshal(resp.Data, &issues); err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
if jsonOutput {
|
||||||
|
outputJSON(issues)
|
||||||
|
} else {
|
||||||
|
fmt.Printf("\nFound %d issues:\n\n", len(issues))
|
||||||
|
for _, issue := range issues {
|
||||||
|
fmt.Printf("%s [P%d] [%s] %s\n", issue.ID, issue.Priority, issue.IssueType, issue.Status)
|
||||||
|
fmt.Printf(" %s\n", issue.Title)
|
||||||
|
if issue.Assignee != "" {
|
||||||
|
fmt.Printf(" Assignee: %s\n", issue.Assignee)
|
||||||
|
}
|
||||||
|
fmt.Println()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Direct mode
|
||||||
ctx := context.Background()
|
ctx := context.Background()
|
||||||
issues, err := store.SearchIssues(ctx, "", filter)
|
issues, err := store.SearchIssues(ctx, "", filter)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
|||||||
293
cmd/bd/main.go
293
cmd/bd/main.go
@@ -18,6 +18,7 @@ import (
|
|||||||
"github.com/fatih/color"
|
"github.com/fatih/color"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
"github.com/steveyegge/beads"
|
"github.com/steveyegge/beads"
|
||||||
|
"github.com/steveyegge/beads/internal/rpc"
|
||||||
"github.com/steveyegge/beads/internal/storage"
|
"github.com/steveyegge/beads/internal/storage"
|
||||||
"github.com/steveyegge/beads/internal/storage/sqlite"
|
"github.com/steveyegge/beads/internal/storage/sqlite"
|
||||||
"github.com/steveyegge/beads/internal/types"
|
"github.com/steveyegge/beads/internal/types"
|
||||||
@@ -28,6 +29,10 @@ var (
|
|||||||
actor string
|
actor string
|
||||||
store storage.Storage
|
store storage.Storage
|
||||||
jsonOutput bool
|
jsonOutput bool
|
||||||
|
|
||||||
|
// Daemon mode
|
||||||
|
daemonClient *rpc.Client // RPC client when daemon is running
|
||||||
|
noDaemon bool // Force direct mode (no daemon)
|
||||||
|
|
||||||
// Auto-flush state
|
// Auto-flush state
|
||||||
autoFlushEnabled = true // Can be disabled with --no-auto-flush
|
autoFlushEnabled = true // Can be disabled with --no-auto-flush
|
||||||
@@ -50,8 +55,8 @@ var rootCmd = &cobra.Command{
|
|||||||
Short: "bd - Dependency-aware issue tracker",
|
Short: "bd - Dependency-aware issue tracker",
|
||||||
Long: `Issues chained together like beads. A lightweight issue tracker with first-class dependency support.`,
|
Long: `Issues chained together like beads. A lightweight issue tracker with first-class dependency support.`,
|
||||||
PersistentPreRun: func(cmd *cobra.Command, args []string) {
|
PersistentPreRun: func(cmd *cobra.Command, args []string) {
|
||||||
// Skip database initialization for init command
|
// Skip database initialization for init and daemon commands
|
||||||
if cmd.Name() == "init" {
|
if cmd.Name() == "init" || cmd.Name() == "daemon" {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -61,7 +66,7 @@ var rootCmd = &cobra.Command{
|
|||||||
// Set auto-import based on flag (invert no-auto-import)
|
// Set auto-import based on flag (invert no-auto-import)
|
||||||
autoImportEnabled = !noAutoImport
|
autoImportEnabled = !noAutoImport
|
||||||
|
|
||||||
// Initialize storage
|
// Initialize database path
|
||||||
if dbPath == "" {
|
if dbPath == "" {
|
||||||
// Use public API to find database (same logic as extensions)
|
// Use public API to find database (same logic as extensions)
|
||||||
if foundDB := beads.FindDatabasePath(); foundDB != "" {
|
if foundDB := beads.FindDatabasePath(); foundDB != "" {
|
||||||
@@ -73,18 +78,6 @@ var rootCmd = &cobra.Command{
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
var err error
|
|
||||||
store, err = sqlite.New(dbPath)
|
|
||||||
if err != nil {
|
|
||||||
fmt.Fprintf(os.Stderr, "Error: failed to open database: %v\n", err)
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Mark store as active for flush goroutine safety
|
|
||||||
storeMutex.Lock()
|
|
||||||
storeActive = true
|
|
||||||
storeMutex.Unlock()
|
|
||||||
|
|
||||||
// Set actor from flag, env, or default
|
// Set actor from flag, env, or default
|
||||||
// Priority: --actor flag > BD_ACTOR env > USER env > "unknown"
|
// Priority: --actor flag > BD_ACTOR env > USER env > "unknown"
|
||||||
if actor == "" {
|
if actor == "" {
|
||||||
@@ -97,6 +90,35 @@ var rootCmd = &cobra.Command{
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Try to connect to daemon first (unless --no-daemon flag is set)
|
||||||
|
if !noDaemon {
|
||||||
|
socketPath := getSocketPath()
|
||||||
|
client, err := rpc.TryConnect(socketPath)
|
||||||
|
if err == nil && client != nil {
|
||||||
|
daemonClient = client
|
||||||
|
if os.Getenv("BD_DEBUG") != "" {
|
||||||
|
fmt.Fprintf(os.Stderr, "Debug: connected to daemon at %s\n", socketPath)
|
||||||
|
}
|
||||||
|
return // Skip direct storage initialization
|
||||||
|
}
|
||||||
|
if os.Getenv("BD_DEBUG") != "" {
|
||||||
|
fmt.Fprintf(os.Stderr, "Debug: daemon not available, using direct mode\n")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fall back to direct storage access
|
||||||
|
var err error
|
||||||
|
store, err = sqlite.New(dbPath)
|
||||||
|
if err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error: failed to open database: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Mark store as active for flush goroutine safety
|
||||||
|
storeMutex.Lock()
|
||||||
|
storeActive = true
|
||||||
|
storeMutex.Unlock()
|
||||||
|
|
||||||
// Check for version mismatch (warn if binary is older than DB)
|
// Check for version mismatch (warn if binary is older than DB)
|
||||||
checkVersionMismatch()
|
checkVersionMismatch()
|
||||||
|
|
||||||
@@ -107,6 +129,13 @@ var rootCmd = &cobra.Command{
|
|||||||
}
|
}
|
||||||
},
|
},
|
||||||
PersistentPostRun: func(cmd *cobra.Command, args []string) {
|
PersistentPostRun: func(cmd *cobra.Command, args []string) {
|
||||||
|
// Close daemon client if we're using it
|
||||||
|
if daemonClient != nil {
|
||||||
|
_ = daemonClient.Close()
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Otherwise, handle direct mode cleanup
|
||||||
// Flush any pending changes before closing
|
// Flush any pending changes before closing
|
||||||
flushMutex.Lock()
|
flushMutex.Lock()
|
||||||
needsFlush := isDirty && autoFlushEnabled
|
needsFlush := isDirty && autoFlushEnabled
|
||||||
@@ -136,6 +165,12 @@ var rootCmd = &cobra.Command{
|
|||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// getSocketPath returns the daemon socket path based on the database location
|
||||||
|
func getSocketPath() string {
|
||||||
|
// Socket lives in same directory as database: .beads/bd.sock
|
||||||
|
return filepath.Join(filepath.Dir(dbPath), "bd.sock")
|
||||||
|
}
|
||||||
|
|
||||||
// outputJSON outputs data as pretty-printed JSON
|
// outputJSON outputs data as pretty-printed JSON
|
||||||
func outputJSON(v interface{}) {
|
func outputJSON(v interface{}) {
|
||||||
encoder := json.NewEncoder(os.Stdout)
|
encoder := json.NewEncoder(os.Stdout)
|
||||||
@@ -766,6 +801,7 @@ func init() {
|
|||||||
rootCmd.PersistentFlags().StringVar(&dbPath, "db", "", "Database path (default: auto-discover .beads/*.db or ~/.beads/default.db)")
|
rootCmd.PersistentFlags().StringVar(&dbPath, "db", "", "Database path (default: auto-discover .beads/*.db or ~/.beads/default.db)")
|
||||||
rootCmd.PersistentFlags().StringVar(&actor, "actor", "", "Actor name for audit trail (default: $BD_ACTOR or $USER)")
|
rootCmd.PersistentFlags().StringVar(&actor, "actor", "", "Actor name for audit trail (default: $BD_ACTOR or $USER)")
|
||||||
rootCmd.PersistentFlags().BoolVar(&jsonOutput, "json", false, "Output in JSON format")
|
rootCmd.PersistentFlags().BoolVar(&jsonOutput, "json", false, "Output in JSON format")
|
||||||
|
rootCmd.PersistentFlags().BoolVar(&noDaemon, "no-daemon", false, "Force direct storage mode, bypass daemon if running")
|
||||||
rootCmd.PersistentFlags().BoolVar(&noAutoFlush, "no-auto-flush", false, "Disable automatic JSONL sync after CRUD operations")
|
rootCmd.PersistentFlags().BoolVar(&noAutoFlush, "no-auto-flush", false, "Disable automatic JSONL sync after CRUD operations")
|
||||||
rootCmd.PersistentFlags().BoolVar(&noAutoImport, "no-auto-import", false, "Disable automatic JSONL import when newer than DB")
|
rootCmd.PersistentFlags().BoolVar(&noAutoImport, "no-auto-import", false, "Disable automatic JSONL import when newer than DB")
|
||||||
}
|
}
|
||||||
@@ -936,6 +972,45 @@ var createCmd = &cobra.Command{
|
|||||||
externalRefPtr = &externalRef
|
externalRefPtr = &externalRef
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// If daemon is running, use RPC
|
||||||
|
if daemonClient != nil {
|
||||||
|
createArgs := &rpc.CreateArgs{
|
||||||
|
ID: explicitID,
|
||||||
|
Title: title,
|
||||||
|
Description: description,
|
||||||
|
IssueType: issueType,
|
||||||
|
Priority: priority,
|
||||||
|
Design: design,
|
||||||
|
AcceptanceCriteria: acceptance,
|
||||||
|
Assignee: assignee,
|
||||||
|
Labels: labels,
|
||||||
|
Dependencies: deps,
|
||||||
|
}
|
||||||
|
|
||||||
|
resp, err := daemonClient.Create(createArgs)
|
||||||
|
if err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
if jsonOutput {
|
||||||
|
fmt.Println(string(resp.Data))
|
||||||
|
} else {
|
||||||
|
var issue types.Issue
|
||||||
|
if err := json.Unmarshal(resp.Data, &issue); err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
green := color.New(color.FgGreen).SprintFunc()
|
||||||
|
fmt.Printf("%s Created issue: %s\n", green("✓"), issue.ID)
|
||||||
|
fmt.Printf(" Title: %s\n", issue.Title)
|
||||||
|
fmt.Printf(" Priority: P%d\n", issue.Priority)
|
||||||
|
fmt.Printf(" Status: %s\n", issue.Status)
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Direct mode
|
||||||
issue := &types.Issue{
|
issue := &types.Issue{
|
||||||
ID: explicitID, // Set explicit ID if provided (empty string if not)
|
ID: explicitID, // Set explicit ID if provided (empty string if not)
|
||||||
Title: title,
|
Title: title,
|
||||||
@@ -1040,6 +1115,119 @@ var showCmd = &cobra.Command{
|
|||||||
Short: "Show issue details",
|
Short: "Show issue details",
|
||||||
Args: cobra.ExactArgs(1),
|
Args: cobra.ExactArgs(1),
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
|
// If daemon is running, use RPC
|
||||||
|
if daemonClient != nil {
|
||||||
|
showArgs := &rpc.ShowArgs{ID: args[0]}
|
||||||
|
resp, err := daemonClient.Show(showArgs)
|
||||||
|
if err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
if jsonOutput {
|
||||||
|
fmt.Println(string(resp.Data))
|
||||||
|
} else {
|
||||||
|
// Parse response and use existing formatting code
|
||||||
|
type IssueDetails struct {
|
||||||
|
*types.Issue
|
||||||
|
Labels []string `json:"labels,omitempty"`
|
||||||
|
Dependencies []*types.Issue `json:"dependencies,omitempty"`
|
||||||
|
Dependents []*types.Issue `json:"dependents,omitempty"`
|
||||||
|
}
|
||||||
|
var details IssueDetails
|
||||||
|
if err := json.Unmarshal(resp.Data, &details); err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
issue := details.Issue
|
||||||
|
|
||||||
|
cyan := color.New(color.FgCyan).SprintFunc()
|
||||||
|
|
||||||
|
// Format output (same as direct mode below)
|
||||||
|
tierEmoji := ""
|
||||||
|
statusSuffix := ""
|
||||||
|
if issue.CompactionLevel == 1 {
|
||||||
|
tierEmoji = " 🗜️"
|
||||||
|
} else if issue.CompactionLevel == 2 {
|
||||||
|
tierEmoji = " 📦"
|
||||||
|
}
|
||||||
|
if issue.CompactionLevel > 0 {
|
||||||
|
statusSuffix = fmt.Sprintf(" (compacted L%d)", issue.CompactionLevel)
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Printf("\n%s: %s%s\n", cyan(issue.ID), issue.Title, tierEmoji)
|
||||||
|
fmt.Printf("Status: %s%s\n", issue.Status, statusSuffix)
|
||||||
|
fmt.Printf("Priority: P%d\n", issue.Priority)
|
||||||
|
fmt.Printf("Type: %s\n", issue.IssueType)
|
||||||
|
if issue.Assignee != "" {
|
||||||
|
fmt.Printf("Assignee: %s\n", issue.Assignee)
|
||||||
|
}
|
||||||
|
if issue.EstimatedMinutes != nil {
|
||||||
|
fmt.Printf("Estimated: %d minutes\n", *issue.EstimatedMinutes)
|
||||||
|
}
|
||||||
|
fmt.Printf("Created: %s\n", issue.CreatedAt.Format("2006-01-02 15:04"))
|
||||||
|
fmt.Printf("Updated: %s\n", issue.UpdatedAt.Format("2006-01-02 15:04"))
|
||||||
|
|
||||||
|
// Show compaction status
|
||||||
|
if issue.CompactionLevel > 0 {
|
||||||
|
fmt.Println()
|
||||||
|
if issue.OriginalSize > 0 {
|
||||||
|
currentSize := len(issue.Description) + len(issue.Design) + len(issue.Notes) + len(issue.AcceptanceCriteria)
|
||||||
|
saved := issue.OriginalSize - currentSize
|
||||||
|
if saved > 0 {
|
||||||
|
reduction := float64(saved) / float64(issue.OriginalSize) * 100
|
||||||
|
fmt.Printf("📊 Original: %d bytes | Compressed: %d bytes (%.0f%% reduction)\n",
|
||||||
|
issue.OriginalSize, currentSize, reduction)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
tierEmoji2 := "🗜️"
|
||||||
|
if issue.CompactionLevel == 2 {
|
||||||
|
tierEmoji2 = "📦"
|
||||||
|
}
|
||||||
|
compactedDate := ""
|
||||||
|
if issue.CompactedAt != nil {
|
||||||
|
compactedDate = issue.CompactedAt.Format("2006-01-02")
|
||||||
|
}
|
||||||
|
fmt.Printf("%s Compacted: %s (Tier %d)\n", tierEmoji2, compactedDate, issue.CompactionLevel)
|
||||||
|
}
|
||||||
|
|
||||||
|
if issue.Description != "" {
|
||||||
|
fmt.Printf("\nDescription:\n%s\n", issue.Description)
|
||||||
|
}
|
||||||
|
if issue.Design != "" {
|
||||||
|
fmt.Printf("\nDesign:\n%s\n", issue.Design)
|
||||||
|
}
|
||||||
|
if issue.Notes != "" {
|
||||||
|
fmt.Printf("\nNotes:\n%s\n", issue.Notes)
|
||||||
|
}
|
||||||
|
if issue.AcceptanceCriteria != "" {
|
||||||
|
fmt.Printf("\nAcceptance Criteria:\n%s\n", issue.AcceptanceCriteria)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(details.Labels) > 0 {
|
||||||
|
fmt.Printf("\nLabels: %v\n", details.Labels)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(details.Dependencies) > 0 {
|
||||||
|
fmt.Printf("\nDepends on (%d):\n", len(details.Dependencies))
|
||||||
|
for _, dep := range details.Dependencies {
|
||||||
|
fmt.Printf(" → %s: %s [P%d]\n", dep.ID, dep.Title, dep.Priority)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(details.Dependents) > 0 {
|
||||||
|
fmt.Printf("\nBlocks (%d):\n", len(details.Dependents))
|
||||||
|
for _, dep := range details.Dependents {
|
||||||
|
fmt.Printf(" ← %s: %s [P%d]\n", dep.ID, dep.Title, dep.Priority)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Println()
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Direct mode
|
||||||
ctx := context.Background()
|
ctx := context.Background()
|
||||||
issue, err := store.GetIssue(ctx, args[0])
|
issue, err := store.GetIssue(ctx, args[0])
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -1209,6 +1397,49 @@ var updateCmd = &cobra.Command{
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// If daemon is running, use RPC
|
||||||
|
if daemonClient != nil {
|
||||||
|
updateArgs := &rpc.UpdateArgs{ID: args[0]}
|
||||||
|
|
||||||
|
// Map updates to RPC args
|
||||||
|
if status, ok := updates["status"].(string); ok {
|
||||||
|
updateArgs.Status = &status
|
||||||
|
}
|
||||||
|
if priority, ok := updates["priority"].(int); ok {
|
||||||
|
updateArgs.Priority = &priority
|
||||||
|
}
|
||||||
|
if title, ok := updates["title"].(string); ok {
|
||||||
|
updateArgs.Title = &title
|
||||||
|
}
|
||||||
|
if assignee, ok := updates["assignee"].(string); ok {
|
||||||
|
updateArgs.Assignee = &assignee
|
||||||
|
}
|
||||||
|
if design, ok := updates["design"].(string); ok {
|
||||||
|
updateArgs.Design = &design
|
||||||
|
}
|
||||||
|
if notes, ok := updates["notes"].(string); ok {
|
||||||
|
updateArgs.Notes = ¬es
|
||||||
|
}
|
||||||
|
if acceptanceCriteria, ok := updates["acceptance_criteria"].(string); ok {
|
||||||
|
updateArgs.AcceptanceCriteria = &acceptanceCriteria
|
||||||
|
}
|
||||||
|
|
||||||
|
resp, err := daemonClient.Update(updateArgs)
|
||||||
|
if err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
if jsonOutput {
|
||||||
|
fmt.Println(string(resp.Data))
|
||||||
|
} else {
|
||||||
|
green := color.New(color.FgGreen).SprintFunc()
|
||||||
|
fmt.Printf("%s Updated issue: %s\n", green("✓"), args[0])
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Direct mode
|
||||||
ctx := context.Background()
|
ctx := context.Background()
|
||||||
if err := store.UpdateIssue(ctx, args[0], updates, actor); err != nil {
|
if err := store.UpdateIssue(ctx, args[0], updates, actor); err != nil {
|
||||||
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
||||||
@@ -1251,6 +1482,38 @@ var closeCmd = &cobra.Command{
|
|||||||
reason = "Closed"
|
reason = "Closed"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// If daemon is running, use RPC
|
||||||
|
if daemonClient != nil {
|
||||||
|
closedIssues := []*types.Issue{}
|
||||||
|
for _, id := range args {
|
||||||
|
closeArgs := &rpc.CloseArgs{
|
||||||
|
ID: id,
|
||||||
|
Reason: reason,
|
||||||
|
}
|
||||||
|
resp, err := daemonClient.CloseIssue(closeArgs)
|
||||||
|
if err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error closing %s: %v\n", id, err)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
if jsonOutput {
|
||||||
|
var issue types.Issue
|
||||||
|
if err := json.Unmarshal(resp.Data, &issue); err == nil {
|
||||||
|
closedIssues = append(closedIssues, &issue)
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
green := color.New(color.FgGreen).SprintFunc()
|
||||||
|
fmt.Printf("%s Closed %s: %s\n", green("✓"), id, reason)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if jsonOutput && len(closedIssues) > 0 {
|
||||||
|
outputJSON(closedIssues)
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Direct mode
|
||||||
ctx := context.Background()
|
ctx := context.Background()
|
||||||
closedIssues := []*types.Issue{}
|
closedIssues := []*types.Issue{}
|
||||||
for _, id := range args {
|
for _, id := range args {
|
||||||
|
|||||||
@@ -2,11 +2,13 @@ package main
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
|
"encoding/json"
|
||||||
"fmt"
|
"fmt"
|
||||||
"os"
|
"os"
|
||||||
|
|
||||||
"github.com/fatih/color"
|
"github.com/fatih/color"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
|
"github.com/steveyegge/beads/internal/rpc"
|
||||||
"github.com/steveyegge/beads/internal/types"
|
"github.com/steveyegge/beads/internal/types"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -30,6 +32,61 @@ var readyCmd = &cobra.Command{
|
|||||||
filter.Assignee = &assignee
|
filter.Assignee = &assignee
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// If daemon is running, use RPC
|
||||||
|
if daemonClient != nil {
|
||||||
|
readyArgs := &rpc.ReadyArgs{
|
||||||
|
Assignee: assignee,
|
||||||
|
Limit: limit,
|
||||||
|
}
|
||||||
|
if cmd.Flags().Changed("priority") {
|
||||||
|
priority, _ := cmd.Flags().GetInt("priority")
|
||||||
|
readyArgs.Priority = &priority
|
||||||
|
}
|
||||||
|
|
||||||
|
resp, err := daemonClient.Ready(readyArgs)
|
||||||
|
if err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
var issues []*types.Issue
|
||||||
|
if err := json.Unmarshal(resp.Data, &issues); err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "Error parsing response: %v\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
if jsonOutput {
|
||||||
|
if issues == nil {
|
||||||
|
issues = []*types.Issue{}
|
||||||
|
}
|
||||||
|
outputJSON(issues)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(issues) == 0 {
|
||||||
|
yellow := color.New(color.FgYellow).SprintFunc()
|
||||||
|
fmt.Printf("\n%s No ready work found (all issues have blocking dependencies)\n\n",
|
||||||
|
yellow("✨"))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
cyan := color.New(color.FgCyan).SprintFunc()
|
||||||
|
fmt.Printf("\n%s Ready work (%d issues with no blockers):\n\n", cyan("📋"), len(issues))
|
||||||
|
|
||||||
|
for i, issue := range issues {
|
||||||
|
fmt.Printf("%d. [P%d] %s: %s\n", i+1, issue.Priority, issue.ID, issue.Title)
|
||||||
|
if issue.EstimatedMinutes != nil {
|
||||||
|
fmt.Printf(" Estimate: %d min\n", *issue.EstimatedMinutes)
|
||||||
|
}
|
||||||
|
if issue.Assignee != "" {
|
||||||
|
fmt.Printf(" Assignee: %s\n", issue.Assignee)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
fmt.Println()
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Direct mode
|
||||||
ctx := context.Background()
|
ctx := context.Background()
|
||||||
issues, err := store.GetReadyWork(ctx, filter)
|
issues, err := store.GetReadyWork(ctx, filter)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
|||||||
Reference in New Issue
Block a user