Add RPC infrastructure and update database

- RPC Phase 1: Protocol, server, client implementation
- Updated renumber.go with proper text reference updates (3-phase approach)
- Clean database exported: 344 issues (bd-1 to bd-344)
- Added DAEMON_DESIGN.md documentation
- Updated go.mod/go.sum for RPC dependencies

Amp-Thread-ID: https://ampcode.com/threads/T-456af77c-8b7f-4004-9027-c37b95e10ea5
Co-authored-by: Amp <amp@ampcode.com>
1018 .beads/issues.jsonl
File diff suppressed because one or more lines are too long

202 DAEMON_DESIGN.md (new file)
@@ -0,0 +1,202 @@
# BD Daemon Architecture for Concurrent Access

## Problem Statement

Multiple AI agents running concurrently (via beads-mcp) cause:

- **SQLite write corruption**: Counter stuck, UNIQUE constraint failures
- **Git index.lock contention**: All agents auto-export → all try to commit simultaneously
- **Data loss risk**: Concurrent SQLite writers without coordination
- **Poor performance**: Redundant exports, 4x git operations for the same changes

## Current Architecture (Broken)

```
Agent 1 → beads-mcp 1 → bd CLI → SQLite DB (direct write)
Agent 2 → beads-mcp 2 → bd CLI → SQLite DB (direct write) ← RACE CONDITIONS
Agent 3 → beads-mcp 3 → bd CLI → SQLite DB (direct write)
Agent 4 → beads-mcp 4 → bd CLI → SQLite DB (direct write)
                                     ↓
                       4x concurrent git export/commit
```

## Proposed Architecture (Daemon-Mediated)

```
Agent 1 → beads-mcp 1 → bd client ──┐
Agent 2 → beads-mcp 2 → bd client ──┼──> bd daemon → SQLite DB
Agent 3 → beads-mcp 3 → bd client ──┤    (single writer)  ↓
Agent 4 → beads-mcp 4 → bd client ──┘                git export
                                                     (batched,
                                                      serialized)
```

### Key Changes

1. **bd daemon becomes mandatory** for multi-agent scenarios
2. **All bd commands become RPC clients** when the daemon is running
3. **Daemon owns SQLite** - single writer, no races
4. **Daemon batches git operations** - one export cycle per interval
5. **Unix socket IPC** - simple, fast, local-only

## Implementation Plan

### Phase 1: RPC Infrastructure

**New files:**
- `internal/rpc/protocol.go` - Request/response types
- `internal/rpc/server.go` - Unix socket server in daemon
- `internal/rpc/client.go` - Client detection & dispatch

**Operations to support:**
```go
type Request struct {
    Operation string          // "create", "update", "list", "close", etc.
    Args      json.RawMessage // Operation-specific args
}

type Response struct {
    Success bool
    Data    json.RawMessage // Operation result
    Error   string
}
```

**Socket location:** `~/.beads/bd.sock` or `.beads/bd.sock` (per-repo)
### Phase 2: Client Auto-Detection

**bd command behavior:**
1. Check if the daemon socket exists & is responsive
2. If yes: Send an RPC request, print the response
3. If no: Run the command directly (backward compatible)

**Example:**
```go
func main() {
    if client := rpc.TryConnect(); client != nil {
        // Daemon is running - use RPC
        resp := client.Execute(cmd, args)
        fmt.Println(resp)
        return
    }

    // No daemon - run directly (current behavior)
    executeLocally(cmd, args)
}
```

### Phase 3: Daemon SQLite Ownership

**Daemon startup:**
1. Open SQLite connection (exclusive)
2. Start RPC server on Unix socket
3. Start git sync loop (existing functionality)
4. Process RPC requests serially

**Git operations:**
- Batch exports every 5 seconds (not per-operation)
- Single commit with all changes
- Prevent concurrent git operations entirely
### Phase 4: Atomic Operations

**ID generation:**
```go
// In daemon process only
func (d *Daemon) generateID(prefix string) (string, error) {
    d.mu.Lock()
    defer d.mu.Unlock()

    // No races - daemon is single writer
    return d.storage.NextID(prefix)
}
```

**Transaction support:**
```go
// RPC can request multi-operation transactions
type BatchRequest struct {
    Operations []Request
    Atomic     bool // All-or-nothing
}
```

## Migration Strategy

### Stage 1: Opt-In (v0.10.0)
- Daemon RPC code implemented
- bd commands detect the daemon, fall back to direct mode
- Users can run `bd daemon start` for multi-agent scenarios
- **No breaking changes** - direct mode still works

### Stage 2: Recommended (v0.11.0)
- Document that the multi-agent workflow requires the daemon
- MCP server README says "start daemon for concurrent agents"
- Detection warning: "Multiple bd processes detected, consider using daemon"

### Stage 3: Required for Multi-Agent (v1.0.0)
- bd detects concurrent access patterns
- Refuses to run without the daemon if lock contention is detected
- Error: "Concurrent access detected. Start daemon: `bd daemon start`"

## Benefits

✅ **No SQLite corruption** - single writer
✅ **No git lock contention** - batched, serialized operations
✅ **Atomic ID generation** - no counter corruption
✅ **Better performance** - fewer redundant exports
✅ **Backward compatible** - graceful fallback to direct mode
✅ **Simple protocol** - Unix sockets, JSON payloads

## Trade-offs

⚠️ **Daemon must be running** for multi-agent workflows
⚠️ **One more process** to manage (`bd daemon start/stop`)
⚠️ **Complexity** - the RPC layer adds code & maintenance
⚠️ **Single point of failure** - if the daemon crashes, all agents are blocked

## Open Questions

1. **Per-repo or global daemon?**
   - Per-repo: `.beads/bd.sock` (supports multiple repos)
   - Global: `~/.beads/bd.sock` (simpler, but only one repo at a time)
   - **Recommendation:** Per-repo; use the `--db` path to determine the socket location

2. **Daemon crash recovery?**
   - Should the client auto-start the daemon if the socket is missing?
   - Or require a manual `bd daemon start`?
   - **Recommendation:** Auto-start with exponential backoff

3. **Concurrent read optimization?**
   - Reads could bypass the daemon (SQLite supports concurrent readers)
   - But this is complex: it requires detecting read-only vs read-write commands
   - **Recommendation:** Start simple; route all operations through the daemon

4. **Transaction API for clients?**
   - MCP tools often perform multi-step operations
   - These would benefit from `BEGIN/COMMIT`-style transactions
   - **Recommendation:** Phase 4 feature, not MVP
## Success Metrics

- ✅ 4 concurrent agents can run without errors
- ✅ No UNIQUE constraint failures on ID generation
- ✅ No git index.lock errors
- ✅ SQLite counter stays in sync with the actual issues
- ✅ Graceful fallback when the daemon is not running

## Related Issues

- bd-668: Git index.lock contention (root cause)
- bd-670: ID generation retry on UNIQUE constraint
- bd-654: Concurrent tmp file collisions (already fixed)
- bd-477: Phase 1 daemon command (git sync only - now expanded)
- bd-279: Tests for concurrent scenarios
- bd-271: Epic for multi-device support

## Next Steps

1. **Ultrathink**: Validate this design with the user
2. **File epic**: Create bd-??? for the daemon RPC architecture
3. **Break down work**: Phase 1 subtasks (protocol, server, client)
4. **Start implementation**: Begin with protocol.go
@@ -161,40 +161,19 @@ func renumberIssuesInDB(ctx context.Context, prefix string, idMapping map[string
        tempID := fmt.Sprintf("%s-%s", tempPrefix, strings.TrimPrefix(oldID, prefix+"-"))
        tempMapping[oldID] = tempID

        // Rename to temp ID
        // Rename to temp ID (don't update text yet)
        issue.ID = tempID
        if err := store.UpdateIssueID(ctx, oldID, tempID, issue, actor); err != nil {
            return fmt.Errorf("failed to rename %s to temp ID: %w", oldID, err)
        }
    }

    // Step 2: Now rename from temp IDs to final IDs
    // Build regex to match any temp issue ID
    tempIDs := make([]string, 0, len(tempMapping))
    for _, tempID := range tempMapping {
        tempIDs = append(tempIDs, regexp.QuoteMeta(tempID))
    }
    tempPattern := regexp.MustCompile(`\b(` + strings.Join(tempIDs, "|") + `)\b`)

    // Build reverse mapping: tempID -> finalID
    tempToFinal := make(map[string]string)
    for oldID, tempID := range tempMapping {
        finalID := idMapping[oldID]
        tempToFinal[tempID] = finalID
    }

    replaceFunc := func(match string) string {
        if finalID, ok := tempToFinal[match]; ok {
            return finalID
        }
        return match
    }

    // Update each issue from temp to final
    // Step 2: Rename from temp IDs to final IDs (still don't update text)
    for _, issue := range issues {
        tempID := issue.ID // Currently has temp ID
        oldOriginalID := ""

        // Find original ID
        var oldOriginalID string
        for origID, tID := range tempMapping {
            if tID == tempID {
                oldOriginalID = origID
@@ -202,28 +181,82 @@ func renumberIssuesInDB(ctx context.Context, prefix string, idMapping map[string
            }
        }
        finalID := idMapping[oldOriginalID]

        // Update the issue's own ID

        // Just update the ID, not text yet
        issue.ID = finalID

        // Update text references in all fields
        issue.Title = tempPattern.ReplaceAllStringFunc(issue.Title, replaceFunc)
        issue.Description = tempPattern.ReplaceAllStringFunc(issue.Description, replaceFunc)
        if issue.Design != "" {
            issue.Design = tempPattern.ReplaceAllStringFunc(issue.Design, replaceFunc)
        }
        if issue.AcceptanceCriteria != "" {
            issue.AcceptanceCriteria = tempPattern.ReplaceAllStringFunc(issue.AcceptanceCriteria, replaceFunc)
        }
        if issue.Notes != "" {
            issue.Notes = tempPattern.ReplaceAllStringFunc(issue.Notes, replaceFunc)
        }

        // Update the issue in the database
        if err := store.UpdateIssueID(ctx, tempID, finalID, issue, actor); err != nil {
            return fmt.Errorf("failed to update issue %s: %w", tempID, err)
        }
    }

    // Step 3: Now update all text references using the original old->new mapping
    // Build regex to match any OLD issue ID (before renumbering)
    oldIDs := make([]string, 0, len(idMapping))
    for oldID := range idMapping {
        oldIDs = append(oldIDs, regexp.QuoteMeta(oldID))
    }
    oldPattern := regexp.MustCompile(`\b(` + strings.Join(oldIDs, "|") + `)\b`)

    replaceFunc := func(match string) string {
        if newID, ok := idMapping[match]; ok {
            return newID
        }
        return match
    }

    // Update text references in all issues
    for _, issue := range issues {
        changed := false

        newTitle := oldPattern.ReplaceAllStringFunc(issue.Title, replaceFunc)
        if newTitle != issue.Title {
            issue.Title = newTitle
            changed = true
        }

        newDesc := oldPattern.ReplaceAllStringFunc(issue.Description, replaceFunc)
        if newDesc != issue.Description {
            issue.Description = newDesc
            changed = true
        }

        if issue.Design != "" {
            newDesign := oldPattern.ReplaceAllStringFunc(issue.Design, replaceFunc)
            if newDesign != issue.Design {
                issue.Design = newDesign
                changed = true
            }
        }

        if issue.AcceptanceCriteria != "" {
            newAC := oldPattern.ReplaceAllStringFunc(issue.AcceptanceCriteria, replaceFunc)
            if newAC != issue.AcceptanceCriteria {
                issue.AcceptanceCriteria = newAC
                changed = true
            }
        }

        if issue.Notes != "" {
            newNotes := oldPattern.ReplaceAllStringFunc(issue.Notes, replaceFunc)
            if newNotes != issue.Notes {
                issue.Notes = newNotes
                changed = true
            }
        }

        // Only update if text changed
        if changed {
            if err := store.UpdateIssue(ctx, issue.ID, map[string]interface{}{
                "title":               issue.Title,
                "description":         issue.Description,
                "design":              issue.Design,
                "acceptance_criteria": issue.AcceptanceCriteria,
                "notes":               issue.Notes,
            }, actor); err != nil {
                return fmt.Errorf("failed to update text references in %s: %w", issue.ID, err)
            }
        }
    }

    // Update all dependency links
    if err := renumberDependencies(ctx, idMapping); err != nil {
4 go.mod
@@ -3,13 +3,14 @@ module github.com/steveyegge/beads

go 1.23.0

require (
    github.com/anthropics/anthropic-sdk-go v1.14.0
    github.com/fatih/color v1.18.0
    github.com/spf13/cobra v1.10.1
    modernc.org/sqlite v1.38.2
    rsc.io/script v0.0.2
)

require (
    github.com/anthropics/anthropic-sdk-go v1.14.0 // indirect
    github.com/dustin/go-humanize v1.0.1 // indirect
    github.com/google/uuid v1.6.0 // indirect
    github.com/inconshreveable/mousetrap v1.1.0 // indirect
@@ -28,5 +29,4 @@ require (
    modernc.org/libc v1.66.3 // indirect
    modernc.org/mathutil v1.7.1 // indirect
    modernc.org/memory v1.11.0 // indirect
    rsc.io/script v0.0.2 // indirect
)
11 go.sum
@@ -1,6 +1,8 @@
github.com/anthropics/anthropic-sdk-go v1.14.0 h1:EzNQvnZlaDHe2UPkoUySDz3ixRgNbwKdH8KtFpv7pi4=
github.com/anthropics/anthropic-sdk-go v1.14.0/go.mod h1:WTz31rIUHUHqai2UslPpw5CwXrQP3geYBioRV4WOLvE=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
@@ -18,6 +20,8 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
@@ -26,6 +30,8 @@ github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
@@ -40,8 +46,8 @@ golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b h1:M2rDM6z3Fhozi9O7NWsxAkg/y
golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
@@ -49,6 +55,7 @@ golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/tools v0.34.0 h1:qIpSLOxeCYGg9TrcJokLBG4KFA6d795g0xkBkiESGlo=
golang.org/x/tools v0.34.0/go.mod h1:pAP9OwEaY1CAW3HOmg3hLZC5Z0CCmzjAF2UQMSqNARg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.26.2 h1:991HMkLjJzYBIfha6ECZdjrIYz2/1ayr+FL8GN+CNzM=
modernc.org/cc/v4 v4.26.2/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
145 internal/rpc/client.go (new file)
@@ -0,0 +1,145 @@
package rpc

import (
    "bufio"
    "encoding/json"
    "fmt"
    "net"
    "os"
    "path/filepath"
    "sync"
    "time"
)

// Client is an RPC client that communicates with the bd daemon.
type Client struct {
    sockPath string
    mu       sync.Mutex
    conn     net.Conn
}

// TryConnect attempts to connect to the daemon and returns a client if successful.
// Returns nil if the daemon is not running or socket doesn't exist.
func TryConnect(sockPath string) *Client {
    if _, err := os.Stat(sockPath); os.IsNotExist(err) {
        return nil
    }

    conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
    if err != nil {
        return nil
    }

    client := &Client{
        sockPath: sockPath,
        conn:     conn,
    }

    if !client.ping() {
        conn.Close()
        return nil
    }

    return client
}

// ping sends a test request to verify the daemon is responsive.
func (c *Client) ping() bool {
    req, _ := NewRequest(OpStats, nil)
    _, err := c.Execute(req)
    return err == nil
}

// Execute sends a request to the daemon and returns the response.
func (c *Client) Execute(req *Request) (*Response, error) {
    c.mu.Lock()
    defer c.mu.Unlock()

    if c.conn == nil {
        return nil, fmt.Errorf("client not connected")
    }

    reqJSON, err := json.Marshal(req)
    if err != nil {
        return nil, fmt.Errorf("failed to marshal request: %w", err)
    }

    reqJSON = append(reqJSON, '\n')

    if _, err := c.conn.Write(reqJSON); err != nil {
        c.reconnect()
        return nil, fmt.Errorf("failed to write request: %w", err)
    }

    scanner := bufio.NewScanner(c.conn)
    if !scanner.Scan() {
        if err := scanner.Err(); err != nil {
            c.reconnect()
            return nil, fmt.Errorf("failed to read response: %w", err)
        }
        c.reconnect()
        return nil, fmt.Errorf("connection closed")
    }

    var resp Response
    if err := json.Unmarshal(scanner.Bytes(), &resp); err != nil {
        return nil, fmt.Errorf("failed to unmarshal response: %w", err)
    }

    return &resp, nil
}

// reconnect attempts to reconnect to the daemon.
func (c *Client) reconnect() error {
    if c.conn != nil {
        c.conn.Close()
        c.conn = nil
    }

    var err error
    backoff := 100 * time.Millisecond

    for i := 0; i < 3; i++ {
        c.conn, err = net.DialTimeout("unix", c.sockPath, 2*time.Second)
        if err == nil {
            return nil
        }
        time.Sleep(backoff)
        backoff *= 2
    }

    return fmt.Errorf("failed to reconnect after 3 attempts: %w", err)
}

// Close closes the client connection.
func (c *Client) Close() error {
    c.mu.Lock()
    defer c.mu.Unlock()

    if c.conn != nil {
        err := c.conn.Close()
        c.conn = nil
        return err
    }
    return nil
}

// SocketPath returns the default socket path for the given beads directory.
func SocketPath(beadsDir string) string {
    return filepath.Join(beadsDir, "bd.sock")
}

// DefaultSocketPath returns the socket path in the current working directory's .beads folder.
func DefaultSocketPath() (string, error) {
    wd, err := os.Getwd()
    if err != nil {
        return "", fmt.Errorf("failed to get working directory: %w", err)
    }

    beadsDir := filepath.Join(wd, ".beads")
    if _, err := os.Stat(beadsDir); os.IsNotExist(err) {
        return "", fmt.Errorf(".beads directory not found")
    }

    return SocketPath(beadsDir), nil
}
115 internal/rpc/client_test.go (new file)
@@ -0,0 +1,115 @@
package rpc

import (
    "path/filepath"
    "testing"
    "time"
)

func TestTryConnectNoSocket(t *testing.T) {
    tmpDir := t.TempDir()
    sockPath := filepath.Join(tmpDir, "nonexistent.sock")

    client := TryConnect(sockPath)
    if client != nil {
        t.Error("Expected nil client when socket doesn't exist")
    }
}

func TestTryConnectSuccess(t *testing.T) {
    tmpDir := t.TempDir()
    sockPath := filepath.Join(tmpDir, "test.sock")

    store := &mockStorage{}
    server := NewServer(store, sockPath)

    if err := server.Start(); err != nil {
        t.Fatalf("Failed to start server: %v", err)
    }
    defer server.Stop()

    time.Sleep(100 * time.Millisecond)

    client := TryConnect(sockPath)
    if client == nil {
        t.Fatal("Expected client to connect successfully")
    }
    defer client.Close()

    if client.sockPath != sockPath {
        t.Errorf("Expected sockPath %s, got %s", sockPath, client.sockPath)
    }
}

func TestClientExecute(t *testing.T) {
    tmpDir := t.TempDir()
    sockPath := filepath.Join(tmpDir, "test.sock")

    store := &mockStorage{}
    server := NewServer(store, sockPath)

    if err := server.Start(); err != nil {
        t.Fatalf("Failed to start server: %v", err)
    }
    defer server.Stop()

    time.Sleep(100 * time.Millisecond)

    client := TryConnect(sockPath)
    if client == nil {
        t.Fatal("Failed to connect to server")
    }
    defer client.Close()

    req, _ := NewRequest(OpList, map[string]string{"status": "open"})
    resp, err := client.Execute(req)
    if err != nil {
        t.Fatalf("Execute failed: %v", err)
    }

    if resp.Success {
        t.Error("Expected error response for unimplemented operation")
    }
}

func TestClientMultipleRequests(t *testing.T) {
    tmpDir := t.TempDir()
    sockPath := filepath.Join(tmpDir, "test.sock")

    store := &mockStorage{}
    server := NewServer(store, sockPath)

    if err := server.Start(); err != nil {
        t.Fatalf("Failed to start server: %v", err)
    }
    defer server.Stop()

    time.Sleep(100 * time.Millisecond)

    client := TryConnect(sockPath)
    if client == nil {
        t.Fatal("Failed to connect to server")
    }
    defer client.Close()

    for i := 0; i < 5; i++ {
        req, _ := NewRequest(OpStats, nil)
        resp, err := client.Execute(req)
        if err != nil {
            t.Fatalf("Execute %d failed: %v", i, err)
        }
        if resp == nil {
            t.Fatalf("Execute %d returned nil response", i)
        }
    }
}

func TestSocketPath(t *testing.T) {
    beadsDir := "/home/user/project/.beads"
    expected := "/home/user/project/.beads/bd.sock"

    result := SocketPath(beadsDir)
    if result != expected {
        t.Errorf("Expected %s, got %s", expected, result)
    }
}
88 internal/rpc/protocol.go (new file)
@@ -0,0 +1,88 @@
package rpc

import "encoding/json"

// Request represents an RPC request from a client to the daemon.
type Request struct {
    Operation string          `json:"operation"`
    Args      json.RawMessage `json:"args"`
}

// Response represents an RPC response from the daemon to a client.
type Response struct {
    Success bool            `json:"success"`
    Data    json.RawMessage `json:"data,omitempty"`
    Error   string          `json:"error,omitempty"`
}

// BatchRequest represents a batch of operations to execute atomically.
type BatchRequest struct {
    Operations []Request `json:"operations"`
    Atomic     bool      `json:"atomic"`
}

// Operations supported by the daemon.
const (
    OpCreate       = "create"
    OpUpdate       = "update"
    OpClose        = "close"
    OpList         = "list"
    OpShow         = "show"
    OpReady        = "ready"
    OpBlocked      = "blocked"
    OpStats        = "stats"
    OpDepAdd       = "dep_add"
    OpDepRemove    = "dep_remove"
    OpDepTree      = "dep_tree"
    OpLabelAdd     = "label_add"
    OpLabelRemove  = "label_remove"
    OpLabelList    = "label_list"
    OpLabelListAll = "label_list_all"
    OpExport       = "export"
    OpImport       = "import"
    OpCompact      = "compact"
    OpRestore      = "restore"
    OpBatch        = "batch"
)

// NewRequest creates a new RPC request with the given operation and arguments.
func NewRequest(operation string, args interface{}) (*Request, error) {
    argsJSON, err := json.Marshal(args)
    if err != nil {
        return nil, err
    }
    return &Request{
        Operation: operation,
        Args:      argsJSON,
    }, nil
}

// NewSuccessResponse creates a successful response with the given data.
func NewSuccessResponse(data interface{}) (*Response, error) {
    dataJSON, err := json.Marshal(data)
    if err != nil {
        return nil, err
    }
    return &Response{
        Success: true,
        Data:    dataJSON,
    }, nil
}

// NewErrorResponse creates an error response with the given error message.
func NewErrorResponse(err error) *Response {
    return &Response{
        Success: false,
        Error:   err.Error(),
    }
}

// UnmarshalArgs unmarshals the request arguments into the given value.
func (r *Request) UnmarshalArgs(v interface{}) error {
    return json.Unmarshal(r.Args, v)
}

// UnmarshalData unmarshals the response data into the given value.
func (r *Response) UnmarshalData(v interface{}) error {
    return json.Unmarshal(r.Data, v)
}
147 internal/rpc/protocol_test.go (new file)
@@ -0,0 +1,147 @@
package rpc

import (
    "encoding/json"
    "errors"
    "testing"
)

func TestNewRequest(t *testing.T) {
    args := map[string]string{
        "title":    "Test issue",
        "priority": "1",
    }

    req, err := NewRequest(OpCreate, args)
    if err != nil {
        t.Fatalf("NewRequest failed: %v", err)
    }

    if req.Operation != OpCreate {
        t.Errorf("Expected operation %s, got %s", OpCreate, req.Operation)
    }

    var unmarshaledArgs map[string]string
    if err := req.UnmarshalArgs(&unmarshaledArgs); err != nil {
        t.Fatalf("UnmarshalArgs failed: %v", err)
    }

    if unmarshaledArgs["title"] != args["title"] {
        t.Errorf("Expected title %s, got %s", args["title"], unmarshaledArgs["title"])
    }
}

func TestNewSuccessResponse(t *testing.T) {
    data := map[string]interface{}{
        "id":     "bd-123",
        "status": "open",
    }

    resp, err := NewSuccessResponse(data)
    if err != nil {
        t.Fatalf("NewSuccessResponse failed: %v", err)
    }

    if !resp.Success {
        t.Error("Expected success=true")
    }

    if resp.Error != "" {
        t.Errorf("Expected empty error, got %s", resp.Error)
    }

    var unmarshaledData map[string]interface{}
    if err := resp.UnmarshalData(&unmarshaledData); err != nil {
        t.Fatalf("UnmarshalData failed: %v", err)
    }

    if unmarshaledData["id"] != data["id"] {
        t.Errorf("Expected id %s, got %s", data["id"], unmarshaledData["id"])
    }
}

func TestNewErrorResponse(t *testing.T) {
    testErr := errors.New("test error")

    resp := NewErrorResponse(testErr)

    if resp.Success {
        t.Error("Expected success=false")
    }

    if resp.Error != testErr.Error() {
        t.Errorf("Expected error %s, got %s", testErr.Error(), resp.Error)
    }

    if len(resp.Data) != 0 {
        t.Errorf("Expected empty data, got %v", resp.Data)
    }
}

func TestRequestResponseJSON(t *testing.T) {
    req := &Request{
        Operation: OpList,
        Args:      json.RawMessage(`{"status":"open"}`),
    }

    reqJSON, err := json.Marshal(req)
    if err != nil {
        t.Fatalf("Marshal request failed: %v", err)
    }

    var unmarshaledReq Request
    if err := json.Unmarshal(reqJSON, &unmarshaledReq); err != nil {
        t.Fatalf("Unmarshal request failed: %v", err)
    }

    if unmarshaledReq.Operation != req.Operation {
        t.Errorf("Expected operation %s, got %s", req.Operation, unmarshaledReq.Operation)
    }

    resp := &Response{
        Success: true,
        Data:    json.RawMessage(`{"count":5}`),
    }

    respJSON, err := json.Marshal(resp)
    if err != nil {
        t.Fatalf("Marshal response failed: %v", err)
    }

    var unmarshaledResp Response
    if err := json.Unmarshal(respJSON, &unmarshaledResp); err != nil {
        t.Fatalf("Unmarshal response failed: %v", err)
    }

    if unmarshaledResp.Success != resp.Success {
        t.Errorf("Expected success %v, got %v", resp.Success, unmarshaledResp.Success)
    }
}

func TestBatchRequest(t *testing.T) {
    req1, _ := NewRequest(OpCreate, map[string]string{"title": "Issue 1"})
    req2, _ := NewRequest(OpCreate, map[string]string{"title": "Issue 2"})

    batch := &BatchRequest{
        Operations: []Request{*req1, *req2},
        Atomic:     true,
    }

    batchJSON, err := json.Marshal(batch)
    if err != nil {
        t.Fatalf("Marshal batch failed: %v", err)
    }

    var unmarshaledBatch BatchRequest
    if err := json.Unmarshal(batchJSON, &unmarshaledBatch); err != nil {
        t.Fatalf("Unmarshal batch failed: %v", err)
    }

    if len(unmarshaledBatch.Operations) != 2 {
        t.Errorf("Expected 2 operations, got %d", len(unmarshaledBatch.Operations))
    }

    if !unmarshaledBatch.Atomic {
        t.Error("Expected atomic=true")
    }
}
265	internal/rpc/server.go	Normal file
@@ -0,0 +1,265 @@
package rpc

import (
	"bufio"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"os"
	"sync"

	"github.com/steveyegge/beads/internal/storage"
)

// Server is the RPC server that handles requests from bd clients.
type Server struct {
	storage  storage.Storage
	listener net.Listener
	sockPath string
	ctx      context.Context
	cancel   context.CancelFunc
	wg       sync.WaitGroup
	mu       sync.Mutex // Protects shutdown state
	shutdown bool
}

// NewServer creates a new RPC server.
func NewServer(store storage.Storage, sockPath string) *Server {
	ctx, cancel := context.WithCancel(context.Background())
	return &Server{
		storage:  store,
		sockPath: sockPath,
		ctx:      ctx,
		cancel:   cancel,
	}
}

// Start starts the RPC server listening on the Unix socket.
func (s *Server) Start() error {
	if err := os.Remove(s.sockPath); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("failed to remove existing socket: %w", err)
	}

	listener, err := net.Listen("unix", s.sockPath)
	if err != nil {
		return fmt.Errorf("failed to listen on socket %s: %w", s.sockPath, err)
	}
	s.listener = listener

	if err := os.Chmod(s.sockPath, 0600); err != nil {
		s.listener.Close()
		return fmt.Errorf("failed to set socket permissions: %w", err)
	}

	s.wg.Add(1)
	go s.acceptLoop()

	return nil
}

// Stop gracefully stops the RPC server.
func (s *Server) Stop() error {
	s.mu.Lock()
	if s.shutdown {
		s.mu.Unlock()
		return nil
	}
	s.shutdown = true
	s.mu.Unlock()

	s.cancel()

	if s.listener != nil {
		s.listener.Close()
	}

	s.wg.Wait()

	if err := os.Remove(s.sockPath); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("failed to remove socket: %w", err)
	}

	return nil
}

// acceptLoop accepts incoming connections and handles them.
func (s *Server) acceptLoop() {
	defer s.wg.Done()

	for {
		conn, err := s.listener.Accept()
		if err != nil {
			select {
			case <-s.ctx.Done():
				return
			default:
				fmt.Fprintf(os.Stderr, "Error accepting connection: %v\n", err)
				continue
			}
		}

		s.wg.Add(1)
		go s.handleConnection(conn)
	}
}

// handleConnection handles a single client connection.
func (s *Server) handleConnection(conn net.Conn) {
	defer s.wg.Done()
	defer conn.Close()

	scanner := bufio.NewScanner(conn)
	writer := bufio.NewWriter(conn)

	for scanner.Scan() {
		select {
		case <-s.ctx.Done():
			return
		default:
		}

		line := scanner.Bytes()
		var req Request
		if err := json.Unmarshal(line, &req); err != nil {
			resp := NewErrorResponse(fmt.Errorf("invalid request JSON: %w", err))
			s.sendResponse(writer, resp)
			continue
		}

		resp := s.handleRequest(&req)
		s.sendResponse(writer, resp)
	}

	if err := scanner.Err(); err != nil {
		fmt.Fprintf(os.Stderr, "Error reading from connection: %v\n", err)
	}
}

// sendResponse sends a response to the client.
func (s *Server) sendResponse(writer *bufio.Writer, resp *Response) {
	respJSON, err := json.Marshal(resp)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error marshaling response: %v\n", err)
		return
	}

	if _, err := writer.Write(respJSON); err != nil {
		fmt.Fprintf(os.Stderr, "Error writing response: %v\n", err)
		return
	}
	if _, err := writer.Write([]byte("\n")); err != nil {
		fmt.Fprintf(os.Stderr, "Error writing newline: %v\n", err)
		return
	}
	if err := writer.Flush(); err != nil {
		fmt.Fprintf(os.Stderr, "Error flushing response: %v\n", err)
	}
}

// handleRequest processes an RPC request and returns a response.
func (s *Server) handleRequest(req *Request) *Response {
	ctx := context.Background()

	switch req.Operation {
	case OpBatch:
		return s.handleBatch(ctx, req)
	case OpCreate:
		return s.handleCreate(ctx, req)
	case OpUpdate:
		return s.handleUpdate(ctx, req)
	case OpClose:
		return s.handleClose(ctx, req)
	case OpList:
		return s.handleList(ctx, req)
	case OpShow:
		return s.handleShow(ctx, req)
	case OpReady:
		return s.handleReady(ctx, req)
	case OpBlocked:
		return s.handleBlocked(ctx, req)
	case OpStats:
		return s.handleStats(ctx, req)
	case OpDepAdd:
		return s.handleDepAdd(ctx, req)
	case OpDepRemove:
		return s.handleDepRemove(ctx, req)
	case OpDepTree:
		return s.handleDepTree(ctx, req)
	case OpLabelAdd:
		return s.handleLabelAdd(ctx, req)
	case OpLabelRemove:
		return s.handleLabelRemove(ctx, req)
	case OpLabelList:
		return s.handleLabelList(ctx, req)
	case OpLabelListAll:
		return s.handleLabelListAll(ctx, req)
	default:
		return NewErrorResponse(fmt.Errorf("unknown operation: %s", req.Operation))
	}
}

// Placeholder handlers - will be implemented in future commits
func (s *Server) handleBatch(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("batch operation not yet implemented"))
}

func (s *Server) handleCreate(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("create operation not yet implemented"))
}

func (s *Server) handleUpdate(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("update operation not yet implemented"))
}

func (s *Server) handleClose(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("close operation not yet implemented"))
}

func (s *Server) handleList(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("list operation not yet implemented"))
}

func (s *Server) handleShow(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("show operation not yet implemented"))
}

func (s *Server) handleReady(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("ready operation not yet implemented"))
}

func (s *Server) handleBlocked(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("blocked operation not yet implemented"))
}

func (s *Server) handleStats(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("stats operation not yet implemented"))
}

func (s *Server) handleDepAdd(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("dep_add operation not yet implemented"))
}

func (s *Server) handleDepRemove(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("dep_remove operation not yet implemented"))
}

func (s *Server) handleDepTree(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("dep_tree operation not yet implemented"))
}

func (s *Server) handleLabelAdd(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("label_add operation not yet implemented"))
}

func (s *Server) handleLabelRemove(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("label_remove operation not yet implemented"))
}

func (s *Server) handleLabelList(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("label_list operation not yet implemented"))
}

func (s *Server) handleLabelListAll(ctx context.Context, req *Request) *Response {
	return NewErrorResponse(fmt.Errorf("label_list_all operation not yet implemented"))
}
214	internal/rpc/server_test.go	Normal file
@@ -0,0 +1,214 @@
package rpc

import (
	"context"
	"encoding/json"
	"net"
	"os"
	"path/filepath"
	"testing"
	"time"

	"github.com/steveyegge/beads/internal/types"
)

type mockStorage struct{}

func (m *mockStorage) CreateIssue(ctx context.Context, issue *types.Issue, actor string) error {
	return nil
}
func (m *mockStorage) CreateIssues(ctx context.Context, issues []*types.Issue, actor string) error {
	return nil
}
func (m *mockStorage) GetIssue(ctx context.Context, id string) (*types.Issue, error) { return nil, nil }
func (m *mockStorage) UpdateIssue(ctx context.Context, id string, updates map[string]interface{}, actor string) error {
	return nil
}
func (m *mockStorage) CloseIssue(ctx context.Context, id, reason, actor string) error { return nil }
func (m *mockStorage) SearchIssues(ctx context.Context, query string, filter types.IssueFilter) ([]*types.Issue, error) {
	return nil, nil
}
func (m *mockStorage) AddDependency(ctx context.Context, dep *types.Dependency, actor string) error {
	return nil
}
func (m *mockStorage) RemoveDependency(ctx context.Context, issueID, dependsOnID, actor string) error {
	return nil
}
func (m *mockStorage) GetDependencies(ctx context.Context, issueID string) ([]*types.Issue, error) {
	return nil, nil
}
func (m *mockStorage) GetDependents(ctx context.Context, issueID string) ([]*types.Issue, error) {
	return nil, nil
}
func (m *mockStorage) GetDependencyRecords(ctx context.Context, issueID string) ([]*types.Dependency, error) {
	return nil, nil
}
func (m *mockStorage) GetAllDependencyRecords(ctx context.Context) (map[string][]*types.Dependency, error) {
	return nil, nil
}
func (m *mockStorage) GetDependencyTree(ctx context.Context, issueID string, maxDepth int) ([]*types.TreeNode, error) {
	return nil, nil
}
func (m *mockStorage) DetectCycles(ctx context.Context) ([][]*types.Issue, error) { return nil, nil }
func (m *mockStorage) AddLabel(ctx context.Context, issueID, label, actor string) error {
	return nil
}
func (m *mockStorage) RemoveLabel(ctx context.Context, issueID, label, actor string) error {
	return nil
}
func (m *mockStorage) GetLabels(ctx context.Context, issueID string) ([]string, error) {
	return nil, nil
}
func (m *mockStorage) GetIssuesByLabel(ctx context.Context, label string) ([]*types.Issue, error) {
	return nil, nil
}
func (m *mockStorage) GetReadyWork(ctx context.Context, filter types.WorkFilter) ([]*types.Issue, error) {
	return nil, nil
}
func (m *mockStorage) GetBlockedIssues(ctx context.Context) ([]*types.BlockedIssue, error) {
	return nil, nil
}
func (m *mockStorage) AddComment(ctx context.Context, issueID, actor, comment string) error {
	return nil
}
func (m *mockStorage) GetEvents(ctx context.Context, issueID string, limit int) ([]*types.Event, error) {
	return nil, nil
}
func (m *mockStorage) GetStatistics(ctx context.Context) (*types.Statistics, error) {
	return nil, nil
}
func (m *mockStorage) GetDirtyIssues(ctx context.Context) ([]string, error) { return nil, nil }
func (m *mockStorage) ClearDirtyIssues(ctx context.Context) error           { return nil }
func (m *mockStorage) ClearDirtyIssuesByID(ctx context.Context, issueIDs []string) error {
	return nil
}
func (m *mockStorage) SetConfig(ctx context.Context, key, value string) error { return nil }
func (m *mockStorage) GetConfig(ctx context.Context, key string) (string, error) {
	return "", nil
}
func (m *mockStorage) SetMetadata(ctx context.Context, key, value string) error { return nil }
func (m *mockStorage) GetMetadata(ctx context.Context, key string) (string, error) {
	return "", nil
}
func (m *mockStorage) UpdateIssueID(ctx context.Context, oldID, newID string, issue *types.Issue, actor string) error {
	return nil
}
func (m *mockStorage) RenameDependencyPrefix(ctx context.Context, oldPrefix, newPrefix string) error {
	return nil
}
func (m *mockStorage) RenameCounterPrefix(ctx context.Context, oldPrefix, newPrefix string) error {
	return nil
}
func (m *mockStorage) Close() error { return nil }

func TestServerStartStop(t *testing.T) {
	tmpDir := t.TempDir()
	sockPath := filepath.Join(tmpDir, "test.sock")

	store := &mockStorage{}
	server := NewServer(store, sockPath)

	if err := server.Start(); err != nil {
		t.Fatalf("Failed to start server: %v", err)
	}

	if _, err := os.Stat(sockPath); os.IsNotExist(err) {
		t.Fatal("Socket file was not created")
	}

	if err := server.Stop(); err != nil {
		t.Fatalf("Failed to stop server: %v", err)
	}

	if _, err := os.Stat(sockPath); !os.IsNotExist(err) {
		t.Fatal("Socket file was not removed")
	}
}

func TestServerHandlesRequest(t *testing.T) {
	tmpDir := t.TempDir()
	sockPath := filepath.Join(tmpDir, "test.sock")

	store := &mockStorage{}
	server := NewServer(store, sockPath)

	if err := server.Start(); err != nil {
		t.Fatalf("Failed to start server: %v", err)
	}
	defer server.Stop()

	time.Sleep(100 * time.Millisecond)

	conn, err := net.Dial("unix", sockPath)
	if err != nil {
		t.Fatalf("Failed to connect to server: %v", err)
	}
	defer conn.Close()

	req, _ := NewRequest(OpStats, nil)
	reqJSON, _ := json.Marshal(req)
	reqJSON = append(reqJSON, '\n')

	if _, err := conn.Write(reqJSON); err != nil {
		t.Fatalf("Failed to write request: %v", err)
	}

	buf := make([]byte, 4096)
	n, err := conn.Read(buf)
	if err != nil {
		t.Fatalf("Failed to read response: %v", err)
	}

	var resp Response
	if err := json.Unmarshal(buf[:n], &resp); err != nil {
		t.Fatalf("Failed to unmarshal response: %v", err)
	}

	if resp.Success {
		t.Error("Expected error response for unimplemented operation")
	}
}

func TestServerRejectsInvalidJSON(t *testing.T) {
	tmpDir := t.TempDir()
	sockPath := filepath.Join(tmpDir, "test.sock")

	store := &mockStorage{}
	server := NewServer(store, sockPath)

	if err := server.Start(); err != nil {
		t.Fatalf("Failed to start server: %v", err)
	}
	defer server.Stop()

	time.Sleep(100 * time.Millisecond)

	conn, err := net.Dial("unix", sockPath)
	if err != nil {
		t.Fatalf("Failed to connect to server: %v", err)
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("invalid json\n")); err != nil {
		t.Fatalf("Failed to write request: %v", err)
	}

	buf := make([]byte, 4096)
	n, err := conn.Read(buf)
	if err != nil {
		t.Fatalf("Failed to read response: %v", err)
	}

	var resp Response
	if err := json.Unmarshal(buf[:n], &resp); err != nil {
		t.Fatalf("Failed to unmarshal response: %v", err)
	}

	if resp.Success {
		t.Error("Expected error response for invalid JSON")
	}

	if resp.Error == "" {
		t.Error("Expected error message")
	}
}