Add EventCompacted to event system (bd-262)
- Add EventCompacted event type constant
- Add compaction fields to Issue struct (CompactionLevel, CompactedAt, OriginalSize)
- Update ApplyCompaction to record compaction events with JSON metadata
- Update bd show to display compaction status with emoji indicators
- Update GetIssue query to load compaction fields
- All tests passing

Amp-Thread-ID: https://ampcode.com/threads/T-3f7946c6-8f5e-4a81-9527-1217041c7b39
Co-authored-by: Amp <amp@ampcode.com>
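The "JSON metadata" recorded with each compaction event can be sketched as a small Go struct (field names follow the bd-262 design notes; the struct and helper here are illustrative, not taken from the codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// compactionEvent mirrors the JSON metadata attached to an EventCompacted
// record; field names follow the bd-262 design notes.
type compactionEvent struct {
	Tier           int     `json:"tier"`
	OriginalSize   int     `json:"original_size"`
	CompressedSize int     `json:"compressed_size"`
	ReductionPct   float64 `json:"reduction_pct"`
}

// eventMetadata builds the payload; the helper name is illustrative.
func eventMetadata(tier, originalSize, compressedSize int) compactionEvent {
	return compactionEvent{
		Tier:           tier,
		OriginalSize:   originalSize,
		CompressedSize: compressedSize,
		ReductionPct:   (1 - float64(compressedSize)/float64(originalSize)) * 100,
	}
}

func main() {
	// Sizes taken from the example in bd-263's design (2,341 -> 468 bytes).
	b, _ := json.Marshal(eventMetadata(1, 2341, 468))
	fmt.Println(string(b))
}
```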
@@ -178,7 +178,7 @@
 {"id":"bd-26","title":"Optimize reference updates to avoid loading all issues into memory","description":"In updateReferences(), we call SearchIssues with no filter to get ALL issues for updating references. For large databases (10k+ issues), this loads everything into memory. Options: 1) Use batched processing with LIMIT/OFFSET, 2) Use SQL UPDATE with REPLACE() directly, 3) Stream results instead of loading all at once. Located in collision.go:266","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-14T14:43:06.911497-07:00","updated_at":"2025-10-15T16:27:22.001829-07:00"}
 {"id":"bd-260","title":"Add `bd compact --restore` functionality","description":"Implement restore command to undo compaction from snapshots.","design":"Add to `cmd/bd/compact.go`:\n\n```go\nvar compactRestore string\n\ncompactCmd.Flags().StringVar(\u0026compactRestore, \"restore\", \"\", \"Restore issue from snapshot\")\n```\n\nProcess:\n1. Load snapshot for issue\n2. Parse JSON content\n3. Update issue with original content\n4. Set compaction_level = 0, compacted_at = NULL, original_size = NULL\n5. Record event (EventRestored or EventUpdated)\n6. Mark dirty for export","acceptance_criteria":"- Restores exact original content\n- Handles multiple snapshots (use latest by default)\n- `--level` flag to choose specific snapshot\n- Updates compaction_level correctly\n- Exports restored content to JSONL\n- Shows before/after in output","notes":"Won't fix - snapshots defeat the purpose of compaction","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T21:51:23.240267-07:00","updated_at":"2025-10-16T00:26:13.913152-07:00","closed_at":"2025-10-16T00:26:13.913156-07:00","labels":["---","cli","compaction","restore"]}
 {"id":"bd-261","title":"Add `bd compact --stats` command","description":"Add statistics command showing compaction status and potential savings.","design":"```go\nvar compactStats bool\n\ncompactCmd.Flags().BoolVar(\u0026compactStats, \"stats\", false, \"Show compaction statistics\")\n```\n\nOutput:\n- Total issues, by compaction level (0, 1, 2)\n- Current DB size vs estimated uncompacted size\n- Space savings (KB/MB and %)\n- Candidates for each tier with size estimates\n- Estimated API cost (Haiku pricing)","acceptance_criteria":"- Accurate counts by compaction_level\n- Size calculations include all text fields (UTF-8 bytes)\n- Shows candidates with eligibility reasons\n- Cost estimation based on current Haiku pricing\n- JSON output supported\n- Clear, readable table format","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T21:51:23.242041-07:00","updated_at":"2025-10-16T00:26:06.624041-07:00","closed_at":"2025-10-16T00:26:06.624041-07:00","labels":["---","compaction","reporting","stats"]}
-{"id":"bd-262","title":"Add EventCompacted to event system","description":"Add new event type for tracking compaction in audit trail.","design":"1. Add to `internal/types/types.go`:\n```go\nconst EventCompacted EventType = \"compacted\"\n```\n\n2. Record event during compaction:\n```go\neventData := map[string]interface{}{\n    \"tier\": tier,\n    \"original_size\": originalSize,\n    \"compressed_size\": compressedSize,\n    \"reduction_pct\": (1 - float64(compressedSize)/float64(originalSize)) * 100,\n}\n```\n\n3. Update event display in `bd show`.","acceptance_criteria":"- Event includes tier, original_size, compressed_size, reduction_pct\n- Shows in event history (`bd events \u003cid\u003e`)\n- Exports to JSONL correctly\n- `bd show` displays compaction status and marker","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-15T21:51:23.244219-07:00","updated_at":"2025-10-15T21:51:23.244219-07:00","labels":["---","audit","compaction","events"]}
+{"id":"bd-262","title":"Add EventCompacted to event system","description":"Add new event type for tracking compaction in audit trail.","design":"1. Add to `internal/types/types.go`:\n```go\nconst EventCompacted EventType = \"compacted\"\n```\n\n2. Record event during compaction:\n```go\neventData := map[string]interface{}{\n    \"tier\": tier,\n    \"original_size\": originalSize,\n    \"compressed_size\": compressedSize,\n    \"reduction_pct\": (1 - float64(compressedSize)/float64(originalSize)) * 100,\n}\n```\n\n3. Update event display in `bd show`.","acceptance_criteria":"- Event includes tier, original_size, compressed_size, reduction_pct\n- Shows in event history (`bd events \u003cid\u003e`)\n- Exports to JSONL correctly\n- `bd show` displays compaction status and marker","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-15T21:51:23.244219-07:00","updated_at":"2025-10-16T00:59:17.465182-07:00","closed_at":"2025-10-16T00:59:17.465182-07:00","labels":["---","audit","compaction","events"]}
 {"id":"bd-263","title":"Add compaction indicator to `bd show`","description":"Update `bd show` command to display compaction status prominently.","design":"Add to issue display:\n```\nbd-42: Fix authentication bug [CLOSED] 🗜️\n\nStatus: closed (compacted L1)\n...\n\n---\n💾 Restore: bd compact --restore bd-42\n📊 Original: 2,341 bytes | Compressed: 468 bytes (80% reduction)\n🗜️ Compacted: 2025-10-15 (Tier 1)\n```\n\nEmoji indicators:\n- Tier 1: 🗜️\n- Tier 2: 📦","acceptance_criteria":"- Compaction status visible in title line\n- Footer shows size savings when compacted\n- Restore command shown for compacted issues\n- Works with `--json` output (includes compaction fields)\n- Emoji optional (controlled by config or terminal detection)","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-15T21:51:23.253091-07:00","updated_at":"2025-10-15T21:51:23.253091-07:00","labels":["---","compaction","display","ui"]}
 {"id":"bd-264","title":"Write compaction tests","description":"Comprehensive test suite for compaction functionality.","design":"Test coverage:\n\n1. **Candidate Identification:**\n   - Eligibility by time\n   - Dependency depth checking\n   - Mixed status dependents\n   - Edge cases (no deps, circular deps)\n\n2. **Snapshots:**\n   - Create and restore\n   - Multiple snapshots per issue\n   - Content integrity (UTF-8, special chars)\n\n3. **Tier 1 Compaction:**\n   - Single issue compaction\n   - Batch processing\n   - Error handling (API failures)\n\n4. **Tier 2 Compaction:**\n   - Requires Tier 1\n   - Events pruning\n   - Commit counting fallback\n\n5. **CLI:**\n   - All flag combinations\n   - Dry-run accuracy\n   - JSON output parsing\n\n6. **Integration:**\n   - End-to-end flow\n   - JSONL export/import\n   - Restore verification","acceptance_criteria":"- Test coverage \u003e80%\n- All edge cases covered\n- Mock Haiku API in tests (no real API calls)\n- Integration tests pass\n- `go test ./...` passes\n- Benchmarks for performance-critical paths","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-15T21:51:23.262504-07:00","updated_at":"2025-10-16T00:02:11.246331-07:00","closed_at":"2025-10-16T00:02:11.246331-07:00","labels":["---","compaction","quality","testing"]}
 {"id":"bd-265","title":"Add compaction documentation","description":"Document compaction feature in README and create detailed COMPACTION.md guide.","design":"**Update README.md:**\n- Add to Features section\n- CLI examples (dry-run, compact, restore, stats)\n- Configuration guide\n- Cost analysis\n\n**Create COMPACTION.md:**\n- How compaction works (architecture overview)\n- When to use each tier\n- Detailed cost analysis with examples\n- Safety mechanisms (snapshots, restore, dry-run)\n- Troubleshooting guide\n- FAQ\n\n**Create examples/compaction/:**\n- `workflow.sh` - Example monthly compaction workflow\n- `cron-compact.sh` - Cron job setup\n- `auto-compact.sh` - Auto-compaction script","acceptance_criteria":"- README.md updated with compaction section\n- COMPACTION.md comprehensive and clear\n- Examples work as documented (tested)\n- Screenshots or ASCII examples included\n- API key setup documented (env var vs config)\n- Covers common questions and issues","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-15T21:51:23.265589-07:00","updated_at":"2025-10-15T21:51:23.265589-07:00","labels":["---","compaction","docs","documentation","examples"]}
@@ -1003,6 +1003,30 @@ var showCmd = &cobra.Command{
 	fmt.Printf("Created: %s\n", issue.CreatedAt.Format("2006-01-02 15:04"))
 	fmt.Printf("Updated: %s\n", issue.UpdatedAt.Format("2006-01-02 15:04"))
+
+	// Show compaction status
+	if issue.CompactionLevel > 0 {
+		tierEmoji := ""
+		if issue.CompactionLevel == 1 {
+			tierEmoji = "🗜️ "
+		} else if issue.CompactionLevel == 2 {
+			tierEmoji = "📦 "
+		}
+		fmt.Printf("\n%sCompacted: Tier %d", tierEmoji, issue.CompactionLevel)
+		if issue.CompactedAt != nil {
+			fmt.Printf(" on %s", issue.CompactedAt.Format("2006-01-02"))
+		}
+		if issue.OriginalSize > 0 {
+			currentSize := len(issue.Description) + len(issue.Design) + len(issue.Notes) + len(issue.AcceptanceCriteria)
+			saved := issue.OriginalSize - currentSize
+			if saved > 0 {
+				reduction := float64(saved) / float64(issue.OriginalSize) * 100
+				fmt.Printf(" (%.0f%% reduction, saved %d bytes)", reduction, saved)
+			}
+		}
+		fmt.Println()
+		fmt.Printf("💾 Restore: bd compact --restore %s\n", issue.ID)
+	}
 
 	if issue.Description != "" {
 		fmt.Printf("\nDescription:\n%s\n", issue.Description)
 	}
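One subtlety in the size footer above: Go's len() on a string counts UTF-8 bytes, not characters, so summing len() over the text fields matches bd-261's "UTF-8 bytes" acceptance criterion. A quick illustration:

```go
package main

import "fmt"

func main() {
	// len(string) counts UTF-8 bytes; "é" encodes as two bytes,
	// so byte length and rune (code point) count differ.
	s := "héllo"
	fmt.Println(len(s))         // 6 bytes
	fmt.Println(len([]rune(s))) // 5 runes
}
```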
@@ -114,7 +114,7 @@ func (c *Compactor) CompactTier1(ctx context.Context, issueID string) error {
 		return fmt.Errorf("failed to update issue: %w", err)
 	}
 
-	if err := c.store.ApplyCompaction(ctx, issueID, 1, originalSize); err != nil {
+	if err := c.store.ApplyCompaction(ctx, issueID, 1, originalSize, compactedSize); err != nil {
 		return fmt.Errorf("failed to set compaction level: %w", err)
 	}
 
@@ -257,7 +257,7 @@ func (c *Compactor) compactSingleWithResult(ctx context.Context, issueID string,
 		return fmt.Errorf("failed to update issue: %w", err)
 	}
 
-	if err := c.store.ApplyCompaction(ctx, issueID, 1, result.OriginalSize); err != nil {
+	if err := c.store.ApplyCompaction(ctx, issueID, 1, result.OriginalSize, result.CompactedSize); err != nil {
 		return fmt.Errorf("failed to set compaction level: %w", err)
 	}
 
@@ -5,6 +5,8 @@ import (
 	"database/sql"
 	"fmt"
 	"time"
+
+	"github.com/steveyegge/beads/internal/types"
 )
 
 // CompactionCandidate represents an issue eligible for compaction
@@ -278,10 +280,16 @@ func (s *SQLiteStorage) CheckEligibility(ctx context.Context, issueID string, ti
 
 // ApplyCompaction updates the compaction metadata for an issue after successfully compacting it.
 // This sets compaction_level, compacted_at, and original_size fields.
-func (s *SQLiteStorage) ApplyCompaction(ctx context.Context, issueID string, level int, originalSize int) error {
+func (s *SQLiteStorage) ApplyCompaction(ctx context.Context, issueID string, level int, originalSize int, compressedSize int) error {
 	now := time.Now().UTC()
 
-	_, err := s.db.ExecContext(ctx, `
+	tx, err := s.db.BeginTx(ctx, nil)
+	if err != nil {
+		return fmt.Errorf("failed to begin transaction: %w", err)
+	}
+	defer tx.Rollback()
+
+	_, err = tx.ExecContext(ctx, `
 	UPDATE issues
 	SET compaction_level = ?,
 	    compacted_at = ?,
@@ -294,5 +302,26 @@ func (s *SQLiteStorage) ApplyCompaction(ctx context.Context, issueID string, lev
 		return fmt.Errorf("failed to apply compaction metadata: %w", err)
 	}
 
+	reductionPct := 0.0
+	if originalSize > 0 {
+		reductionPct = (1.0 - float64(compressedSize)/float64(originalSize)) * 100
+	}
+
+	eventData := fmt.Sprintf(`{"tier":%d,"original_size":%d,"compressed_size":%d,"reduction_pct":%.1f}`,
+		level, originalSize, compressedSize, reductionPct)
+
+	_, err = tx.ExecContext(ctx, `
+		INSERT INTO events (issue_id, event_type, actor, comment)
+		VALUES (?, ?, 'compactor', ?)
+	`, issueID, types.EventCompacted, eventData)
+
+	if err != nil {
+		return fmt.Errorf("failed to record compaction event: %w", err)
+	}
+
+	if err := tx.Commit(); err != nil {
+		return fmt.Errorf("failed to commit transaction: %w", err)
+	}
+
 	return nil
 }
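The guard on originalSize in the hunk above matters: with a zero original size, the reduction formula would divide by zero and produce NaN in the event JSON. A minimal sketch of the same guard, extracted as a standalone function (the function name is illustrative):

```go
package main

import "fmt"

// reductionPct mirrors the guard in ApplyCompaction above: a zero or
// negative original size is reported as 0% rather than dividing by zero.
func reductionPct(originalSize, compressedSize int) float64 {
	if originalSize <= 0 {
		return 0
	}
	return (1.0 - float64(compressedSize)/float64(originalSize)) * 100
}

func main() {
	fmt.Printf("%.1f\n", reductionPct(2341, 468)) // ~80.0
	fmt.Printf("%.1f\n", reductionPct(0, 0))      // guard: 0.0, not NaN
}
```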
@@ -333,7 +333,7 @@ func TestApplyCompaction(t *testing.T) {
 	}
 
 	originalSize := len(issue.Description)
-	err := store.ApplyCompaction(ctx, issue.ID, 1, originalSize)
+	err := store.ApplyCompaction(ctx, issue.ID, 1, originalSize, 500)
 	if err != nil {
 		t.Fatalf("ApplyCompaction failed: %v", err)
 	}
@@ -843,11 +843,14 @@ func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue,
 	var estimatedMinutes sql.NullInt64
 	var assignee sql.NullString
 	var externalRef sql.NullString
+	var compactedAt sql.NullTime
+	var originalSize sql.NullInt64
 
 	err := s.db.QueryRowContext(ctx, `
 		SELECT id, title, description, design, acceptance_criteria, notes,
 		       status, priority, issue_type, assignee, estimated_minutes,
-		       created_at, updated_at, closed_at, external_ref
+		       created_at, updated_at, closed_at, external_ref,
+		       compaction_level, compacted_at, original_size
 		FROM issues
 		WHERE id = ?
 	`, id).Scan(
@@ -855,6 +858,7 @@ func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue,
 		&issue.AcceptanceCriteria, &issue.Notes, &issue.Status,
 		&issue.Priority, &issue.IssueType, &assignee, &estimatedMinutes,
 		&issue.CreatedAt, &issue.UpdatedAt, &closedAt, &externalRef,
+		&issue.CompactionLevel, &compactedAt, &originalSize,
 	)
 
 	if err == sql.ErrNoRows {
@@ -877,6 +881,12 @@ func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue,
 	if externalRef.Valid {
 		issue.ExternalRef = &externalRef.String
 	}
+	if compactedAt.Valid {
+		issue.CompactedAt = &compactedAt.Time
+	}
+	if originalSize.Valid {
+		issue.OriginalSize = int(originalSize.Int64)
+	}
 
 	return &issue, nil
 }
@@ -23,6 +23,9 @@ type Issue struct {
 	UpdatedAt       time.Time     `json:"updated_at"`
 	ClosedAt        *time.Time    `json:"closed_at,omitempty"`
 	ExternalRef     *string       `json:"external_ref,omitempty"` // e.g., "gh-9", "jira-ABC"
+	CompactionLevel int           `json:"compaction_level,omitempty"`
+	CompactedAt     *time.Time    `json:"compacted_at,omitempty"`
+	OriginalSize    int           `json:"original_size,omitempty"`
 	Labels          []string      `json:"labels,omitempty"`       // Populated only for export/import
 	Dependencies    []*Dependency `json:"dependencies,omitempty"` // Populated only for export/import
 }
@@ -160,6 +163,7 @@ const (
 	EventDependencyRemoved EventType = "dependency_removed"
 	EventLabelAdded        EventType = "label_added"
 	EventLabelRemoved      EventType = "label_removed"
+	EventCompacted         EventType = "compacted"
 )
 
 // BlockedIssue extends Issue with blocking information
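On the consuming side, an event-history view (bd-262's "shows in `bd events <id>`" criterion) would decode the metadata stored in the event's comment column. A sketch under stated assumptions: the helper name and output wording are hypothetical, only the JSON keys come from the design:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// formatCompaction renders the EventCompacted metadata stored in an event's
// comment column. The JSON keys follow the bd-262 design; this helper is
// illustrative, not from the actual codebase.
func formatCompaction(comment string) (string, error) {
	var m struct {
		Tier           int     `json:"tier"`
		OriginalSize   int     `json:"original_size"`
		CompressedSize int     `json:"compressed_size"`
		ReductionPct   float64 `json:"reduction_pct"`
	}
	if err := json.Unmarshal([]byte(comment), &m); err != nil {
		return "", err
	}
	return fmt.Sprintf("compacted (tier %d): %d → %d bytes (%.1f%% reduction)",
		m.Tier, m.OriginalSize, m.CompressedSize, m.ReductionPct), nil
}

func main() {
	s, err := formatCompaction(`{"tier":1,"original_size":2341,"compressed_size":468,"reduction_pct":80.0}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```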