Add compacted_at_commit field and git commit capture during compaction

- Add compacted_at_commit field to Issue type (bd-405)
- Add database schema and migration for new field
- Create GetCurrentCommitHash() helper function
- Update ApplyCompaction to store git commit hash (bd-395)
- Update compaction calls to capture current commit
- Update tests to verify commit hash storage
- All tests passing
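
The one design choice worth calling out: a failed hash capture (no git binary, not a repo) yields an empty string, which must become SQL NULL rather than a stored "". A minimal sketch of that normalization, with an illustrative name (the real code inlines this inside ApplyCompaction):

```go
package main

import "fmt"

// normalizeHash maps an empty capture to a nil pointer so the
// compacted_at_commit column stores NULL instead of "".
// Hypothetical helper name for illustration only.
func normalizeHash(hash string) *string {
	if hash == "" {
		return nil
	}
	return &hash
}

func main() {
	fmt.Println(normalizeHash("") == nil) // no hash captured -> NULL
	if p := normalizeHash("65f59e6b01"); p != nil {
		fmt.Println(*p)
	}
}
```

Storing NULL keeps "never compacted" and "compacted outside a repo" distinguishable from a real (if unlikely) empty-string value.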

Amp-Thread-ID: https://ampcode.com/threads/T-5518cccb-7fc9-4dcd-ba5a-e22cd10e45d7
Co-authored-by: Amp <amp@ampcode.com>
Author: Steve Yegge
Date: 2025-10-16 17:43:27 -07:00
parent b17fcdbb2a
commit 65f59e6b01
8 changed files with 81 additions and 12 deletions


@@ -325,7 +325,7 @@
{"id":"bd-392","title":"Epic: Add intelligent database compaction with Claude Haiku","description":"Implement multi-tier database compaction using Claude Haiku to semantically compress old, closed issues. This keeps the database lightweight and agent-friendly while preserving essential context.\n\nGoals:\n- 70-95% space reduction for eligible issues\n- Full restore capability via snapshots\n- Opt-in with dry-run safety\n- ~$1 per 1,000 issues compacted","acceptance_criteria":"- Schema migration with snapshots table\n- Haiku integration for summarization\n- Two-tier compaction (30d, 90d)\n- CLI with dry-run, restore, stats\n- Full test coverage\n- Documentation complete","status":"open","priority":2,"issue_type":"epic","created_at":"2025-10-16T15:25:11.436559-07:00","updated_at":"2025-10-16T15:25:11.436559-07:00"}
{"id":"bd-393","title":"Critical: Auto-import was skipping collisions instead of remapping them","description":"The auto-import mechanism was SKIPPING colliding issues instead of automatically remapping them to new IDs. This caused work from other workers/devices to be LOST during git pull operations.\n\nRoot cause: Lines 283-326 in main.go were filtering out colliding issues instead of calling RemapCollisions() to resolve them.\n\nImpact: Multi-device workflows would silently lose issues when two devices created issues with the same ID.","acceptance_criteria":"- Auto-import detects collisions\n- Calls ScoreCollisions + RemapCollisions automatically\n- Shows remapping notification to user\n- No work is lost from other workers\n- All tests pass","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-10-16T15:25:11.436559-07:00","updated_at":"2025-10-16T15:25:11.436559-07:00","closed_at":"2025-10-16T15:00:37.591033-07:00"}
{"id":"bd-394","title":"GH-11: Add Docker support for hosted/shared instance","description":"Request for Docker container hosting to use beads across multiple dev machines. Would need to consider: centralized database (PostgreSQL?), authentication, concurrent access, API server, etc. This is a significant architectural change from the current local-first model.","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-16T16:03:31.498107-07:00","updated_at":"2025-10-16T16:03:31.498107-07:00","closed_at":"2025-10-16T14:37:09.712087-07:00","external_ref":"gh-11"}
-{"id":"bd-395","title":"Record git commit hash during compaction","description":"Update compact command to capture current git HEAD commit hash and store in compacted_at_commit field","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.501666-07:00","updated_at":"2025-10-16T16:03:31.501666-07:00"}
+{"id":"bd-395","title":"Record git commit hash during compaction","description":"Update compact command to capture current git HEAD commit hash and store in compacted_at_commit field","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.501666-07:00","updated_at":"2025-10-16T17:42:08.99069-07:00","closed_at":"2025-10-16T17:42:08.99069-07:00"}
{"id":"bd-396","title":"Epic: Fix status/closed_at inconsistency (bd-224 solution)","description":"Implement hybrid solution to enforce status/closed_at invariant:\n- Database CHECK constraint\n- Smart UpdateIssue logic\n- Import enforcement\n- Reopen command\n\nThis is a data integrity issue that affects statistics and will cause problems for agent swarms. The hybrid approach provides defense-in-depth.\n\nSee ULTRATHINK_BD224.md for full analysis and rationale.\n\nParent of all implementation tasks for this fix.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-16T16:03:31.504655-07:00","updated_at":"2025-10-16T16:03:31.504655-07:00"}
{"id":"bd-397","title":"Audit and document all inconsistent issues in database","description":"Before we add the constraint, we need to know what data is inconsistent.\n\nSteps:\n1. Query database for status/closed_at mismatches\n2. Document each case (how many, which issues, patterns)\n3. Decide on cleanup strategy (trust status vs trust closed_at)\n4. Create SQL script for cleanup\n\nOutput: Document with counts and cleanup SQL ready to review.\n\nThis unblocks the migration work.","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.520913-07:00","updated_at":"2025-10-16T16:03:31.520913-07:00"}
{"id":"bd-398","title":"Reach 1.0 release milestone","description":"Stabilize API, finalize documentation, comprehensive testing","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-16T16:03:31.523258-07:00","updated_at":"2025-10-16T16:03:31.523258-07:00"}
@@ -337,7 +337,7 @@
{"id":"bd-402","title":"Consider batching API for bulk issue creation (recovered from bd-222)","description":"Current CreateIssue acquires a dedicated connection for each call. For bulk imports or agent workflows creating many issues, a batched API could improve performance:\n\nCreateIssues(ctx, issues []*Issue, actor string) error\n\nThis would:\n- Acquire one connection\n- Use one IMMEDIATE transaction\n- Insert all issues atomically\n- Reduce connection overhead\n\nNot urgent - current approach is correct and fast enough for typical use.\n\n**Recovered from:** bd-360 (lost in auto-import bug, see LOST_ISSUES_RECOVERY.md)","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-16T16:03:31.547562-07:00","updated_at":"2025-10-16T16:03:31.547562-07:00"}
{"id":"bd-403","title":"Use safer placeholder pattern in replaceIDReferences","description":"Currently uses bd-324 which could theoretically collide with user text. Use a truly unique placeholder like null bytes: \\x00REMAP\\x00_0_\\x00 which are unlikely to appear in normal text. Located in collision.go:324. Very low probability issue but worth fixing for completeness.","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-16T16:03:31.550675-07:00","updated_at":"2025-10-16T16:03:31.550675-07:00"}
{"id":"bd-404","title":"Git-based restoration for compacted issues","description":"Store git commit hash at compaction time to enable restoration of full issue history from version control. When issues are compacted, record the current git commit hash so users can restore the original uncompacted issue from git history.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-10-16T16:03:31.564412-07:00","updated_at":"2025-10-16T16:03:31.564412-07:00"}
-{"id":"bd-405","title":"Add compacted_at_commit field to Issue type","description":"Add optional compacted_at_commit string field to store git commit hash when issue is compacted","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.57487-07:00","updated_at":"2025-10-16T16:03:31.57487-07:00"}
+{"id":"bd-405","title":"Add compacted_at_commit field to Issue type","description":"Add optional compacted_at_commit string field to store git commit hash when issue is compacted","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-16T16:03:31.57487-07:00","updated_at":"2025-10-16T17:39:32.395095-07:00","closed_at":"2025-10-16T17:39:32.395095-07:00"}
{"id":"bd-406","title":"GH-3: Debug zsh killed error on bd init","description":"User reports 'zsh: killed bd init' when running bd init or just bd command. Likely a crash or signal. Need to reproduce and investigate cause.","notes":"Awaiting user feedback - cannot reproduce locally, waiting for user to provide more details about environment and error message","status":"blocked","priority":1,"issue_type":"bug","created_at":"2025-10-16T16:03:31.580261-07:00","updated_at":"2025-10-16T16:03:31.580261-07:00","external_ref":"gh-3"}
{"id":"bd-407","title":"Implement bd restore command","description":"Create new restore command that checks out git commit from compacted_at_commit, reads issue from JSONL, and displays full history","status":"open","priority":1,"issue_type":"feature","created_at":"2025-10-16T16:03:31.581845-07:00","updated_at":"2025-10-16T16:03:31.581845-07:00"}
{"id":"bd-408","title":"Add tests for git-based restoration","description":"Test compaction stores commit hash, restore command works, handles missing git repo gracefully","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-16T16:03:31.582488-07:00","updated_at":"2025-10-16T16:03:31.582488-07:00"}


@@ -114,7 +114,8 @@ func (c *Compactor) CompactTier1(ctx context.Context, issueID string) error {
return fmt.Errorf("failed to update issue: %w", err)
}
-if err := c.store.ApplyCompaction(ctx, issueID, 1, originalSize, compactedSize); err != nil {
+commitHash := GetCurrentCommitHash()
+if err := c.store.ApplyCompaction(ctx, issueID, 1, originalSize, compactedSize, commitHash); err != nil {
return fmt.Errorf("failed to set compaction level: %w", err)
}
@@ -257,7 +258,8 @@ func (c *Compactor) compactSingleWithResult(ctx context.Context, issueID string,
return fmt.Errorf("failed to update issue: %w", err)
}
-if err := c.store.ApplyCompaction(ctx, issueID, 1, result.OriginalSize, result.CompactedSize); err != nil {
+commitHash := GetCurrentCommitHash()
+if err := c.store.ApplyCompaction(ctx, issueID, 1, result.OriginalSize, result.CompactedSize, commitHash); err != nil {
return fmt.Errorf("failed to set compaction level: %w", err)
}

internal/compact/git.go Normal file

@@ -0,0 +1,21 @@
package compact
import (
"bytes"
"os/exec"
"strings"
)
// GetCurrentCommitHash returns the current git HEAD commit hash.
// Returns empty string if not in a git repository or if git command fails.
func GetCurrentCommitHash() string {
cmd := exec.Command("git", "rev-parse", "HEAD")
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return ""
}
return strings.TrimSpace(out.String())
}
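GetCurrentCommitHash runs in the process's working directory, so its result depends on where bd is invoked. A variant with an explicit directory makes the fallback behavior easy to demonstrate; the `dir` parameter is an illustrative addition, not part of the committed code:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// getCommitHashIn is a sketch of GetCurrentCommitHash with an explicit
// working directory. It returns "" if git is missing, dir is invalid,
// or dir is not inside a git repository — the same graceful fallback
// the real helper relies on.
func getCommitHashIn(dir string) string {
	cmd := exec.Command("git", "rev-parse", "HEAD")
	cmd.Dir = dir
	var out bytes.Buffer
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return ""
	}
	return strings.TrimSpace(out.String())
}

func main() {
	// An invalid directory exercises the error path deterministically.
	fmt.Println(getCommitHashIn("/definitely/not/a/real/dir") == "")
}
```

Because every failure mode collapses to the empty string, callers never need to distinguish "no git" from "not a repo"; the storage layer just maps "" to NULL.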


@@ -279,8 +279,8 @@ func (s *SQLiteStorage) CheckEligibility(ctx context.Context, issueID string, ti
}
// ApplyCompaction updates the compaction metadata for an issue after successfully compacting it.
-// This sets compaction_level, compacted_at, and original_size fields.
-func (s *SQLiteStorage) ApplyCompaction(ctx context.Context, issueID string, level int, originalSize int, compressedSize int) error {
+// This sets compaction_level, compacted_at, compacted_at_commit, and original_size fields.
+func (s *SQLiteStorage) ApplyCompaction(ctx context.Context, issueID string, level int, originalSize int, compressedSize int, commitHash string) error {
now := time.Now().UTC()
tx, err := s.db.BeginTx(ctx, nil)
@@ -289,14 +289,20 @@ func (s *SQLiteStorage) ApplyCompaction(ctx context.Context, issueID string, lev
}
defer tx.Rollback()
+var commitHashPtr *string
+if commitHash != "" {
+	commitHashPtr = &commitHash
+}
_, err = tx.ExecContext(ctx, `
UPDATE issues
SET compaction_level = ?,
compacted_at = ?,
+compacted_at_commit = ?,
original_size = ?,
updated_at = ?
WHERE id = ?
-`, level, now, originalSize, now, issueID)
+`, level, now, commitHashPtr, originalSize, now, issueID)
if err != nil {
return fmt.Errorf("failed to apply compaction metadata: %w", err)


@@ -333,18 +333,19 @@ func TestApplyCompaction(t *testing.T) {
}
originalSize := len(issue.Description)
-err := store.ApplyCompaction(ctx, issue.ID, 1, originalSize, 500)
+err := store.ApplyCompaction(ctx, issue.ID, 1, originalSize, 500, "abc123")
if err != nil {
t.Fatalf("ApplyCompaction failed: %v", err)
}
var compactionLevel int
var compactedAt sql.NullTime
+var compactedAtCommit sql.NullString
var storedSize int
err = store.db.QueryRowContext(ctx, `
-SELECT COALESCE(compaction_level, 0), compacted_at, COALESCE(original_size, 0)
+SELECT COALESCE(compaction_level, 0), compacted_at, compacted_at_commit, COALESCE(original_size, 0)
FROM issues WHERE id = ?
-`, issue.ID).Scan(&compactionLevel, &compactedAt, &storedSize)
+`, issue.ID).Scan(&compactionLevel, &compactedAt, &compactedAtCommit, &storedSize)
if err != nil {
t.Fatalf("Failed to query issue: %v", err)
}
@@ -355,6 +356,9 @@ func TestApplyCompaction(t *testing.T) {
if !compactedAt.Valid {
t.Error("Expected compacted_at to be set")
}
+if !compactedAtCommit.Valid || compactedAtCommit.String != "abc123" {
+	t.Errorf("Expected compacted_at_commit 'abc123', got %v", compactedAtCommit)
+}
if storedSize != originalSize {
t.Errorf("Expected original_size %d, got %d", originalSize, storedSize)
}


@@ -20,6 +20,7 @@ CREATE TABLE IF NOT EXISTS issues (
external_ref TEXT,
compaction_level INTEGER DEFAULT 0,
compacted_at DATETIME,
+compacted_at_commit TEXT,
original_size INTEGER,
CHECK ((status = 'closed') = (closed_at IS NOT NULL))
);


@@ -87,6 +87,11 @@ func New(path string) (*SQLiteStorage, error) {
return nil, fmt.Errorf("failed to migrate compaction config: %w", err)
}
+// Migrate existing databases to add compacted_at_commit column
+if err := migrateCompactedAtCommitColumn(db); err != nil {
+	return nil, fmt.Errorf("failed to migrate compacted_at_commit column: %w", err)
+}
return &SQLiteStorage{
db: db,
}, nil
@@ -401,6 +406,31 @@ func migrateCompactionConfig(db *sql.DB) error {
return nil
}
+// migrateCompactedAtCommitColumn adds compacted_at_commit column to the issues table.
+// This migration is idempotent and safe to run multiple times.
+func migrateCompactedAtCommitColumn(db *sql.DB) error {
+	var columnExists bool
+	err := db.QueryRow(`
+		SELECT COUNT(*) > 0
+		FROM pragma_table_info('issues')
+		WHERE name = 'compacted_at_commit'
+	`).Scan(&columnExists)
+	if err != nil {
+		return fmt.Errorf("failed to check compacted_at_commit column: %w", err)
+	}
+	if columnExists {
+		return nil
+	}
+	_, err = db.Exec(`ALTER TABLE issues ADD COLUMN compacted_at_commit TEXT`)
+	if err != nil {
+		return fmt.Errorf("failed to add compacted_at_commit column: %w", err)
+	}
+	return nil
+}
// getNextIDForPrefix atomically generates the next ID for a given prefix
// Uses the issue_counters table for atomic, cross-process ID generation
func (s *SQLiteStorage) getNextIDForPrefix(ctx context.Context, prefix string) (int, error) {
@@ -846,11 +876,12 @@ func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue,
var compactedAt sql.NullTime
var originalSize sql.NullInt64
+var compactedAtCommit sql.NullString
err := s.db.QueryRowContext(ctx, `
SELECT id, title, description, design, acceptance_criteria, notes,
status, priority, issue_type, assignee, estimated_minutes,
created_at, updated_at, closed_at, external_ref,
-compaction_level, compacted_at, original_size
+compaction_level, compacted_at, compacted_at_commit, original_size
FROM issues
WHERE id = ?
`, id).Scan(
@@ -858,7 +889,7 @@ func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue,
&issue.AcceptanceCriteria, &issue.Notes, &issue.Status,
&issue.Priority, &issue.IssueType, &assignee, &estimatedMinutes,
&issue.CreatedAt, &issue.UpdatedAt, &closedAt, &externalRef,
-&issue.CompactionLevel, &compactedAt, &originalSize,
+&issue.CompactionLevel, &compactedAt, &compactedAtCommit, &originalSize,
)
if err == sql.ErrNoRows {
@@ -884,6 +915,9 @@ func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue,
if compactedAt.Valid {
issue.CompactedAt = &compactedAt.Time
}
+if compactedAtCommit.Valid {
+	issue.CompactedAtCommit = &compactedAtCommit.String
+}
if originalSize.Valid {
issue.OriginalSize = int(originalSize.Int64)
}


@@ -25,6 +25,7 @@ type Issue struct {
ExternalRef *string `json:"external_ref,omitempty"` // e.g., "gh-9", "jira-ABC"
CompactionLevel int `json:"compaction_level,omitempty"`
CompactedAt *time.Time `json:"compacted_at,omitempty"`
+CompactedAtCommit *string `json:"compacted_at_commit,omitempty"` // Git commit hash when compacted
OriginalSize int `json:"original_size,omitempty"`
Labels []string `json:"labels,omitempty"` // Populated only for export/import
Dependencies []*Dependency `json:"dependencies,omitempty"` // Populated only for export/import