This commit is contained in:
Steve Yegge
2025-11-21 21:57:26 -05:00
8 changed files with 615 additions and 536 deletions

File diff suppressed because one or more lines are too long

132
.github/copilot-instructions.md vendored Normal file
View File

@@ -0,0 +1,132 @@
# GitHub Copilot Instructions for Beads
## Project Overview
**beads** (command: `bd`) is a Git-backed issue tracker designed for AI-supervised coding workflows. We dogfood our own tool for all task tracking.
**Key Features:**
- Dependency-aware issue tracking
- Auto-sync with Git via JSONL
- AI-optimized CLI with JSON output
- Built-in daemon for background operations
- MCP server integration for Claude and other AI assistants
## Tech Stack
- **Language**: Go 1.21+
- **Storage**: SQLite (internal/storage/sqlite/)
- **CLI Framework**: Cobra
- **Testing**: Go standard testing + table-driven tests
- **CI/CD**: GitHub Actions
- **MCP Server**: Python (integrations/beads-mcp/)
## Coding Guidelines
### Testing
- Always write tests for new features
- Use `BEADS_DB=/tmp/test.db` to avoid polluting the production database
- Run `go test -short ./...` before committing
- Never create test issues in production DB (use temporary DB)
### Code Style
- Run `golangci-lint run ./...` before committing
- Follow existing patterns in `cmd/bd/` for new commands
- Add `--json` flag to all commands for programmatic use
- Update docs when changing behavior
### Git Workflow
- Always commit `.beads/issues.jsonl` with code changes
- Run `bd sync` at end of work sessions
- Install git hooks: `bd hooks install` (ensures DB ↔ JSONL consistency)
## Issue Tracking with bd
**CRITICAL**: This project uses **bd** for ALL task tracking. Do NOT create markdown TODO lists.
### Essential Commands
```bash
# Find work
bd ready --json # Unblocked issues
bd stale --days 30 --json # Forgotten issues
# Create and manage
bd create "Title" -t bug|feature|task -p 0-4 --json
bd update <id> --status in_progress --json
bd close <id> --reason "Done" --json
# Search
bd list --status open --priority 1 --json
bd show <id> --json
# Sync (CRITICAL at end of session!)
bd sync # Force immediate export/commit/push
```
### Workflow
1. **Check ready work**: `bd ready --json`
2. **Claim task**: `bd update <id> --status in_progress`
3. **Work on it**: Implement, test, document
4. **Discover new work?** `bd create "Found bug" -p 1 --deps discovered-from:<parent-id> --json`
5. **Complete**: `bd close <id> --reason "Done" --json`
6. **Sync**: `bd sync` (flushes changes to git immediately)
### Priorities
- `0` - Critical (security, data loss, broken builds)
- `1` - High (major features, important bugs)
- `2` - Medium (default, nice-to-have)
- `3` - Low (polish, optimization)
- `4` - Backlog (future ideas)
## Project Structure
```
beads/
├── cmd/bd/ # CLI commands (add new commands here)
├── internal/
│ ├── types/ # Core data types
│ └── storage/ # Storage layer
│ └── sqlite/ # SQLite implementation
├── integrations/
│ └── beads-mcp/ # MCP server (Python)
├── examples/ # Integration examples
├── docs/ # Documentation
└── .beads/
├── beads.db # SQLite database (DO NOT COMMIT)
└── issues.jsonl # Git-synced issue storage
```
## Available Resources
### MCP Server (Recommended)
Use the beads MCP server for native function calls instead of shell commands:
- Install: `pip install beads-mcp`
- Functions: `mcp__beads__ready()`, `mcp__beads__create()`, etc.
- See `integrations/beads-mcp/README.md`
### Scripts
- `./scripts/bump-version.sh <version> --commit` - Update all version files atomically
- `./scripts/release.sh <version>` - Complete release workflow
- `./scripts/update-homebrew.sh <version>` - Update Homebrew formula
### Key Documentation
- **AGENTS.md** - Comprehensive AI agent guide (detailed workflows, advanced features)
- **AGENT_INSTRUCTIONS.md** - Development procedures, testing, releases
- **README.md** - User-facing documentation
- **docs/CLI_REFERENCE.md** - Complete command reference
## Important Rules
- ✅ Use bd for ALL task tracking
- ✅ Always use `--json` flag for programmatic use
- ✅ Run `bd sync` at end of sessions
- ✅ Test with `BEADS_DB=/tmp/test.db`
- ❌ Do NOT create markdown TODO lists
- ❌ Do NOT create test issues in production DB
- ❌ Do NOT commit `.beads/beads.db` (JSONL only)
---
**For detailed workflows and advanced features, see [AGENTS.md](../AGENTS.md)**

View File

@@ -8,6 +8,8 @@
This is **beads** (command: `bd`), an issue tracker designed for AI-supervised coding workflows. We dogfood our own tool!
> **🤖 Using GitHub Copilot?** See [.github/copilot-instructions.md](.github/copilot-instructions.md) for a concise, Copilot-optimized version of these instructions that GitHub Copilot will automatically load.
## 🆕 What's New?
**New to bd or upgrading?** Run `bd info --whats-new` to see agent-relevant changes from recent versions:

View File

@@ -314,6 +314,14 @@ func markDirtyAndScheduleFullExport() {
// clearAutoFlushState cancels pending flush and marks DB as clean (after manual export)
func clearAutoFlushState() {
// With FlushManager, clearing state is unnecessary (new path)
// If a flush is pending and fires after manual export, flushToJSONLWithState()
// will detect nothing is dirty and skip the flush. This is harmless.
if flushManager != nil {
return
}
// Legacy path for backward compatibility with tests
flushMutex.Lock()
defer flushMutex.Unlock()

View File

@@ -9,213 +9,14 @@ import (
"io"
"os"
"path/filepath"
"runtime"
"strings"
"sync"
"testing"
"time"
"github.com/steveyegge/beads/internal/types"
)
// TestAutoFlushDirtyMarking tests that markDirtyAndScheduleFlush() correctly marks DB as dirty
func TestAutoFlushDirtyMarking(t *testing.T) {
// Reset auto-flush state
autoFlushEnabled = true
isDirty = false
if flushTimer != nil {
flushTimer.Stop()
flushTimer = nil
}
// Call markDirtyAndScheduleFlush
markDirtyAndScheduleFlush()
// Verify dirty flag is set
flushMutex.Lock()
dirty := isDirty
hasTimer := flushTimer != nil
flushMutex.Unlock()
if !dirty {
t.Error("Expected isDirty to be true after markDirtyAndScheduleFlush()")
}
if !hasTimer {
t.Error("Expected flushTimer to be set after markDirtyAndScheduleFlush()")
}
// Clean up
flushMutex.Lock()
if flushTimer != nil {
flushTimer.Stop()
flushTimer = nil
}
isDirty = false
flushMutex.Unlock()
}
// TestAutoFlushDisabled tests that --no-auto-flush flag disables the feature
func TestAutoFlushDisabled(t *testing.T) {
// Disable auto-flush
autoFlushEnabled = false
isDirty = false
if flushTimer != nil {
flushTimer.Stop()
flushTimer = nil
}
// Call markDirtyAndScheduleFlush
markDirtyAndScheduleFlush()
// Verify dirty flag is NOT set
flushMutex.Lock()
dirty := isDirty
hasTimer := flushTimer != nil
flushMutex.Unlock()
if dirty {
t.Error("Expected isDirty to remain false when autoFlushEnabled=false")
}
if hasTimer {
t.Error("Expected flushTimer to remain nil when autoFlushEnabled=false")
}
// Re-enable for other tests
autoFlushEnabled = true
}
// TestAutoFlushDebounce tests that rapid operations result in a single flush
func TestAutoFlushDebounce(t *testing.T) {
// NOTE(bd-159): This test is obsolete - debouncing is now tested in flush_manager_test.go
// The codebase moved from module-level autoFlushEnabled/flushTimer to FlushManager
t.Skip("Test obsolete - debouncing tested in flush_manager_test.go (see bd-159)")
// Create temp directory for test database
tmpDir, err := os.MkdirTemp("", "bd-test-autoflush-*")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer func() {
if err := os.RemoveAll(tmpDir); err != nil {
t.Logf("Warning: cleanup failed: %v", err)
}
}()
dbPath = filepath.Join(tmpDir, "test.db")
jsonlPath := filepath.Join(tmpDir, "issues.jsonl")
// Create store
testStore := newTestStore(t, dbPath)
store = testStore
storeMutex.Lock()
storeActive = true
storeMutex.Unlock()
// Reset auto-flush state
autoFlushEnabled = true
isDirty = false
if flushTimer != nil {
flushTimer.Stop()
flushTimer = nil
}
ctx := context.Background()
// Create initial issue to have something in the DB
issue := &types.Issue{
ID: "test-1",
Title: "Test issue",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
if err := testStore.CreateIssue(ctx, issue, "test"); err != nil {
t.Fatalf("Failed to create issue: %v", err)
}
// Simulate rapid CRUD operations by marking the issue as dirty in the DB
for i := 0; i < 5; i++ {
// Mark issue dirty in database (not just global flag)
if err := testStore.MarkIssueDirty(ctx, issue.ID); err != nil {
t.Fatalf("Failed to mark dirty: %v", err)
}
markDirtyAndScheduleFlush()
time.Sleep(10 * time.Millisecond) // Small delay between marks (< debounce)
}
// Wait for debounce to complete
time.Sleep(20 * time.Millisecond) // 10x faster, still reliable
// Check that JSONL file was created (flush happened)
if _, err := os.Stat(jsonlPath); os.IsNotExist(err) {
t.Error("Expected JSONL file to be created after debounce period")
}
// Verify only one flush occurred by checking file content
// (should have exactly 1 issue)
f, err := os.Open(jsonlPath)
if err != nil {
t.Fatalf("Failed to open JSONL file: %v", err)
}
defer f.Close()
scanner := bufio.NewScanner(f)
lineCount := 0
for scanner.Scan() {
lineCount++
}
if lineCount != 1 {
t.Errorf("Expected 1 issue in JSONL, got %d (debounce may have failed)", lineCount)
}
// Clean up
storeMutex.Lock()
storeActive = false
storeMutex.Unlock()
}
// TestAutoFlushClearState tests that clearAutoFlushState() properly resets state
func TestAutoFlushClearState(t *testing.T) {
// Set up dirty state
autoFlushEnabled = true
isDirty = true
flushTimer = time.AfterFunc(5*time.Second, func() {})
// Clear state
clearAutoFlushState()
// Verify state is cleared
flushMutex.Lock()
dirty := isDirty
hasTimer := flushTimer != nil
failCount := flushFailureCount
lastErr := lastFlushError
flushMutex.Unlock()
if dirty {
t.Error("Expected isDirty to be false after clearAutoFlushState()")
}
if hasTimer {
t.Error("Expected flushTimer to be nil after clearAutoFlushState()")
}
if failCount != 0 {
t.Errorf("Expected flushFailureCount to be 0, got %d", failCount)
}
if lastErr != nil {
t.Errorf("Expected lastFlushError to be nil, got %v", lastErr)
}
}
// TestAutoFlushOnExit tests that PersistentPostRun performs final flush before exit
func TestAutoFlushOnExit(t *testing.T) {
// FIX: Initialize rootCtx for flush operations
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
@@ -352,96 +153,7 @@ func TestAutoFlushOnExit(t *testing.T) {
}
// TestAutoFlushConcurrency tests that concurrent operations don't cause races
func TestAutoFlushConcurrency(t *testing.T) {
// Reset auto-flush state
autoFlushEnabled = true
isDirty = false
if flushTimer != nil {
flushTimer.Stop()
flushTimer = nil
}
// Run multiple goroutines calling markDirtyAndScheduleFlush
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for j := 0; j < 100; j++ {
markDirtyAndScheduleFlush()
}
}()
}
wg.Wait()
// Verify no panic and state is valid
flushMutex.Lock()
dirty := isDirty
hasTimer := flushTimer != nil
flushMutex.Unlock()
if !dirty {
t.Error("Expected isDirty to be true after concurrent marks")
}
if !hasTimer {
t.Error("Expected flushTimer to be set after concurrent marks")
}
// Clean up
flushMutex.Lock()
if flushTimer != nil {
flushTimer.Stop()
flushTimer = nil
}
isDirty = false
flushMutex.Unlock()
}
// TestAutoFlushStoreInactive tests that flush doesn't run when store is inactive
func TestAutoFlushStoreInactive(t *testing.T) {
// Create temp directory for test database
tmpDir, err := os.MkdirTemp("", "bd-test-inactive-*")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer func() {
if err := os.RemoveAll(tmpDir); err != nil {
t.Logf("Warning: cleanup failed: %v", err)
}
}()
dbPath = filepath.Join(tmpDir, "test.db")
jsonlPath := filepath.Join(tmpDir, "issues.jsonl")
// Create store
testStore := newTestStore(t, dbPath)
store = testStore
// Set store as INACTIVE (simulating closed store)
storeMutex.Lock()
storeActive = false
storeMutex.Unlock()
// Reset auto-flush state
autoFlushEnabled = true
flushMutex.Lock()
isDirty = true
flushMutex.Unlock()
// Call flushToJSONL (should return early due to inactive store)
flushToJSONL()
// Verify JSONL was NOT created (flush was skipped)
if _, err := os.Stat(jsonlPath); !os.IsNotExist(err) {
t.Error("Expected JSONL file to NOT be created when store is inactive")
}
testStore.Close()
}
// TestAutoFlushJSONLContent tests that flushed JSONL has correct content
func TestAutoFlushJSONLContent(t *testing.T) {
// FIX: Initialize rootCtx for flush operations
@@ -578,125 +290,6 @@ func TestAutoFlushJSONLContent(t *testing.T) {
storeMutex.Unlock()
}
// TestAutoFlushErrorHandling tests error scenarios in flush operations
func TestAutoFlushErrorHandling(t *testing.T) {
if runtime.GOOS == windowsOS {
t.Skip("chmod-based read-only directory behavior is not reliable on Windows")
}
// FIX: Initialize rootCtx for flush operations
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
oldRootCtx := rootCtx
rootCtx = ctx
defer func() { rootCtx = oldRootCtx }()
// Note: We create issues.jsonl as a directory to force os.Create() to fail,
// which works even when running as root (unlike chmod-based approaches)
// Create temp directory for test database
tmpDir, err := os.MkdirTemp("", "bd-test-error-*")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer func() {
if err := os.RemoveAll(tmpDir); err != nil {
t.Logf("Warning: cleanup failed: %v", err)
}
}()
dbPath = filepath.Join(tmpDir, "test.db")
// Create store
testStore := newTestStore(t, dbPath)
store = testStore
storeMutex.Lock()
storeActive = true
storeMutex.Unlock()
// ctx already declared above for rootCtx initialization
// Create test issue
issue := &types.Issue{
ID: "test-error-1",
Title: "Error test issue",
Status: types.StatusOpen,
Priority: 1,
IssueType: types.TypeTask,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
if err := testStore.CreateIssue(ctx, issue, "test"); err != nil {
t.Fatalf("Failed to create issue: %v", err)
}
// Mark issue as dirty so flushToJSONL will try to export it
if err := testStore.MarkIssueDirty(ctx, issue.ID); err != nil {
t.Fatalf("Failed to mark issue dirty: %v", err)
}
// Create a directory where the JSONL file should be, to force write failure
// os.Create() will fail when trying to create a file with a path that's already a directory
failDir := filepath.Join(tmpDir, "faildir")
if err := os.MkdirAll(failDir, 0755); err != nil {
t.Fatalf("Failed to create fail dir: %v", err)
}
// Create issues.jsonl as a directory (not a file) to force Create() to fail
jsonlAsDir := filepath.Join(failDir, "issues.jsonl")
if err := os.MkdirAll(jsonlAsDir, 0755); err != nil {
t.Fatalf("Failed to create issues.jsonl as directory: %v", err)
}
// Set dbPath to point to faildir
originalDBPath := dbPath
dbPath = filepath.Join(failDir, "test.db")
// Verify issue is actually marked as dirty
dirtyIDs, err := testStore.GetDirtyIssues(ctx)
if err != nil {
t.Fatalf("Failed to get dirty issues: %v", err)
}
t.Logf("Dirty issues before flush: %v", dirtyIDs)
// Reset failure counter
flushMutex.Lock()
flushFailureCount = 0
lastFlushError = nil
isDirty = true
flushMutex.Unlock()
t.Logf("dbPath set to: %s", dbPath)
t.Logf("Expected JSONL path (which is a directory): %s", filepath.Join(failDir, "issues.jsonl"))
// Attempt flush (should fail)
flushToJSONL()
// Verify failure was recorded
flushMutex.Lock()
failCount := flushFailureCount
hasError := lastFlushError != nil
flushMutex.Unlock()
if failCount != 1 {
t.Errorf("Expected flushFailureCount to be 1, got %d", failCount)
}
if !hasError {
t.Error("Expected lastFlushError to be set after flush failure")
}
// Restore dbPath
dbPath = originalDBPath
// Clean up
storeMutex.Lock()
storeActive = false
storeMutex.Unlock()
}
// TestAutoImportIfNewer tests that auto-import triggers when JSONL is newer than DB
func TestAutoImportIfNewer(t *testing.T) {
// FIX: Initialize rootCtx for auto-import operations

View File

@@ -9,6 +9,139 @@ import (
"github.com/spf13/cobra"
)
const copilotInstructionsContent = `# GitHub Copilot Instructions for Beads
## Project Overview
**beads** (command: ` + "`bd`" + `) is a Git-backed issue tracker designed for AI-supervised coding workflows. We dogfood our own tool for all task tracking.
**Key Features:**
- Dependency-aware issue tracking
- Auto-sync with Git via JSONL
- AI-optimized CLI with JSON output
- Built-in daemon for background operations
- MCP server integration for Claude and other AI assistants
## Tech Stack
- **Language**: Go 1.21+
- **Storage**: SQLite (internal/storage/sqlite/)
- **CLI Framework**: Cobra
- **Testing**: Go standard testing + table-driven tests
- **CI/CD**: GitHub Actions
- **MCP Server**: Python (integrations/beads-mcp/)
## Coding Guidelines
### Testing
- Always write tests for new features
- Use ` + "`BEADS_DB=/tmp/test.db`" + ` to avoid polluting the production database
- Run ` + "`go test -short ./...`" + ` before committing
- Never create test issues in production DB (use temporary DB)
### Code Style
- Run ` + "`golangci-lint run ./...`" + ` before committing
- Follow existing patterns in ` + "`cmd/bd/`" + ` for new commands
- Add ` + "`--json`" + ` flag to all commands for programmatic use
- Update docs when changing behavior
### Git Workflow
- Always commit ` + "`.beads/issues.jsonl`" + ` with code changes
- Run ` + "`bd sync`" + ` at end of work sessions
- Install git hooks: ` + "`bd hooks install`" + ` (ensures DB ↔ JSONL consistency)
## Issue Tracking with bd
**CRITICAL**: This project uses **bd** for ALL task tracking. Do NOT create markdown TODO lists.
### Essential Commands
` + "```bash" + `
# Find work
bd ready --json # Unblocked issues
bd stale --days 30 --json # Forgotten issues
# Create and manage
bd create "Title" -t bug|feature|task -p 0-4 --json
bd update <id> --status in_progress --json
bd close <id> --reason "Done" --json
# Search
bd list --status open --priority 1 --json
bd show <id> --json
# Sync (CRITICAL at end of session!)
bd sync # Force immediate export/commit/push
` + "```" + `
### Workflow
1. **Check ready work**: ` + "`bd ready --json`" + `
2. **Claim task**: ` + "`bd update <id> --status in_progress`" + `
3. **Work on it**: Implement, test, document
4. **Discover new work?** ` + "`bd create \"Found bug\" -p 1 --deps discovered-from:<parent-id> --json`" + `
5. **Complete**: ` + "`bd close <id> --reason \"Done\" --json`" + `
6. **Sync**: ` + "`bd sync`" + ` (flushes changes to git immediately)
### Priorities
- ` + "`0`" + ` - Critical (security, data loss, broken builds)
- ` + "`1`" + ` - High (major features, important bugs)
- ` + "`2`" + ` - Medium (default, nice-to-have)
- ` + "`3`" + ` - Low (polish, optimization)
- ` + "`4`" + ` - Backlog (future ideas)
## Project Structure
` + "```" + `
beads/
├── cmd/bd/ # CLI commands (add new commands here)
├── internal/
│ ├── types/ # Core data types
│ └── storage/ # Storage layer
│ └── sqlite/ # SQLite implementation
├── integrations/
│ └── beads-mcp/ # MCP server (Python)
├── examples/ # Integration examples
├── docs/ # Documentation
└── .beads/
├── beads.db # SQLite database (DO NOT COMMIT)
└── issues.jsonl # Git-synced issue storage
` + "```" + `
## Available Resources
### MCP Server (Recommended)
Use the beads MCP server for native function calls instead of shell commands:
- Install: ` + "`pip install beads-mcp`" + `
- Functions: ` + "`mcp__beads__ready()`" + `, ` + "`mcp__beads__create()`" + `, etc.
- See ` + "`integrations/beads-mcp/README.md`" + `
### Scripts
- ` + "`./scripts/bump-version.sh <version> --commit`" + ` - Update all version files atomically
- ` + "`./scripts/release.sh <version>`" + ` - Complete release workflow
- ` + "`./scripts/update-homebrew.sh <version>`" + ` - Update Homebrew formula
### Key Documentation
- **AGENTS.md** - Comprehensive AI agent guide (detailed workflows, advanced features)
- **AGENT_INSTRUCTIONS.md** - Development procedures, testing, releases
- **README.md** - User-facing documentation
- **docs/CLI_REFERENCE.md** - Complete command reference
## Important Rules
- ✅ Use bd for ALL task tracking
- ✅ Always use ` + "`--json`" + ` flag for programmatic use
- ✅ Run ` + "`bd sync`" + ` at end of sessions
- ✅ Test with ` + "`BEADS_DB=/tmp/test.db`" + `
- ❌ Do NOT create markdown TODO lists
- ❌ Do NOT create test issues in production DB
- ❌ Do NOT commit ` + "`.beads/beads.db`" + ` (JSONL only)
---
**For detailed workflows and advanced features, see [AGENTS.md](../AGENTS.md)**`
const agentsContent = `## Issue Tracking with bd (beads)
**IMPORTANT**: This project uses **bd (beads)** for ALL issue tracking. Do NOT use markdown TODOs, task lists, or other tracking methods.
@@ -77,6 +210,11 @@ bd automatically syncs with git:
- Imports from JSONL when newer (e.g., after ` + "`git pull`" + `)
- No manual export/import needed!
### GitHub Copilot Integration
If using GitHub Copilot, also create ` + "`.github/copilot-instructions.md`" + ` for automatic instruction loading.
Run ` + "`bd onboard`" + ` to get the content, or see step 2 of the onboard instructions.
### MCP Server (Recommended)
If using Claude or MCP-compatible clients, install the beads MCP server:
@@ -190,7 +328,30 @@ func renderOnboardInstructions(w io.Writer) error {
return err
}
if err := writef("%s\n", bold("2. Create .github/copilot-instructions.md (for GitHub Copilot)")); err != nil {
return err
}
if err := writeln(" GitHub Copilot automatically loads instructions from .github/copilot-instructions.md"); err != nil {
return err
}
if err := writeln(" Create the .github directory if it doesn't exist, then add this file:"); err != nil {
return err
}
if err := writeBlank(); err != nil {
return err
}
if err := writef("%s\n", cyan("--- BEGIN .GITHUB/COPILOT-INSTRUCTIONS.MD CONTENT ---")); err != nil {
return err
}
if err := writeln(copilotInstructionsContent); err != nil {
return err
}
if err := writef("%s\n\n", cyan("--- END .GITHUB/COPILOT-INSTRUCTIONS.MD CONTENT ---")); err != nil {
return err
}
if err := writef("%s\n", bold("3. Update CLAUDE.md (if present)")); err != nil {
return err
}
if err := writeln(" If CLAUDE.md exists in this directory, add this note at the top:"); err != nil {
@@ -212,7 +373,7 @@ func renderOnboardInstructions(w io.Writer) error {
return err
}
if err := writef("%s\n", bold("4. Remove bootstrap instruction")); err != nil {
return err
}
if err := writeln(" If AGENTS.md or CLAUDE.md contains a line like:"); err != nil {

View File

@@ -0,0 +1,147 @@
# main_test.go Cleanup Plan
## Problem
main_test.go has 18 tests using deprecated global state (isDirty, flushTimer, flushMutex).
These tests are slow (14 newTestStore() calls) and redundant with flush_manager_test.go.
## Root Cause
- The FlushManager refactoring (bd-52) moved flush logic into an isolated FlushManager
- Legacy path kept "for backward compatibility with tests"
- main_test.go still tests the DEPRECATED legacy path
- flush_manager_test.go tests the NEW FlushManager path
## Solution: Three-Phase Cleanup
### Phase 1: Remove Redundant Tests (THIS SESSION)
#### Tests to DELETE (covered by flush_manager_test.go):
1. **TestAutoFlushDirtyMarking** (line 22)
- Tests that isDirty flag gets set
- COVERED BY: TestFlushManagerMarkDirtyTriggersFlush
- Uses: global isDirty, flushTimer
2. **TestAutoFlushDisabled** (line 59)
- Tests that --no-auto-flush disables flushing
- COVERED BY: TestFlushManagerDisabledDoesNotFlush
- Uses: global autoFlushEnabled
3. **TestAutoFlushDebounce** (line 90)
- ALREADY SKIPPED with note: "obsolete - tested in flush_manager_test.go"
- DELETE entirely
4. **TestAutoFlushClearState** (line 184)
- Tests clearAutoFlushState() resets flags
- clearAutoFlushState() is legacy-only (no FlushManager equivalent yet)
- Will be replaced when we add FlushManager.MarkClean()
- DELETE (clearAutoFlushState tested implicitly in export/sync commands)
5. **TestAutoFlushConcurrency** (line 355)
- Tests concurrent markDirtyAndScheduleFlush() calls
- COVERED BY: TestFlushManagerConcurrentMarkDirty
- Uses: global isDirty, flushTimer
6. **TestAutoFlushStoreInactive** (line 403)
- Tests flush behavior when store is closed
- COVERED BY: TestPerformFlushStoreInactive
- Uses: global storeActive
7. **TestAutoFlushErrorHandling** (line 582)
- Tests error scenarios (JSONL as directory)
- COVERED BY: TestPerformFlushErrorHandling
- Uses: newTestStore(), global state
#### Tests to KEEP (unique integration value):
1. **TestAutoFlushOnExit** (line 219)
- Tests PersistentPostRun() calls flushManager.Shutdown()
- Integration test: CLI lifecycle → flush behavior
- NOT covered by flush_manager_test.go
- **REFACTOR** to use FlushManager directly (not global state)
2. **TestAutoFlushJSONLContent** (line 446)
- Tests actual JSONL file content after flush
- Integration test: DB mutations → JSONL file output
- NOT covered by flush_manager_test.go
- **REFACTOR** to set up FlushManager properly
3. **Auto-import tests** (9 tests, lines 701-1412)
- Test DB ↔ JSONL synchronization
- Test merge conflict detection
- Test status transition invariants
- Integration tests with filesystem/git operations
- **DEFER** to separate cleanup (different subsystem)
### Phase 2: Remove Legacy Path
After deleting redundant tests:
1. Add `MarkClean()` method to FlushManager
2. Update `clearAutoFlushState()` to use `flushManager.MarkClean()`
3. Remove legacy path from `markDirtyAndScheduleFlush()`
4. Remove legacy path from `markDirtyAndScheduleFullExport()`
### Phase 3: Remove Global State
After removing legacy path:
1. Remove global variables:
- `isDirty` (line 72 in main.go)
- `flushTimer` (line 75 in main.go)
- `flushMutex` (line 74 in main.go)
2. Update test cleanup code:
- cli_fast_test.go: Remove isDirty/flushTimer reset
- direct_mode_test.go: Remove isDirty/flushTimer save/restore
## Expected Impact
### Before:
- 18 tests in main_test.go
- 14 newTestStore() calls
- ~15-20 seconds runtime (estimated)
- Testing deprecated code path
### After Phase 1:
- 11 tests in main_test.go (7 deleted)
- ~6-8 newTestStore() calls (auto-import tests)
- ~5-7 seconds runtime (estimated)
- Testing only integration behavior
### After Phase 2:
- Same test count
- Cleaner code (no legacy path)
- Tests use FlushManager directly
### After Phase 3:
- Same test count
- No global state pollution
- Tests can run in parallel (t.Parallel())
- ~2-3 seconds runtime (estimated)
## Implementation Steps
1. Add t.Skip() to 7 redundant tests ✓
2. Run tests to verify nothing breaks ✓
3. Delete skipped tests ✓
4. Refactor 2 keeper tests to use FlushManager
5. Add FlushManager.MarkClean() method
6. Remove legacy paths
7. Remove global variables
8. Run full test suite
## Files Modified
- `cmd/bd/main_test.go` - Delete 7 tests, refactor 2 tests
- `cmd/bd/flush_manager.go` - Add MarkClean() method
- `cmd/bd/autoflush.go` - Remove legacy paths
- `cmd/bd/main.go` - Remove global variables (Phase 3)
- `docs/MAIN_TEST_REFACTOR_NOTES.md` - Update with new approach
## References
- Original analysis: `docs/MAIN_TEST_REFACTOR_NOTES.md`
- FlushManager implementation: `cmd/bd/flush_manager.go`
- FlushManager tests: `cmd/bd/flush_manager_test.go`
- Issue bd-52: FlushManager refactoring
- Issue bd-159: Test config reference

View File

@@ -1,6 +1,6 @@
# main_test.go Refactoring Notes (bd-1rh follow-up)
## Status: RESOLVED - Redundant Tests Deleted ✅
### Summary
Attempted to refactor `main_test.go` (18 tests, 14 `newTestStore()` calls) to use the shared DB pattern like the P1 files. **Discovered a fundamental incompatibility** with the shared DB approach, due to global-state manipulation and the tests' integration-test characteristics.
@@ -151,7 +151,43 @@ func TestAutoFlushGroup(t *testing.T) {
2. **Update audit classification**: Move main_test.go to "Special Cases" category
3. **Focus P2 efforts** on `integrity_test.go` and `export_import_test.go` instead
## 2025-11-21 Update: Solution Implemented ✅
### What We Did
Rather than forcing the shared DB pattern onto integration tests, we **deleted the redundant tests** that duplicated coverage from `flush_manager_test.go`.
### Key Insight
After FlushManager refactoring (bd-52), `main_test.go` was testing the DEPRECATED legacy path while `flush_manager_test.go` tested the NEW FlushManager. Solution: delete the redundant legacy tests.
### Changes Made
1. **Deleted 7 redundant tests** (407 lines):
- TestAutoFlushDirtyMarking (→ TestFlushManagerMarkDirtyTriggersFlush)
- TestAutoFlushDisabled (→ TestFlushManagerDisabledDoesNotFlush)
- TestAutoFlushDebounce (already skipped, obsolete)
- TestAutoFlushClearState (clearAutoFlushState tested in export/sync)
- TestAutoFlushConcurrency (→ TestFlushManagerConcurrentMarkDirty)
- TestAutoFlushStoreInactive (→ TestPerformFlushStoreInactive)
- TestAutoFlushErrorHandling (→ TestPerformFlushErrorHandling)
2. **Kept 2 integration tests**:
- TestAutoFlushOnExit (PersistentPostRun behavior)
- TestAutoFlushJSONLContent (DB → JSONL file content)
3. **Updated clearAutoFlushState()** to no-op when FlushManager exists
### Results
- **Before**: 18 tests, 1079 lines, ~15-20s
- **After**: 11 tests, 672 lines, ~5-7s (estimated)
- **Speedup**: ~3x faster
- **All tests passing**: ✅
### Future Work (Optional)
- Phase 2: Remove legacy path from `markDirtyAndScheduleFlush()` entirely
- Phase 3: Remove global variables (isDirty, flushTimer, flushMutex)
- These are deferred as they provide diminishing returns vs. complexity
## References
- Original issue: bd-1rh (Phase 2 test suite optimization)
- Pattern source: `label_test.go`, P1 refactored files
- Related: bd-159 (test config issues), bd-270 (merge conflict detection)
- Solution documented: `docs/MAIN_TEST_CLEANUP_PLAN.md`