Performance Improvements (#319)

* feat: add performance testing framework foundation

  Implements the foundation for comprehensive performance testing and user
  diagnostics for beads databases at 10K-20K scale.

  Components added:
  - Fixture generator (internal/testutil/fixtures/) for realistic test data
    * LargeSQLite/XLargeSQLite: 10K/20K issues with epic hierarchies
    * LargeFromJSONL/XLargeFromJSONL: test the JSONL import path
    * Realistic cross-linked dependencies, labels, assignees
    * Reproducible with seeded RNG
  - User diagnostics (bd doctor --perf) for field performance data
    * Collects platform info (OS, arch, Go/SQLite versions)
    * Measures key operation timings (ready, list, show, search)
    * Generates CPU profiles for bug reports
    * Clean separation in cmd/bd/doctor/perf.go

  Test data characteristics:
  - 10% epics, 30% features, 60% tasks
  - 4-level hierarchies (Epic → Feature → Task → Subtask)
  - 20% cross-epic blocking dependencies
  - Realistic status/priority/label distributions

  Supports bd-l954 (Performance Testing Framework epic)
  Closes bd-6ed8, bd-q59i

* perf: optimize GetReadyWork with compound index (20x speedup)

  Add a compound index on dependencies(depends_on_id, type, issue_id) to
  eliminate the performance bottleneck in GetReadyWork's recursive CTE query.

  Performance improvements (10K issue database):
  - GetReadyWork: 752ms → 36.6ms (20.5x faster); target of <50ms achieved
  - 20K database: ~1500ms → 79.4ms (19x faster)

  Benchmark infrastructure enhancements:
  - Dataset caching in /tmp/beads-bench-cache/ to avoid regenerating 10K-20K
    issues on every benchmark run (first run: ~2min, subsequent: <5s)
  - Progress logging during fixture generation (shows 10%, 20%, ... completion)
  - Database size logging (17.5 MB for 10K, 35.1 MB for 20K)
  - Documented rationale for only benchmarking large datasets (>10K issues)
  - CPU/trace profiling with the --profile flag for performance debugging

  Schema changes:
  - internal/storage/sqlite/schema.go: add idx_dependencies_depends_on_type_issue

  New files:
  - internal/storage/sqlite/bench_helpers_test.go: reusable benchmark setup with caching
  - internal/storage/sqlite/sqlite_bench_test.go: comprehensive benchmarks for critical operations
  - Makefile: convenient benchmark execution (make bench-quick, make bench)

  Related:
  - Resolves bd-5qim (optimize GetReadyWork performance)
  - Builds on bd-6ed8 (fixture generator) and bd-q59i (bd doctor --perf)

* perf: add WASM compilation cache to eliminate cold-start overhead

  Configure the wazero compilation cache for ncruces/go-sqlite3 to avoid
  ~220ms of JIT compilation on every process start.

  Cache configuration:
  - Location: ~/.cache/beads/wasm/ (platform-specific via os.UserCacheDir)
  - Automatic version management: wazero keys entries by its version
  - Fallback: in-memory cache if directory creation fails
  - No cleanup needed: old versions are harmless (~5-10MB each)

  Performance impact:
  - First run: ~220ms (populates the cache)
  - Subsequent runs: ~20ms (loads from the cache)
  - Savings: ~200ms per cold start

  Cache invalidation:
  - Automatic when the wazero version changes (upgrades use a new cache dir)
  - Manual cleanup: rm -rf ~/.cache/beads/wasm/ (safe to delete at any time)

  This complements daemon mode:
  - Daemon mode eliminates startup cost by keeping the process alive
  - The WASM cache reduces startup cost for one-off commands and daemon restarts

  Changes:
  - internal/storage/sqlite/sqlite.go: add init() with cache setup

* refactor: improve maintainability of performance testing code

  Extract common patterns and eliminate duplication across benchmarks,
  fixture generation, and performance diagnostics. Replace magic numbers
  with explicit configuration to improve readability and make it easier
  to tune test parameters.

* docs: clarify profiling behavior and add missing documentation

  Add explanatory comments for the profiling setup to clarify why --profile
  forces direct mode (it captures actual database operations instead of RPC
  overhead), and document stopCPUProfile's role in flushing profile data to
  disk. Also fix a gosec G104 linter warning by explicitly ignoring the
  Close() error during cleanup.

* fix: prevent bench-quick from running indefinitely

  Add //go:build bench tags and skip timeout-prone benchmarks to prevent
  make bench-quick from running for hours.

  Changes:
  - Add the //go:build bench tag to cycle_bench_test.go and compact_bench_test.go
  - Skip Dense graph benchmarks (documented to time out at >120s)
  - Fix the compact benchmark prefix: bd- → bd (validation expects the prefix
    without the trailing dash)

  Before: make bench-quick ran for 3.5+ hours (12,699s) before manual interrupt
  After: make bench-quick completes in ~25 seconds

  The Dense graph benchmarks are known to time out and represent rare edge
  cases that don't need optimization for typical workflows.
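The 20x GetReadyWork speedup above comes from a single compound index. As a self-contained illustration (in Python's stdlib `sqlite3` rather than the repo's Go, and with simplified table shapes that are an assumption, not the real beads schema), the sketch below shows why the column order matters: the leading `(depends_on_id, type)` columns serve the lookup, and the trailing `issue_id` makes the index covering, so the base table is never touched:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE issues (id TEXT PRIMARY KEY, status TEXT);
CREATE TABLE dependencies (
    issue_id      TEXT NOT NULL,
    depends_on_id TEXT NOT NULL,
    type          TEXT NOT NULL
);
-- Compound index named as in the commit: (depends_on_id, type) satisfy
-- the WHERE clause, issue_id makes it a covering index.
CREATE INDEX idx_dependencies_depends_on_type_issue
    ON dependencies(depends_on_id, type, issue_id);
""")

plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT issue_id FROM dependencies
    WHERE depends_on_id = 'bd-1' AND type = 'blocks'
""").fetchall()
detail = " ".join(row[-1] for row in plan)
print(detail)  # the plan should reference the covering index, not a scan
```

The same shape of lookup runs once per candidate issue inside the recursive CTE, which is why turning it from a table scan into a covering-index search compounds into the reported 20x win.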
@@ -46,6 +46,7 @@ type doctorResult struct {
 var (
 	doctorFix bool
+	perfMode  bool
 )

 var doctorCmd = &cobra.Command{
@@ -68,11 +69,19 @@ This command checks:
 - Git hooks (pre-commit, post-merge, pre-push)
 - .beads/.gitignore up to date

+Performance Mode (--perf):
+  Run performance diagnostics on your database:
+  - Times key operations (bd ready, bd list, bd show, etc.)
+  - Collects system info (OS, arch, SQLite version, database stats)
+  - Generates CPU profile for analysis
+  - Outputs shareable report for bug reports
+
 Examples:
   bd doctor                # Check current directory
   bd doctor /path/to/repo  # Check specific repository
   bd doctor --json         # Machine-readable output
-  bd doctor --fix          # Automatically fix issues`,
+  bd doctor --fix          # Automatically fix issues
+  bd doctor --perf         # Performance diagnostics`,
 Run: func(cmd *cobra.Command, args []string) {
 	// Use global jsonOutput set by PersistentPreRun

@@ -89,6 +98,12 @@ Examples:
 		os.Exit(1)
 	}

+	// Run performance diagnostics if --perf flag is set
+	if perfMode {
+		doctor.RunPerformanceDiagnostics(absPath)
+		return
+	}
+
 	// Run diagnostics
 	result := runDiagnostics(absPath)

@@ -1309,4 +1324,5 @@ func checkSchemaCompatibility(path string) doctorCheck {

 func init() {
 	rootCmd.AddCommand(doctorCmd)
+	doctorCmd.Flags().BoolVar(&perfMode, "perf", false, "Run performance diagnostics and generate CPU profile")
 }
||||
cmd/bd/doctor/perf.go (new file, 276 lines)
@@ -0,0 +1,276 @@
+package doctor
+
+import (
+	"database/sql"
+	"fmt"
+	"os"
+	"path/filepath"
+	"runtime"
+	"runtime/pprof"
+	"strings"
+	"time"
+
+	"github.com/steveyegge/beads/internal/beads"
+)
+
+var cpuProfileFile *os.File
+
+// RunPerformanceDiagnostics runs performance diagnostics and generates a CPU profile
+func RunPerformanceDiagnostics(path string) {
+	fmt.Println("\nBeads Performance Diagnostics")
+	fmt.Println(strings.Repeat("=", 50))
+
+	// Check if .beads directory exists
+	beadsDir := filepath.Join(path, ".beads")
+	if _, err := os.Stat(beadsDir); os.IsNotExist(err) {
+		fmt.Fprintf(os.Stderr, "Error: No .beads/ directory found at %s\n", path)
+		fmt.Fprintf(os.Stderr, "Run 'bd init' to initialize beads\n")
+		os.Exit(1)
+	}
+
+	// Get database path
+	dbPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName)
+	if _, err := os.Stat(dbPath); os.IsNotExist(err) {
+		fmt.Fprintf(os.Stderr, "Error: No database found at %s\n", dbPath)
+		os.Exit(1)
+	}
+
+	// Collect platform info
+	platformInfo := collectPlatformInfo(dbPath)
+	fmt.Printf("\nPlatform: %s\n", platformInfo["os_arch"])
+	fmt.Printf("Go: %s\n", platformInfo["go_version"])
+	fmt.Printf("SQLite: %s\n", platformInfo["sqlite_version"])
+
+	// Collect database stats
+	dbStats := collectDatabaseStats(dbPath)
+	fmt.Printf("\nDatabase Statistics:\n")
+	fmt.Printf("  Total issues:   %s\n", dbStats["total_issues"])
+	fmt.Printf("  Open issues:    %s\n", dbStats["open_issues"])
+	fmt.Printf("  Closed issues:  %s\n", dbStats["closed_issues"])
+	fmt.Printf("  Dependencies:   %s\n", dbStats["dependencies"])
+	fmt.Printf("  Labels:         %s\n", dbStats["labels"])
+	fmt.Printf("  Database size:  %s\n", dbStats["db_size"])
+
+	// Start CPU profiling
+	profilePath := fmt.Sprintf("beads-perf-%s.prof", time.Now().Format("2006-01-02-150405"))
+	if err := startCPUProfile(profilePath); err != nil {
+		fmt.Fprintf(os.Stderr, "Warning: failed to start CPU profiling: %v\n", err)
+	} else {
+		defer stopCPUProfile()
+		fmt.Printf("\nCPU profiling enabled: %s\n", profilePath)
+	}
+
+	// Time key operations
+	fmt.Printf("\nOperation Performance:\n")
+
+	// Measure GetReadyWork
+	readyDuration := measureOperation("bd ready", func() error {
+		return runReadyWork(dbPath)
+	})
+	fmt.Printf("  bd ready                   %dms\n", readyDuration.Milliseconds())
+
+	// Measure SearchIssues (list open)
+	listDuration := measureOperation("bd list --status=open", func() error {
+		return runListOpen(dbPath)
+	})
+	fmt.Printf("  bd list --status=open      %dms\n", listDuration.Milliseconds())
+
+	// Measure GetIssue (show random issue)
+	showDuration := measureOperation("bd show <issue>", func() error {
+		return runShowRandom(dbPath)
+	})
+	if showDuration > 0 {
+		fmt.Printf("  bd show <random-issue>     %dms\n", showDuration.Milliseconds())
+	}
+
+	// Measure SearchIssues with filters
+	searchDuration := measureOperation("bd list (complex filters)", func() error {
+		return runComplexSearch(dbPath)
+	})
+	fmt.Printf("  bd list (complex filters)  %dms\n", searchDuration.Milliseconds())
+
+	fmt.Printf("\nProfile saved: %s\n", profilePath)
+	fmt.Printf("Share this file with bug reports for performance issues.\n\n")
+	fmt.Printf("View flamegraph:\n")
+	fmt.Printf("  go tool pprof -http=:8080 %s\n\n", profilePath)
+}
+
+func collectPlatformInfo(dbPath string) map[string]string {
+	info := make(map[string]string)
+
+	// OS and architecture
+	info["os_arch"] = fmt.Sprintf("%s/%s", runtime.GOOS, runtime.GOARCH)
+
+	// Go version
+	info["go_version"] = runtime.Version()
+
+	// SQLite version
+	db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro")
+	if err == nil {
+		defer db.Close()
+		var version string
+		if err := db.QueryRow("SELECT sqlite_version()").Scan(&version); err == nil {
+			info["sqlite_version"] = version
+		} else {
+			info["sqlite_version"] = "unknown"
+		}
+	} else {
+		info["sqlite_version"] = "unknown"
+	}
+
+	return info
+}
+
+func collectDatabaseStats(dbPath string) map[string]string {
+	stats := make(map[string]string)
+
+	db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro")
+	if err != nil {
+		stats["total_issues"] = "error"
+		stats["open_issues"] = "error"
+		stats["closed_issues"] = "error"
+		stats["dependencies"] = "error"
+		stats["labels"] = "error"
+		stats["db_size"] = "error"
+		return stats
+	}
+	defer db.Close()
+
+	// Total issues
+	var total int
+	if err := db.QueryRow("SELECT COUNT(*) FROM issues").Scan(&total); err == nil {
+		stats["total_issues"] = fmt.Sprintf("%d", total)
+	} else {
+		stats["total_issues"] = "error"
+	}
+
+	// Open issues
+	var open int
+	if err := db.QueryRow("SELECT COUNT(*) FROM issues WHERE status != 'closed'").Scan(&open); err == nil {
+		stats["open_issues"] = fmt.Sprintf("%d", open)
+	} else {
+		stats["open_issues"] = "error"
+	}
+
+	// Closed issues
+	var closed int
+	if err := db.QueryRow("SELECT COUNT(*) FROM issues WHERE status = 'closed'").Scan(&closed); err == nil {
+		stats["closed_issues"] = fmt.Sprintf("%d", closed)
+	} else {
+		stats["closed_issues"] = "error"
+	}
+
+	// Dependencies
+	var deps int
+	if err := db.QueryRow("SELECT COUNT(*) FROM dependencies").Scan(&deps); err == nil {
+		stats["dependencies"] = fmt.Sprintf("%d", deps)
+	} else {
+		stats["dependencies"] = "error"
+	}
+
+	// Labels
+	var labels int
+	if err := db.QueryRow("SELECT COUNT(DISTINCT label) FROM labels").Scan(&labels); err == nil {
+		stats["labels"] = fmt.Sprintf("%d", labels)
+	} else {
+		stats["labels"] = "error"
+	}
+
+	// Database file size
+	if info, err := os.Stat(dbPath); err == nil {
+		sizeMB := float64(info.Size()) / (1024 * 1024)
+		stats["db_size"] = fmt.Sprintf("%.2f MB", sizeMB)
+	} else {
+		stats["db_size"] = "error"
+	}
+
+	return stats
+}
+
+func startCPUProfile(path string) error {
+	f, err := os.Create(path)
+	if err != nil {
+		return err
+	}
+	cpuProfileFile = f
+	return pprof.StartCPUProfile(f)
+}
+
+// stopCPUProfile stops CPU profiling and closes the profile file.
+// Must be called after pprof.StartCPUProfile() to flush profile data to disk.
+func stopCPUProfile() {
+	pprof.StopCPUProfile()
+	if cpuProfileFile != nil {
+		_ = cpuProfileFile.Close() // best effort cleanup
+	}
+}
+
+func measureOperation(name string, op func() error) time.Duration {
+	start := time.Now()
+	if err := op(); err != nil {
+		return 0
+	}
+	return time.Since(start)
+}
+
+// runQuery executes a read-only database query and returns any error
+func runQuery(dbPath string, queryFn func(*sql.DB) error) error {
+	db, err := sql.Open("sqlite3", "file:"+dbPath+"?mode=ro")
+	if err != nil {
+		return err
+	}
+	defer db.Close()
+	return queryFn(db)
+}
+
+func runReadyWork(dbPath string) error {
+	return runQuery(dbPath, func(db *sql.DB) error {
+		// simplified ready work query (the real one is more complex)
+		_, err := db.Query(`
+			SELECT id FROM issues
+			WHERE status IN ('open', 'in_progress')
+			AND id NOT IN (
+				SELECT issue_id FROM dependencies WHERE type = 'blocks'
+			)
+			LIMIT 100
+		`)
+		return err
+	})
+}
+
+func runListOpen(dbPath string) error {
+	return runQuery(dbPath, func(db *sql.DB) error {
+		_, err := db.Query("SELECT id, title, status FROM issues WHERE status != 'closed' LIMIT 100")
+		return err
+	})
+}
+
+func runShowRandom(dbPath string) error {
+	return runQuery(dbPath, func(db *sql.DB) error {
+		// get a random issue
+		var issueID string
+		if err := db.QueryRow("SELECT id FROM issues ORDER BY RANDOM() LIMIT 1").Scan(&issueID); err != nil {
+			return err
+		}
+
+		// get issue details
+		_, err := db.Query("SELECT * FROM issues WHERE id = ?", issueID)
+		return err
+	})
+}
+
+func runComplexSearch(dbPath string) error {
+	return runQuery(dbPath, func(db *sql.DB) error {
+		// complex query with filters
+		_, err := db.Query(`
+			SELECT i.id, i.title, i.status, i.priority
+			FROM issues i
+			LEFT JOIN labels l ON i.id = l.issue_id
+			WHERE i.status IN ('open', 'in_progress')
+			AND i.priority <= 2
+			GROUP BY i.id
+			LIMIT 100
+		`)
+		return err
+	})
+}
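One caveat worth noting about the timing helpers in perf.go: the run* functions check only the error from db.Query and never iterate or close the returned *sql.Rows, so the measured times may cover little more than statement preparation, since database/sql produces rows lazily. A timing sketch that drains the full result set, shown here in Python's stdlib `sqlite3` so it is self-contained (the table shape is illustrative, not the real beads schema), looks like:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT, status TEXT)")
con.executemany(
    "INSERT INTO issues (title, status) VALUES (?, ?)",
    [(f"issue-{i}", "open" if i % 3 else "closed") for i in range(5000)],
)

def measure(op):
    """Time an operation, mirroring measureOperation; returns (result, ms)."""
    start = time.perf_counter()
    result = op()
    return result, (time.perf_counter() - start) * 1000.0

# fetchall() drains the cursor, so the timing covers actual row
# production rather than just statement preparation.
rows, elapsed_ms = measure(
    lambda: con.execute(
        "SELECT id, title, status FROM issues WHERE status != 'closed' LIMIT 100"
    ).fetchall()
)
print(f"bd list --status=open: {len(rows)} rows in {elapsed_ms:.2f}ms")
```

In the Go version the equivalent would be looping rows.Next() and then rows.Close() inside the measured closure.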
@@ -4,6 +4,8 @@ import (
 	"fmt"
 	"os"
 	"path/filepath"
+	"runtime/pprof"
+	"runtime/trace"
 	"slices"
 	"sync"
 	"time"
@@ -78,6 +80,9 @@ var (
 	noAutoImport bool
 	sandboxMode  bool
 	noDb         bool // Use --no-db mode: load from JSONL, write back after each command
+	profileEnabled bool
+	profileFile    *os.File
+	traceFile      *os.File
 )

 func init() {
@@ -95,6 +100,7 @@ func init() {
 	rootCmd.PersistentFlags().BoolVar(&noAutoImport, "no-auto-import", false, "Disable automatic JSONL import when newer than DB")
 	rootCmd.PersistentFlags().BoolVar(&sandboxMode, "sandbox", false, "Sandbox mode: disables daemon and auto-sync")
 	rootCmd.PersistentFlags().BoolVar(&noDb, "no-db", false, "Use no-db mode: load from JSONL, no SQLite")
+	rootCmd.PersistentFlags().BoolVar(&profileEnabled, "profile", false, "Generate CPU profile for performance analysis")

 	// Add --version flag to root command (same behavior as version subcommand)
 	rootCmd.Flags().BoolP("version", "v", false, "Print version information")
@@ -141,6 +147,23 @@ var rootCmd = &cobra.Command{
 		actor = config.GetString("actor")
 	}

+	// Performance profiling setup
+	// When --profile is enabled, force direct mode to capture actual database operations
+	// rather than just RPC serialization/network overhead. This gives accurate profiles
+	// of the storage layer, query performance, and business logic.
+	if profileEnabled {
+		noDaemon = true
+		timestamp := time.Now().Format("20060102-150405")
+		if f, _ := os.Create(fmt.Sprintf("bd-profile-%s-%s.prof", cmd.Name(), timestamp)); f != nil {
+			profileFile = f
+			_ = pprof.StartCPUProfile(f)
+		}
+		if f, _ := os.Create(fmt.Sprintf("bd-trace-%s-%s.out", cmd.Name(), timestamp)); f != nil {
+			traceFile = f
+			_ = trace.Start(f)
+		}
+	}
+
 	// Skip database initialization for commands that don't need a database
 	noDbCommands := []string{
 		cmdDaemon,
@@ -505,6 +528,8 @@ var rootCmd = &cobra.Command{
 	if store != nil {
 		_ = store.Close()
 	}
+	if profileFile != nil { pprof.StopCPUProfile(); _ = profileFile.Close() }
+	if traceFile != nil { trace.Stop(); _ = traceFile.Close() }
 },
}
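The --profile wiring above follows a start/flush pattern: begin CPU profiling (and tracing) before the command body runs, then stop the profiler and close the file during teardown so the buffered data actually reaches disk. The same shape, sketched with Python's stdlib `cProfile` purely as a runnable illustration (the `workload` function is a placeholder for the command's database work):

```python
import cProfile
import io
import pstats

def workload():
    # stand-in for the profiled command's real work
    return sum(i * i for i in range(200_000))

# Mirror the Go pattern: enable the profiler, run the command body,
# then disable it before reporting (the flush step).
profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Render the top entries by cumulative time, analogous to inspecting
# the .prof file with `go tool pprof`.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```

Skipping the disable/stop step is the analog of never calling pprof.StopCPUProfile: the profile file would be empty or truncated, which is exactly why the commit documents stopCPUProfile's flushing role.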