Document blocked_issues_cache architecture and behavior

Add comprehensive documentation for the blocked_issues_cache optimization
that improved GetReadyWork performance from 752ms to 29ms (25x speedup).

Documentation locations:
- blocked_cache.go: Detailed package comment covering architecture,
  invalidation strategy, transaction safety, edge cases, and future
  optimizations
- ready.go: Enhanced comment at query site explaining the optimization
  and maintenance triggers
- ARCHITECTURE.md: New section with diagrams, blocking semantics,
  performance characteristics, and testing instructions

Closes bd-1w6i

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Steve Yegge
2025-11-24 01:14:21 -08:00
parent 88009830ee
commit fcd6ca694e
4 changed files with 241 additions and 2 deletions


@@ -1,3 +1,89 @@
// Package sqlite provides the blocked_issues_cache optimization for GetReadyWork performance.
//
// # Performance Impact
//
// GetReadyWork originally used a recursive CTE to compute blocked issues on every query,
// taking ~752ms on a 10K issue database. With the cache, queries complete in ~29ms
// (25x speedup) by using a simple NOT EXISTS check against the materialized cache table.
//
// # Cache Architecture
//
// The blocked_issues_cache table stores issue_id values for all issues that are currently
// blocked. An issue is blocked if:
// - It has a 'blocks' dependency on an open/in_progress/blocked issue (direct blocking)
// - Its parent is blocked and it's connected via 'parent-child' dependency (transitive blocking)
//
// The cache is maintained automatically by invalidating and rebuilding whenever:
// - A 'blocks' or 'parent-child' dependency is added or removed
// - Any issue's status changes (affects whether it blocks others)
// - An issue is closed (closed issues don't block others)
//
// Related and discovered-from dependencies do NOT trigger cache invalidation since they
// don't affect blocking semantics.
//
// # Cache Invalidation Strategy
//
// On any triggering change, the entire cache is rebuilt from scratch (DELETE + INSERT).
// This full-rebuild approach is chosen because:
// - Rebuild is fast (<50ms even on 10K-issue databases) due to optimized CTE logic
// - Simpler implementation than incremental updates
// - Dependency changes are rare compared to reads
// - Guarantees consistency - no risk of partial/stale updates
//
// The rebuild happens within the same transaction as the triggering change, ensuring
// atomicity and consistency. The cache can never be in an inconsistent state visible
// to queries.
//
// # Transaction Safety
//
// All cache operations support both transaction and direct database execution:
// - rebuildBlockedCache accepts optional *sql.Tx parameter
// - If tx != nil, uses transaction; otherwise uses direct db connection
// - Cache invalidation during CreateIssue/UpdateIssue/AddDependency happens in their tx
// - Ensures cache is always consistent with the database state
//
// # Performance Characteristics
//
// Query performance (GetReadyWork):
// - Before cache: ~752ms (recursive CTE on 10K issues)
// - With cache: ~29ms (NOT EXISTS check)
// - Speedup: 25x
//
// Write overhead:
// - Cache rebuild: <50ms (full DELETE + INSERT)
// - Only triggered on dependency/status changes (rare operations)
// - Trade-off: slower writes for much faster reads
//
// # Edge Cases Handled
//
// 1. Parent-child transitive blocking:
// - Children of blocked parents are automatically marked as blocked
// - Propagates through deep hierarchies (capped at recursion depth 50)
//
// 2. Multiple blockers:
// - Issue blocked by multiple open issues stays blocked until all are closed
// - DISTINCT in CTE ensures issue appears once in cache
//
// 3. Status changes:
// - Closing a blocker removes all blocked descendants from cache
// - Reopening a blocker adds them back
//
// 4. Dependency removal:
// - Removing last blocker unblocks the issue
// - Removing parent-child link unblocks orphaned subtree
//
// 5. Foreign key cascades:
// - Cache entries automatically deleted when issue is deleted (ON DELETE CASCADE)
// - No manual cleanup needed
//
// # Future Optimizations
//
// If rebuild becomes a bottleneck in very large databases (>100K issues):
// - Consider incremental updates for specific dependency types
// - Add indexes to dependencies table for CTE performance
// - Implement dirty tracking to avoid rebuilds when cache is unchanged
//
// However, current performance is excellent for realistic workloads.
package sqlite
import (


@@ -82,7 +82,16 @@ func (s *SQLiteStorage) GetReadyWork(ctx context.Context, filter types.WorkFilte
orderBySQL := buildOrderByClause(sortPolicy)
// Use blocked_issues_cache for performance (bd-5qim)
// Cache is maintained by invalidateBlockedCache() called on dependency/status changes
// This optimization replaces the recursive CTE that computed blocked issues on every query.
// Performance improvement: 752ms → 29ms on 10K issues (25x speedup).
//
// The cache is automatically maintained by invalidateBlockedCache() which is called:
// - When adding/removing 'blocks' or 'parent-child' dependencies
// - When any issue status changes
// - When closing any issue
//
// Cache rebuild is fast (<50ms) and happens within the same transaction as the
// triggering change, ensuring consistency. See blocked_cache.go for full details.
// #nosec G201 - safe SQL with controlled formatting
query := fmt.Sprintf(`
SELECT i.id, i.content_hash, i.title, i.description, i.design, i.acceptance_criteria, i.notes,