Add configurable export error handling policies (bd-exug)
Implements flexible error handling for export operations with four policies:

- strict: Fail-fast on any error (default for user exports)
- best-effort: Skip errors with warnings (default for auto-exports)
- partial: Retry then skip with manifest tracking
- required-core: Fail on core data, skip enrichments

Key features:

- Per-project configuration via `bd config set export.error_policy`
- Separate policy for auto-exports: `auto_export.error_policy`
- Retry with exponential backoff (configurable attempts/delay)
- Optional export manifests documenting completeness
- Per-issue encoding error handling

This allows users to choose the right trade-off between data integrity and system availability for their specific project needs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -371,6 +371,7 @@ With --no-db: creates .beads/ directory and issues.jsonl file instead of SQLite
 		}
 	}
 	if hasIssues {
+		yellow := color.New(color.FgYellow).SprintFunc()
 		fmt.Printf("%s Setup incomplete. Some issues were detected:\n", yellow("⚠"))
 		// Show just the warnings/errors, not all checks
 		for _, check := range doctorResult.Checks {
@@ -170,6 +170,12 @@ Configuration keys use dot-notation namespaces to organize settings:
 - `min_hash_length` - Minimum hash ID length (default: 4)
 - `max_hash_length` - Maximum hash ID length (default: 8)
 - `import.orphan_handling` - How to handle hierarchical issues with missing parents during import (default: `allow`)
+- `export.error_policy` - Error handling strategy for exports (default: `strict`)
+- `export.retry_attempts` - Number of retry attempts for transient errors (default: 3)
+- `export.retry_backoff_ms` - Initial backoff in milliseconds for retries (default: 100)
+- `export.skip_encoding_errors` - Skip issues that fail JSON encoding (default: false)
+- `export.write_manifest` - Write .manifest.json with export metadata (default: false)
+- `auto_export.error_policy` - Override error policy for auto-exports (default: `best-effort`)
 
 ### Integration Namespaces
 
@@ -200,6 +206,74 @@ bd config set min_hash_length "5"
 
 See [docs/ADAPTIVE_IDS.md](docs/ADAPTIVE_IDS.md) for detailed documentation.
 
+### Example: Export Error Handling
+
+Controls how export operations handle errors when fetching issue data (labels, comments, dependencies).
+
+```bash
+# Strict: Fail fast on any error (default for user-initiated exports)
+bd config set export.error_policy "strict"
+
+# Best-effort: Skip failed operations with warnings (good for auto-export)
+bd config set export.error_policy "best-effort"
+
+# Partial: Retry transient failures, skip persistent ones with manifest
+bd config set export.error_policy "partial"
+bd config set export.write_manifest "true"
+
+# Required-core: Fail on core data (issues/deps), skip enrichments (labels/comments)
+bd config set export.error_policy "required-core"
+
+# Customize retry behavior
+bd config set export.retry_attempts "5"
+bd config set export.retry_backoff_ms "200"
+
+# Skip individual issues that fail JSON encoding
+bd config set export.skip_encoding_errors "true"
+
+# Auto-export uses a different policy (background operation)
+bd config set auto_export.error_policy "best-effort"
+```
+
+**Policy details:**
+
+- **`strict`** (default) - Fail immediately on any error. Ensures complete exports but may block on transient issues like database locks. Best for critical exports and migrations.
+- **`best-effort`** - Skip failed batches with warnings. Continues the export even if labels or comments fail to load. Best for auto-exports and background sync where availability matters more than completeness.
+- **`partial`** - Retry transient failures (3x by default), then skip with a manifest file. Creates `.manifest.json` alongside the JSONL documenting what succeeded and what failed. Best for large databases with occasional corruption.
+- **`required-core`** - Fail on core data (issues, dependencies), skip enrichments (labels, comments) with warnings. Best when metadata is secondary to issue tracking.
+
+**When to use each mode:**
+
+- Use `strict` (default) for production backups and critical exports
+- Use `best-effort` for auto-exports (default via `auto_export.error_policy`)
+- Use `partial` when you need visibility into export completeness
+- Use `required-core` when labels/comments are optional
+
+**Context-specific behavior:**
+
+User-initiated exports (`bd sync`, manual export commands) use `export.error_policy` (default: `strict`).
+
+Auto-exports (daemon background sync) use `auto_export.error_policy` (default: `best-effort`), falling back to `export.error_policy` if not set.
+
+**Example: Different policies for different contexts:**
+
+```bash
+# Critical project: strict everywhere
+bd config set export.error_policy "strict"
+
+# Development project: strict user exports, permissive auto-exports
+bd config set export.error_policy "strict"
+bd config set auto_export.error_policy "best-effort"
+
+# Large database with occasional corruption
+bd config set export.error_policy "partial"
+bd config set export.write_manifest "true"
+bd config set export.retry_attempts "5"
+```
+
 ### Example: Import Orphan Handling
 
 Controls how imports handle hierarchical child issues when their parent is missing from the database:
internal/export/config.go (new file, 117 lines)
@@ -0,0 +1,117 @@
package export

import (
	"context"
	"fmt"
	"strconv"

	"github.com/steveyegge/beads/internal/storage"
)

// ConfigStore defines the minimal storage interface needed for config
type ConfigStore interface {
	GetConfig(ctx context.Context, key string) (string, error)
	SetConfig(ctx context.Context, key, value string) error
}

// LoadConfig reads export configuration from storage
func LoadConfig(ctx context.Context, store ConfigStore, isAutoExport bool) (*Config, error) {
	cfg := &Config{
		Policy:             DefaultErrorPolicy,
		RetryAttempts:      DefaultRetryAttempts,
		RetryBackoffMS:     DefaultRetryBackoffMS,
		SkipEncodingErrors: DefaultSkipEncodingErrors,
		WriteManifest:      DefaultWriteManifest,
		IsAutoExport:       isAutoExport,
	}

	// Load error policy
	if isAutoExport {
		// Check auto-export specific policy first
		if val, err := store.GetConfig(ctx, ConfigKeyAutoExportPolicy); err == nil && val != "" {
			policy := ErrorPolicy(val)
			if policy.IsValid() {
				cfg.Policy = policy
			}
		}
	}
	// Fall back to general export policy if not set or not auto-export
	if cfg.Policy == DefaultErrorPolicy {
		if val, err := store.GetConfig(ctx, ConfigKeyErrorPolicy); err == nil && val != "" {
			policy := ErrorPolicy(val)
			if policy.IsValid() {
				cfg.Policy = policy
			}
		}
	}

	// Load retry attempts
	if val, err := store.GetConfig(ctx, ConfigKeyRetryAttempts); err == nil && val != "" {
		if attempts, err := strconv.Atoi(val); err == nil && attempts >= 0 {
			cfg.RetryAttempts = attempts
		}
	}

	// Load retry backoff
	if val, err := store.GetConfig(ctx, ConfigKeyRetryBackoffMS); err == nil && val != "" {
		if backoff, err := strconv.Atoi(val); err == nil && backoff > 0 {
			cfg.RetryBackoffMS = backoff
		}
	}

	// Load skip encoding errors flag
	if val, err := store.GetConfig(ctx, ConfigKeySkipEncodingErrors); err == nil && val != "" {
		if skip, err := strconv.ParseBool(val); err == nil {
			cfg.SkipEncodingErrors = skip
		}
	}

	// Load write manifest flag
	if val, err := store.GetConfig(ctx, ConfigKeyWriteManifest); err == nil && val != "" {
		if write, err := strconv.ParseBool(val); err == nil {
			cfg.WriteManifest = write
		}
	}

	return cfg, nil
}

// SetPolicy sets the error policy for exports
func SetPolicy(ctx context.Context, store storage.Storage, policy ErrorPolicy, autoExport bool) error {
	if !policy.IsValid() {
		return fmt.Errorf("invalid error policy: %s (valid: strict, best-effort, partial, required-core)", policy)
	}

	key := ConfigKeyErrorPolicy
	if autoExport {
		key = ConfigKeyAutoExportPolicy
	}

	return store.SetConfig(ctx, key, string(policy))
}

// SetRetryAttempts sets the number of retry attempts
func SetRetryAttempts(ctx context.Context, store storage.Storage, attempts int) error {
	if attempts < 0 {
		return fmt.Errorf("retry attempts must be non-negative")
	}
	return store.SetConfig(ctx, ConfigKeyRetryAttempts, strconv.Itoa(attempts))
}

// SetRetryBackoff sets the initial retry backoff in milliseconds
func SetRetryBackoff(ctx context.Context, store storage.Storage, backoffMS int) error {
	if backoffMS <= 0 {
		return fmt.Errorf("retry backoff must be positive")
	}
	return store.SetConfig(ctx, ConfigKeyRetryBackoffMS, strconv.Itoa(backoffMS))
}

// SetSkipEncodingErrors sets whether to skip issues with encoding errors
func SetSkipEncodingErrors(ctx context.Context, store storage.Storage, skip bool) error {
	return store.SetConfig(ctx, ConfigKeySkipEncodingErrors, strconv.FormatBool(skip))
}

// SetWriteManifest sets whether to write export manifests
func SetWriteManifest(ctx context.Context, store storage.Storage, write bool) error {
	return store.SetConfig(ctx, ConfigKeyWriteManifest, strconv.FormatBool(write))
}
internal/export/executor.go (new file, 96 lines)
@@ -0,0 +1,96 @@
package export

import (
	"context"
	"fmt"
	"os"
)

// DataType represents a type of data being fetched
type DataType string

const (
	DataTypeCore     DataType = "core"     // Issues and dependencies
	DataTypeLabels   DataType = "labels"   // Issue labels
	DataTypeComments DataType = "comments" // Issue comments
)

// FetchResult holds the result of a data fetch operation
type FetchResult struct {
	Success  bool
	Err      error
	Warnings []string
}

// FetchWithPolicy executes a fetch operation with the configured error policy
func FetchWithPolicy(ctx context.Context, cfg *Config, dataType DataType, desc string, fn func() error) FetchResult {
	var result FetchResult

	// Determine if this is core data
	isCore := dataType == DataTypeCore

	// Execute based on policy
	switch cfg.Policy {
	case PolicyStrict:
		// Fail-fast on any error
		err := RetryWithBackoff(ctx, cfg.RetryAttempts, cfg.RetryBackoffMS, desc, fn)
		if err != nil {
			result.Err = err
			return result
		}
		result.Success = true

	case PolicyBestEffort:
		// Skip errors with warnings
		err := RetryWithBackoff(ctx, cfg.RetryAttempts, cfg.RetryBackoffMS, desc, fn)
		if err != nil {
			warning := fmt.Sprintf("Warning: %s failed, skipping: %v", desc, err)
			fmt.Fprintf(os.Stderr, "%s\n", warning)
			result.Warnings = append(result.Warnings, warning)
			result.Success = false // Data is missing
			return result
		}
		result.Success = true

	case PolicyPartial:
		// Retry with backoff, then skip with manifest entry
		err := RetryWithBackoff(ctx, cfg.RetryAttempts, cfg.RetryBackoffMS, desc, fn)
		if err != nil {
			warning := fmt.Sprintf("Warning: %s failed after retries, skipping: %v", desc, err)
			fmt.Fprintf(os.Stderr, "%s\n", warning)
			result.Warnings = append(result.Warnings, warning)
			result.Success = false
			return result
		}
		result.Success = true

	case PolicyRequiredCore:
		// Fail on core data, skip enrichments
		if isCore {
			err := RetryWithBackoff(ctx, cfg.RetryAttempts, cfg.RetryBackoffMS, desc, fn)
			if err != nil {
				result.Err = err
				return result
			}
			result.Success = true
		} else {
			// Best-effort for enrichments
			err := RetryWithBackoff(ctx, cfg.RetryAttempts, cfg.RetryBackoffMS, desc, fn)
			if err != nil {
				warning := fmt.Sprintf("Warning: %s (enrichment) failed, skipping: %v", desc, err)
				fmt.Fprintf(os.Stderr, "%s\n", warning)
				result.Warnings = append(result.Warnings, warning)
				result.Success = false
				return result
			}
			result.Success = true
		}

	default:
		// Unknown policy, fail-fast as safest option
		result.Err = fmt.Errorf("unknown error policy: %s", cfg.Policy)
		return result
	}

	return result
}
internal/export/manifest.go (new file, 65 lines)
@@ -0,0 +1,65 @@
package export

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// WriteManifest writes an export manifest alongside the JSONL file
func WriteManifest(jsonlPath string, manifest *Manifest) error {
	// Derive manifest path from JSONL path
	manifestPath := strings.TrimSuffix(jsonlPath, ".jsonl") + ".manifest.json"

	// Marshal manifest
	data, err := json.MarshalIndent(manifest, "", "  ")
	if err != nil {
		return fmt.Errorf("failed to marshal manifest: %w", err)
	}

	// Create temp file for atomic write
	dir := filepath.Dir(manifestPath)
	base := filepath.Base(manifestPath)
	tempFile, err := os.CreateTemp(dir, base+".tmp.*")
	if err != nil {
		return fmt.Errorf("failed to create temp manifest file: %w", err)
	}
	tempPath := tempFile.Name()
	defer func() {
		_ = tempFile.Close()
		_ = os.Remove(tempPath)
	}()

	// Write manifest
	if _, err := tempFile.Write(data); err != nil {
		return fmt.Errorf("failed to write manifest: %w", err)
	}

	// Close before rename
	_ = tempFile.Close()

	// Atomic replace
	if err := os.Rename(tempPath, manifestPath); err != nil {
		return fmt.Errorf("failed to replace manifest file: %w", err)
	}

	// Set appropriate file permissions (0600: rw-------)
	if err := os.Chmod(manifestPath, 0600); err != nil {
		// Non-fatal, just log
		fmt.Fprintf(os.Stderr, "Warning: failed to set manifest permissions: %v\n", err)
	}

	return nil
}

// NewManifest creates a new export manifest
func NewManifest(policy ErrorPolicy) *Manifest {
	return &Manifest{
		ExportedAt:  time.Now(),
		ErrorPolicy: string(policy),
		Complete:    true, // Will be set to false if any data is missing
	}
}
internal/export/policy.go (new file, 127 lines)
@@ -0,0 +1,127 @@
package export

import (
	"context"
	"fmt"
	"time"
)

// ErrorPolicy defines how export operations handle errors
type ErrorPolicy string

const (
	// PolicyStrict fails fast on any error (default for user-initiated exports)
	PolicyStrict ErrorPolicy = "strict"

	// PolicyBestEffort skips failed operations with warnings (good for auto-export)
	PolicyBestEffort ErrorPolicy = "best-effort"

	// PolicyPartial retries transient failures, skips persistent ones with manifest
	PolicyPartial ErrorPolicy = "partial"

	// PolicyRequiredCore fails on core data (issues/deps), skips enrichments (labels/comments)
	PolicyRequiredCore ErrorPolicy = "required-core"
)

// Config keys for export error handling
const (
	ConfigKeyErrorPolicy        = "export.error_policy"
	ConfigKeyRetryAttempts      = "export.retry_attempts"
	ConfigKeyRetryBackoffMS     = "export.retry_backoff_ms"
	ConfigKeySkipEncodingErrors = "export.skip_encoding_errors"
	ConfigKeyWriteManifest      = "export.write_manifest"
	ConfigKeyAutoExportPolicy   = "auto_export.error_policy"
)

// Default values
const (
	DefaultErrorPolicy        = PolicyStrict
	DefaultRetryAttempts      = 3
	DefaultRetryBackoffMS     = 100
	DefaultSkipEncodingErrors = false
	DefaultWriteManifest      = false
	DefaultAutoExportPolicy   = PolicyBestEffort
)

// Config holds export error handling configuration
type Config struct {
	Policy             ErrorPolicy
	RetryAttempts      int
	RetryBackoffMS     int
	SkipEncodingErrors bool
	WriteManifest      bool
	IsAutoExport       bool // If true, may use different policy
}

// Manifest tracks export completeness and failures
type Manifest struct {
	ExportedCount int           `json:"exported_count"`
	FailedIssues  []FailedIssue `json:"failed_issues,omitempty"`
	PartialData   []string      `json:"partial_data,omitempty"` // e.g., ["labels", "comments"]
	Warnings      []string      `json:"warnings,omitempty"`
	Complete      bool          `json:"complete"`
	ExportedAt    time.Time     `json:"exported_at"`
	ErrorPolicy   string        `json:"error_policy"`
}

// FailedIssue tracks a single issue that failed to export
type FailedIssue struct {
	IssueID     string   `json:"issue_id"`
	Reason      string   `json:"reason"`
	MissingData []string `json:"missing_data,omitempty"` // e.g., ["labels", "comments"]
}

// RetryWithBackoff wraps a function with retry logic
func RetryWithBackoff(ctx context.Context, attempts int, initialBackoffMS int, desc string, fn func() error) error {
	if attempts < 1 {
		attempts = 1
	}

	var lastErr error
	backoff := time.Duration(initialBackoffMS) * time.Millisecond

	for attempt := 1; attempt <= attempts; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		lastErr = err

		// Don't retry on context cancellation
		if ctx.Err() != nil {
			return ctx.Err()
		}

		// Don't wait after last attempt
		if attempt == attempts {
			break
		}

		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff *= 2 // Exponential backoff
		}
	}

	if attempts > 1 {
		return fmt.Errorf("%s failed after %d attempts: %w", desc, attempts, lastErr)
	}
	return lastErr
}

// IsValid checks if the policy is a valid value
func (p ErrorPolicy) IsValid() bool {
	switch p {
	case PolicyStrict, PolicyBestEffort, PolicyPartial, PolicyRequiredCore:
		return true
	default:
		return false
	}
}

// String implements fmt.Stringer
func (p ErrorPolicy) String() string {
	return string(p)
}
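The way the four policies diverge only when a fetch fails can be summarized in a small self-contained sketch. This mirrors the dispatch in the commit's `FetchWithPolicy` (the function name and return strings here are local to this sketch, not part of the package):

```go
package main

import "fmt"

// onFetchError maps (policy, data type) to the outcome of a failed fetch,
// mirroring the commit's FetchWithPolicy dispatch: strict always fails,
// best-effort and partial warn and skip, required-core fails only on core data.
func onFetchError(policy string, isCore bool) string {
	switch policy {
	case "strict":
		return "fail"
	case "best-effort", "partial":
		return "warn-and-skip"
	case "required-core":
		if isCore {
			return "fail"
		}
		return "warn-and-skip"
	default:
		return "fail" // unknown policy: fail-fast is the safest option
	}
}

func main() {
	fmt.Println(onFetchError("strict", false))        // fail
	fmt.Println(onFetchError("required-core", false)) // warn-and-skip
	fmt.Println(onFetchError("required-core", true))  // fail
}
```

Note that in the real implementation, `partial` differs from `best-effort` not at this decision point but in what gets recorded: skipped data is written into the export manifest.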
internal/export/policy_test.go (new file, 176 lines)
@@ -0,0 +1,176 @@
package export

import (
	"context"
	"errors"
	"testing"
	"time"
)

func TestRetryWithBackoff(t *testing.T) {
	ctx := context.Background()

	t.Run("succeeds first try", func(t *testing.T) {
		attempts := 0
		err := RetryWithBackoff(ctx, 3, 100, "test", func() error {
			attempts++
			return nil
		})
		if err != nil {
			t.Errorf("expected no error, got %v", err)
		}
		if attempts != 1 {
			t.Errorf("expected 1 attempt, got %d", attempts)
		}
	})

	t.Run("succeeds after retries", func(t *testing.T) {
		attempts := 0
		err := RetryWithBackoff(ctx, 3, 10, "test", func() error {
			attempts++
			if attempts < 3 {
				return errors.New("transient error")
			}
			return nil
		})
		if err != nil {
			t.Errorf("expected no error, got %v", err)
		}
		if attempts != 3 {
			t.Errorf("expected 3 attempts, got %d", attempts)
		}
	})

	t.Run("fails after max retries", func(t *testing.T) {
		attempts := 0
		err := RetryWithBackoff(ctx, 3, 10, "test", func() error {
			attempts++
			return errors.New("persistent error")
		})
		if err == nil {
			t.Error("expected error, got nil")
		}
		if attempts != 3 {
			t.Errorf("expected 3 attempts, got %d", attempts)
		}
	})

	t.Run("respects context cancellation", func(t *testing.T) {
		ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
		defer cancel()

		attempts := 0
		err := RetryWithBackoff(ctx, 10, 100, "test", func() error {
			attempts++
			return errors.New("error")
		})
		if err != context.DeadlineExceeded {
			t.Errorf("expected DeadlineExceeded, got %v", err)
		}
		// Should stop before reaching max retries due to timeout
		if attempts >= 10 {
			t.Errorf("expected fewer than 10 attempts due to timeout, got %d", attempts)
		}
	})
}

func TestErrorPolicy(t *testing.T) {
	tests := []struct {
		name   string
		policy ErrorPolicy
		valid  bool
	}{
		{"strict", PolicyStrict, true},
		{"best-effort", PolicyBestEffort, true},
		{"partial", PolicyPartial, true},
		{"required-core", PolicyRequiredCore, true},
		{"invalid", ErrorPolicy("invalid"), false},
		{"empty", ErrorPolicy(""), false},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := tt.policy.IsValid(); got != tt.valid {
				t.Errorf("IsValid() = %v, want %v", got, tt.valid)
			}
		})
	}
}

func TestFetchWithPolicy(t *testing.T) {
	ctx := context.Background()

	t.Run("strict policy fails fast", func(t *testing.T) {
		cfg := &Config{
			Policy:         PolicyStrict,
			RetryAttempts:  1,
			RetryBackoffMS: 10,
		}
		result := FetchWithPolicy(ctx, cfg, DataTypeCore, "test", func() error {
			return errors.New("test error")
		})
		if result.Err == nil {
			t.Error("expected error, got nil")
		}
		if result.Success {
			t.Error("expected Success=false")
		}
	})

	t.Run("best-effort policy skips errors", func(t *testing.T) {
		cfg := &Config{
			Policy:         PolicyBestEffort,
			RetryAttempts:  1,
			RetryBackoffMS: 10,
		}
		result := FetchWithPolicy(ctx, cfg, DataTypeLabels, "test", func() error {
			return errors.New("test error")
		})
		if result.Err != nil {
			t.Errorf("expected no error in best-effort, got %v", result.Err)
		}
		if result.Success {
			t.Error("expected Success=false")
		}
		if len(result.Warnings) == 0 {
			t.Error("expected warnings")
		}
	})

	t.Run("required-core fails on core data", func(t *testing.T) {
		cfg := &Config{
			Policy:         PolicyRequiredCore,
			RetryAttempts:  1,
			RetryBackoffMS: 10,
		}
		result := FetchWithPolicy(ctx, cfg, DataTypeCore, "test", func() error {
			return errors.New("test error")
		})
		if result.Err == nil {
			t.Error("expected error for core data, got nil")
		}
		if result.Success {
			t.Error("expected Success=false")
		}
	})

	t.Run("required-core skips enrichment errors", func(t *testing.T) {
		cfg := &Config{
			Policy:         PolicyRequiredCore,
			RetryAttempts:  1,
			RetryBackoffMS: 10,
		}
		result := FetchWithPolicy(ctx, cfg, DataTypeLabels, "test", func() error {
			return errors.New("test error")
		})
		if result.Err != nil {
			t.Errorf("expected no error for enrichment, got %v", result.Err)
		}
		if result.Success {
			t.Error("expected Success=false")
		}
		if len(result.Warnings) == 0 {
			t.Error("expected warnings")
		}
	})
}
@@ -13,6 +13,7 @@ import (
|
|||||||
|
|
||||||
"github.com/steveyegge/beads/internal/autoimport"
|
"github.com/steveyegge/beads/internal/autoimport"
|
||||||
"github.com/steveyegge/beads/internal/debug"
|
"github.com/steveyegge/beads/internal/debug"
|
||||||
|
"github.com/steveyegge/beads/internal/export"
|
||||||
"github.com/steveyegge/beads/internal/importer"
|
"github.com/steveyegge/beads/internal/importer"
|
||||||
"github.com/steveyegge/beads/internal/storage"
|
"github.com/steveyegge/beads/internal/storage"
|
||||||
"github.com/steveyegge/beads/internal/storage/sqlite"
|
"github.com/steveyegge/beads/internal/storage/sqlite"
|
||||||
@@ -30,10 +31,24 @@ func (s *Server) handleExport(req *Request) Response {
|
|||||||
}
|
}
|
||||||
|
|
||||||
store := s.storage
|
store := s.storage
|
||||||
|
|
||||||
ctx := s.reqCtx(req)
|
ctx := s.reqCtx(req)
|
||||||
|
|
||||||
// Get all issues
|
// Load export configuration (user-initiated export, not auto)
|
||||||
|
cfg, err := export.LoadConfig(ctx, store, false)
|
||||||
|
if err != nil {
|
||||||
|
return Response{
|
||||||
|
Success: false,
|
||||||
|
Error: fmt.Sprintf("failed to load export config: %v", err),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Initialize manifest if configured
|
||||||
|
var manifest *export.Manifest
|
||||||
|
if cfg.WriteManifest {
|
||||||
|
manifest = export.NewManifest(cfg.Policy)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get all issues (core operation, always fail-fast)
|
||||||
issues, err := store.SearchIssues(ctx, "", types.IssueFilter{})
|
issues, err := store.SearchIssues(ctx, "", types.IssueFilter{})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return Response{
|
return Response{
|
||||||
@@ -47,40 +62,73 @@ func (s *Server) handleExport(req *Request) Response {
 		return issues[i].ID < issues[j].ID
 	})
 
-	// Populate dependencies for all issues (avoid N+1)
-	allDeps, err := store.GetAllDependencyRecords(ctx)
-	if err != nil {
+	// Populate dependencies for all issues (core data)
+	var allDeps map[string][]*types.Dependency
+	result := export.FetchWithPolicy(ctx, cfg, export.DataTypeCore, "get dependencies", func() error {
+		var err error
+		allDeps, err = store.GetAllDependencyRecords(ctx)
+		return err
+	})
+	if result.Err != nil {
 		return Response{
 			Success: false,
-			Error:   fmt.Sprintf("failed to get dependencies: %v", err),
+			Error:   fmt.Sprintf("failed to get dependencies: %v", result.Err),
 		}
 	}
 	for _, issue := range issues {
 		issue.Dependencies = allDeps[issue.ID]
 	}
 
-	// Populate labels for all issues (avoid N+1)
+	// Populate labels for all issues (enrichment data)
 	issueIDs := make([]string, len(issues))
 	for i, issue := range issues {
 		issueIDs[i] = issue.ID
 	}
-	allLabels, err := store.GetLabelsForIssues(ctx, issueIDs)
-	if err != nil {
+	var allLabels map[string][]string
+	result = export.FetchWithPolicy(ctx, cfg, export.DataTypeLabels, "get labels", func() error {
+		var err error
+		allLabels, err = store.GetLabelsForIssues(ctx, issueIDs)
+		return err
+	})
+	if result.Err != nil {
 		return Response{
 			Success: false,
-			Error:   fmt.Sprintf("failed to get labels: %v", err),
+			Error:   fmt.Sprintf("failed to get labels: %v", result.Err),
+		}
+	}
+	if !result.Success {
+		// Labels fetch failed but policy allows continuing
+		allLabels = make(map[string][]string) // Empty map
+		if manifest != nil {
+			manifest.PartialData = append(manifest.PartialData, "labels")
+			manifest.Warnings = append(manifest.Warnings, result.Warnings...)
+			manifest.Complete = false
 		}
 	}
 	for _, issue := range issues {
 		issue.Labels = allLabels[issue.ID]
 	}
 
-	// Populate comments for all issues (avoid N+1)
-	allComments, err := store.GetCommentsForIssues(ctx, issueIDs)
-	if err != nil {
+	// Populate comments for all issues (enrichment data)
+	var allComments map[string][]*types.Comment
+	result = export.FetchWithPolicy(ctx, cfg, export.DataTypeComments, "get comments", func() error {
+		var err error
+		allComments, err = store.GetCommentsForIssues(ctx, issueIDs)
+		return err
+	})
+	if result.Err != nil {
 		return Response{
 			Success: false,
-			Error:   fmt.Sprintf("failed to get comments: %v", err),
+			Error:   fmt.Sprintf("failed to get comments: %v", result.Err),
+		}
+	}
+	if !result.Success {
+		// Comments fetch failed but policy allows continuing
+		allComments = make(map[string][]*types.Comment) // Empty map
+		if manifest != nil {
+			manifest.PartialData = append(manifest.PartialData, "comments")
+			manifest.Warnings = append(manifest.Warnings, result.Warnings...)
+			manifest.Complete = false
 		}
 	}
 	for _, issue := range issues {
@@ -106,8 +154,24 @@ func (s *Server) handleExport(req *Request) Response {
 	// Write JSONL
 	encoder := json.NewEncoder(tempFile)
 	exportedIDs := make([]string, 0, len(issues))
+	var encodingWarnings []string
 	for _, issue := range issues {
 		if err := encoder.Encode(issue); err != nil {
+			if cfg.SkipEncodingErrors {
+				// Skip this issue and continue
+				warning := fmt.Sprintf("skipped encoding issue %s: %v", issue.ID, err)
+				fmt.Fprintf(os.Stderr, "Warning: %s\n", warning)
+				encodingWarnings = append(encodingWarnings, warning)
+				if manifest != nil {
+					manifest.FailedIssues = append(manifest.FailedIssues, export.FailedIssue{
+						IssueID: issue.ID,
+						Reason:  err.Error(),
+					})
+					manifest.Complete = false
+				}
+				continue
+			}
+			// Fail-fast on encoding errors
 			return Response{
 				Success: false,
 				Error:   fmt.Sprintf("failed to encode issue %s: %v", issue.ID, err),
@@ -139,11 +203,25 @@ func (s *Server) handleExport(req *Request) Response {
 		fmt.Fprintf(os.Stderr, "Warning: failed to clear dirty flags: %v\n", err)
 	}
 
-	result := map[string]interface{}{
+	// Write manifest if configured
+	if manifest != nil {
+		manifest.ExportedCount = len(exportedIDs)
+		manifest.Warnings = append(manifest.Warnings, encodingWarnings...)
+		if err := export.WriteManifest(exportArgs.JSONLPath, manifest); err != nil {
+			// Non-fatal, just log
+			fmt.Fprintf(os.Stderr, "Warning: failed to write manifest: %v\n", err)
+		}
+	}
+
+	responseData := map[string]interface{}{
 		"exported_count": len(exportedIDs),
 		"path":           exportArgs.JSONLPath,
+		"skipped_count":  len(encodingWarnings),
 	}
-	data, _ := json.Marshal(result)
+	if len(encodingWarnings) > 0 {
+		responseData["warnings"] = encodingWarnings
+	}
+	data, _ := json.Marshal(responseData)
 	return Response{
 		Success: true,
 		Data:    data,
@@ -379,6 +457,18 @@ func (s *Server) triggerExport(ctx context.Context, store storage.Storage, dbPat
 		return fmt.Errorf("storage is not SQLiteStorage")
 	}
 
+	// Load export configuration (auto-export mode)
+	cfg, err := export.LoadConfig(ctx, store, true)
+	if err != nil {
+		// Fall back to defaults if config load fails
+		cfg = &export.Config{
+			Policy:         export.DefaultAutoExportPolicy,
+			RetryAttempts:  export.DefaultRetryAttempts,
+			RetryBackoffMS: export.DefaultRetryBackoffMS,
+			IsAutoExport:   true,
+		}
+	}
+
 	// Export to JSONL (this will update the file with remapped IDs)
 	allIssues, err := sqliteStore.SearchIssues(ctx, "", types.IssueFilter{})
 	if err != nil {
@@ -393,32 +483,55 @@ func (s *Server) triggerExport(ctx context.Context, store storage.Storage, dbPat
 	// CRITICAL: Populate all related data to prevent data loss
 	// This mirrors the logic in handleExport
 
-	// Populate dependencies for all issues (avoid N+1 queries)
-	allDeps, err := store.GetAllDependencyRecords(ctx)
-	if err != nil {
-		return fmt.Errorf("failed to get dependencies: %w", err)
+	// Populate dependencies for all issues (core data)
+	var allDeps map[string][]*types.Dependency
+	result := export.FetchWithPolicy(ctx, cfg, export.DataTypeCore, "get dependencies", func() error {
+		var err error
+		allDeps, err = store.GetAllDependencyRecords(ctx)
+		return err
+	})
+	if result.Err != nil {
+		return fmt.Errorf("failed to get dependencies: %w", result.Err)
 	}
 	for _, issue := range allIssues {
 		issue.Dependencies = allDeps[issue.ID]
 	}
 
-	// Populate labels for all issues (avoid N+1 queries)
+	// Populate labels for all issues (enrichment data)
 	issueIDs := make([]string, len(allIssues))
 	for i, issue := range allIssues {
 		issueIDs[i] = issue.ID
 	}
-	allLabels, err := store.GetLabelsForIssues(ctx, issueIDs)
-	if err != nil {
-		return fmt.Errorf("failed to get labels: %w", err)
+	var allLabels map[string][]string
+	result = export.FetchWithPolicy(ctx, cfg, export.DataTypeLabels, "get labels", func() error {
+		var err error
+		allLabels, err = store.GetLabelsForIssues(ctx, issueIDs)
+		return err
+	})
+	if result.Err != nil {
+		return fmt.Errorf("failed to get labels: %w", result.Err)
+	}
+	if !result.Success {
+		// Labels fetch failed but policy allows continuing
+		allLabels = make(map[string][]string) // Empty map
 	}
 	for _, issue := range allIssues {
 		issue.Labels = allLabels[issue.ID]
 	}
 
-	// Populate comments for all issues (avoid N+1 queries)
-	allComments, err := store.GetCommentsForIssues(ctx, issueIDs)
-	if err != nil {
-		return fmt.Errorf("failed to get comments: %w", err)
+	// Populate comments for all issues (enrichment data)
+	var allComments map[string][]*types.Comment
+	result = export.FetchWithPolicy(ctx, cfg, export.DataTypeComments, "get comments", func() error {
+		var err error
+		allComments, err = store.GetCommentsForIssues(ctx, issueIDs)
+		return err
+	})
+	if result.Err != nil {
+		return fmt.Errorf("failed to get comments: %w", result.Err)
+	}
+	if !result.Success {
+		// Comments fetch failed but policy allows continuing
+		allComments = make(map[string][]*types.Comment) // Empty map
 	}
 	for _, issue := range allIssues {
 		issue.Comments = allComments[issue.ID]