feat: Linear Integration (#655)
* Add Linear integration CLI with sync and status commands
  - Add `bd linear sync` for bidirectional issue sync with Linear
  - Add `bd linear status` to show configuration and sync state
  - Stub pull/push functions pending GraphQL client (bd-b6b.2)

* Implement Linear GraphQL client with full sync support
  - Add LinearClient with auth, fetch, create, update methods
  - Implement pull/push operations with Beads type mapping
  - Clean up redundant comments and remove unused code

* Add configurable data mapping and dependency sync for Linear
  - Add LinearMappingConfig with configurable priority/state/label/relation maps
  - Import parent-child and issue relations as Beads dependencies
  - Support custom workflow states via linear.state_map.* config

* Add incremental sync support for Linear integration
  - Add FetchIssuesSince() method using updatedAt filter in GraphQL
  - Check linear.last_sync config to enable incremental pulls
  - Track sync mode (incremental vs full) in LinearPullStats

* feat(linear): implement push updates for existing Linear issues

  Add FetchIssueByIdentifier method to retrieve single issues by identifier
  (e.g., "TEAM-123") for timestamp comparison during push.

  Update doPushToLinear to:
  - Fetch Linear issue to get internal ID and UpdatedAt timestamp
  - Compare timestamps: only update if local is newer
  - Build update payload with title, description, priority, and state
  - Call UpdateIssue for issues where local has newer changes

  Closes bd-b6b.5

* Implement Linear conflict resolution strategies
  - Add true conflict detection by fetching Linear timestamps via API
  - Implement --prefer-linear resolution (re-import from Linear)
  - Implement timestamp-based resolution (newer wins as default)
  - Fix linter issues: handle resp.Body.Close() and remove unused error return

* Add Linear integration tests and documentation
  - Add comprehensive unit tests for Linear mapping (priority, state, labels, relations)
  - Update docs/CONFIG.md with Linear configuration reference
  - Add examples/linear-workflow guide for bidirectional sync
  - Remove AI section header comments from tests

* Fix Linear GraphQL filter construction and improve test coverage
  - Refactor filter handling to combine team ID into main filter object
  - Add test for duplicate issue relation mapping
  - Add HTTP round-trip helper for testing request payload validation

* Refactor Linear queries to use shared constant and add UUID validation
  - Extract linearIssuesQuery to deduplicate FetchIssues/FetchIssuesSince
  - Add linearMaxPageSize constant and UUID validation with regex
  - Expand test coverage for new functionality

* Refactor Linear integration into internal/linear package
  - Extract types, client, and mapping logic from cmd/bd/linear.go
  - Create internal/linear/ package for better code organization
  - Update tests to work with new package structure

* Add linear teams command to list available teams
  - Add FetchTeams GraphQL query to Linear client
  - Refactor config reading to support daemon mode
  - Add tests for teams listing functionality

* Refactor Linear config to use getLinearConfig helper
  - Consolidate config/env var lookup using getLinearConfig function
  - Add LINEAR_TEAM_ID environment variable support
  - Update error messages to include env var configuration options

* Add hash ID generation and improve Linear conflict detection
  - Add configurable hash ID mode for Linear imports (matches bd/Jira)
  - Improve conflict detection with content hash comparison
  - Enhance conflict resolution with skip/force update tracking

* Fix test for updated doPushToLinear signature
  - Add missing skipUpdateIDs parameter to test call
cmd/bd/linear.go — new file, +1189 lines (diff suppressed because it is too large)
cmd/bd/linear_test.go — new file, +1898 lines (diff suppressed because it is too large)
docs/CONFIG.md — +107 lines

@@ -377,17 +377,108 @@ bd config set jira.type_map.task "Task"
### Example: Linear Integration

Linear integration provides bidirectional sync between bd and Linear via GraphQL API.

**Required configuration:**

```bash
# API Key (can also use LINEAR_API_KEY environment variable)
bd config set linear.api_key "lin_api_YOUR_API_KEY"

# Team ID (find in Linear team settings or URL)
bd config set linear.team_id "team-uuid-here"
```

**Getting your Linear credentials:**

1. **API Key**: Go to Linear → Settings → API → Personal API keys → Create key
2. **Team ID**: Go to Linear → Settings → General → Team ID (or extract from URLs)

**Priority mapping (Linear 0-4 → Beads 0-4):**

Linear and Beads both use 0-4 priority scales, but with different semantics:

- Linear: 0=no priority, 1=urgent, 2=high, 3=medium, 4=low
- Beads: 0=critical, 1=high, 2=medium, 3=low, 4=backlog

Default mapping (configurable):

```bash
bd config set linear.priority_map.0 4  # No priority -> Backlog
bd config set linear.priority_map.1 0  # Urgent -> Critical
bd config set linear.priority_map.2 1  # High -> High
bd config set linear.priority_map.3 2  # Medium -> Medium
bd config set linear.priority_map.4 3  # Low -> Low
```
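Applied at import time, this map is just a lookup with user overrides taking precedence over the defaults. A minimal sketch in Go — the function and parameter names here are illustrative, not the actual internal/linear API:

```go
package main

import "fmt"

// defaultPriorityMap mirrors the documented defaults: Linear priority -> Beads priority.
var defaultPriorityMap = map[int]int{0: 4, 1: 0, 2: 1, 3: 2, 4: 3}

// mapLinearPriority resolves a Linear priority through user overrides
// (the linear.priority_map.N config keys) before falling back to the
// defaults. `overrides` is a hypothetical stand-in for values read
// from bd config; the fallback value is an assumption.
func mapLinearPriority(linearPriority int, overrides map[int]int) int {
	if beads, ok := overrides[linearPriority]; ok {
		return beads
	}
	if beads, ok := defaultPriorityMap[linearPriority]; ok {
		return beads
	}
	return 2 // assumed fallback: medium
}

func main() {
	fmt.Println(mapLinearPriority(1, nil))               // urgent -> critical
	fmt.Println(mapLinearPriority(0, map[int]int{0: 2})) // override: no priority -> medium
}
```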
**State mapping (Linear state types → Beads statuses):**

Map Linear workflow state types to Beads statuses:

```bash
bd config set linear.state_map.backlog open
bd config set linear.state_map.unstarted open
bd config set linear.state_map.started in_progress
bd config set linear.state_map.completed closed
bd config set linear.state_map.canceled closed

# For custom workflow states, use lowercase state name:
bd config set linear.state_map.in_review in_progress
bd config set linear.state_map.blocked blocked
bd config set linear.state_map.on_hold blocked
```

**Label to issue type mapping:**

Infer bd issue type from Linear labels:

```bash
bd config set linear.label_type_map.bug bug
bd config set linear.label_type_map.defect bug
bd config set linear.label_type_map.feature feature
bd config set linear.label_type_map.enhancement feature
bd config set linear.label_type_map.epic epic
bd config set linear.label_type_map.chore chore
bd config set linear.label_type_map.maintenance chore
bd config set linear.label_type_map.task task
```

**Relation type mapping (Linear relations → Beads dependencies):**

```bash
bd config set linear.relation_map.blocks blocks
bd config set linear.relation_map.blockedBy blocks
bd config set linear.relation_map.duplicate duplicates
bd config set linear.relation_map.related related
```

**Sync commands:**

```bash
# Bidirectional sync (pull then push, with conflict resolution)
bd linear sync

# Pull only (import from Linear)
bd linear sync --pull

# Push only (export to Linear)
bd linear sync --push

# Dry run (preview without changes)
bd linear sync --dry-run

# Conflict resolution options
bd linear sync --prefer-local   # Local version wins on conflicts
bd linear sync --prefer-linear  # Linear version wins on conflicts
# Default: newer timestamp wins

# Check sync status
bd linear status
```

**Automatic sync tracking:**

The `linear.last_sync` config key is automatically updated after each sync, enabling incremental sync (only fetch issues updated since last sync).

### Example: GitHub Integration
examples/linear-workflow/README.md — new file, +529 lines

@@ -0,0 +1,529 @@
# Linear Integration for bd

Bidirectional synchronization between Linear and bd (beads) using the built-in `bd linear` commands.

## Overview

The Linear integration provides:

- **Pull**: Import issues from Linear into bd
- **Push**: Export bd issues to Linear
- **Bidirectional Sync**: Two-way sync with conflict resolution
- **Incremental Sync**: Only sync issues changed since last sync
- **Configurable Mappings**: Customize priority, state, label, and relation mappings

## Quick Start

### 1. Get Linear Credentials

1. **API Key**: Go to Linear → Settings → API → Personal API keys → Create key
2. **Team ID**: Go to Linear → Settings → General → find the Team ID (UUID format)

### 2. Configure bd

```bash
# Set API key (or use LINEAR_API_KEY environment variable)
bd config set linear.api_key "lin_api_YOUR_API_KEY_HERE"

# Set team ID
bd config set linear.team_id "YOUR_TEAM_UUID"
```

### 3. Sync with Linear

```bash
# Check configuration status
bd linear status

# Pull issues from Linear
bd linear sync --pull

# Push local issues to Linear
bd linear sync --push

# Full bidirectional sync (pull, resolve conflicts, push)
bd linear sync
```

## Authentication

### API Key

Linear uses Personal API Keys for authentication. Create one at:
**Linear → Settings → API → Personal API keys**

Store it securely:

```bash
# Option 1: bd config (stored in database)
bd config set linear.api_key "lin_api_..."

# Option 2: Environment variable
export LINEAR_API_KEY="lin_api_..."
```

### Team ID

Find your Team ID in Linear:

- **Settings → General** → Look for Team ID
- Or extract from URLs: `https://linear.app/YOUR_TEAM/...` → Go to team settings

## Sync Modes

### Pull Only (Linear → bd)

Import issues from Linear without pushing local changes:

```bash
bd linear sync --pull

# Filter by state
bd linear sync --pull --state open    # Only open issues
bd linear sync --pull --state closed  # Only closed issues
bd linear sync --pull --state all     # All issues (default)
```

### Push Only (bd → Linear)

Export local issues to Linear without pulling:

```bash
bd linear sync --push

# Create only (don't update existing Linear issues)
bd linear sync --push --create-only

# Disable automatic external_ref update
bd linear sync --push --update-refs=false
```

### Bidirectional Sync

Full two-way sync with conflict detection and resolution:

```bash
# Default: newer timestamp wins conflicts
bd linear sync

# Always prefer local version on conflicts
bd linear sync --prefer-local

# Always prefer Linear version on conflicts
bd linear sync --prefer-linear
```

### Dry Run

Preview what would happen without making changes:

```bash
bd linear sync --dry-run
```

## Data Mapping

### Priority Mapping

Linear and Beads use different priority semantics:

| Linear | Meaning | Beads | Meaning |
|--------|---------|-------|---------|
| 0 | No priority | 4 | Backlog |
| 1 | Urgent | 0 | Critical |
| 2 | High | 1 | High |
| 3 | Medium | 2 | Medium |
| 4 | Low | 3 | Low |

**Default mapping** (Linear → Beads):

- 0 (no priority) → 4 (backlog)
- 1 (urgent) → 0 (critical)
- 2 (high) → 1 (high)
- 3 (medium) → 2 (medium)
- 4 (low) → 3 (low)

**Custom mappings:**

```bash
# Override default mappings
bd config set linear.priority_map.0 2  # No priority -> Medium (instead of Backlog)
bd config set linear.priority_map.1 1  # Urgent -> High (instead of Critical)
```
### State Mapping

Map Linear workflow states to bd statuses:

| Linear State Type | Beads Status |
|-------------------|--------------|
| backlog | open |
| unstarted | open |
| started | in_progress |
| completed | closed |
| canceled | closed |

**Custom state mappings** (for custom workflow states):

```bash
# Map by state type
bd config set linear.state_map.started in_progress

# Map by state name (for custom workflow states)
bd config set linear.state_map.in_review in_progress
bd config set linear.state_map.blocked blocked
bd config set linear.state_map.on_hold blocked
bd config set linear.state_map.testing in_progress
bd config set linear.state_map.deployed closed
```
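The two-level lookup described above — custom state name first, then state type, then the defaults — can be sketched as follows. The helper name, the fallback status, and the name normalization (lowercase, spaces to underscores) are assumptions for illustration, not the actual internal/linear implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// defaultStateTypeMap mirrors the documented defaults for Linear state types.
var defaultStateTypeMap = map[string]string{
	"backlog":   "open",
	"unstarted": "open",
	"started":   "in_progress",
	"completed": "closed",
	"canceled":  "closed",
}

// mapLinearState resolves a Beads status for a Linear workflow state.
// Custom linear.state_map.* entries are checked by normalized state
// name first, then by state type, then the built-in defaults apply.
func mapLinearState(stateName, stateType string, custom map[string]string) string {
	// Assumed normalization: lowercase with spaces replaced by underscores,
	// matching config keys like linear.state_map.in_review.
	key := strings.ReplaceAll(strings.ToLower(stateName), " ", "_")
	if s, ok := custom[key]; ok {
		return s
	}
	if s, ok := custom[stateType]; ok {
		return s
	}
	if s, ok := defaultStateTypeMap[stateType]; ok {
		return s
	}
	return "open" // assumed fallback for unrecognized states
}

func main() {
	custom := map[string]string{"in_review": "in_progress"}
	fmt.Println(mapLinearState("In Review", "started", custom)) // name match wins
	fmt.Println(mapLinearState("Done", "completed", custom))    // falls through to type default
}
```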
### Label to Issue Type

Infer bd issue type from Linear labels:

| Linear Label | Beads Type |
|--------------|------------|
| bug, defect | bug |
| feature, enhancement | feature |
| epic | epic |
| chore, maintenance | chore |
| task | task |

**Custom label mappings:**

```bash
bd config set linear.label_type_map.incident bug
bd config set linear.label_type_map.improvement feature
bd config set linear.label_type_map.tech_debt chore
bd config set linear.label_type_map.story feature
```
### Relation Mapping

Map Linear relations to bd dependencies:

| Linear Relation | Beads Dependency |
|-----------------|------------------|
| blocks | blocks |
| blockedBy | blocks (inverted) |
| duplicate | duplicates |
| related | related |
| (parent) | parent-child |

**Custom relation mappings:**

```bash
bd config set linear.relation_map.causes discovered-from
bd config set linear.relation_map.duplicate related
```
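The `blockedBy` row above is the subtle one: it maps to the same `blocks` dependency type but with the endpoints swapped, so both Linear relation directions are stored as one edge shape. A minimal sketch of that inversion — the type and function names here are illustrative, not bd's actual internals:

```go
package main

import "fmt"

// Dependency is a simplified stand-in for a Beads dependency edge.
type Dependency struct {
	From, To, Type string
}

// mapRelation converts a Linear issue relation into a dependency edge.
// For blockedBy, the direction is inverted so it is stored as a plain
// "blocks" edge from the other issue to this one. Unmapped relation
// types are skipped.
func mapRelation(issueID, otherID, linearType string, relationMap map[string]string) (Dependency, bool) {
	depType, ok := relationMap[linearType]
	if !ok {
		return Dependency{}, false
	}
	if linearType == "blockedBy" {
		return Dependency{From: otherID, To: issueID, Type: depType}, true
	}
	return Dependency{From: issueID, To: otherID, Type: depType}, true
}

func main() {
	m := map[string]string{"blocks": "blocks", "blockedBy": "blocks"}
	d, _ := mapRelation("bd-1", "bd-2", "blockedBy", m)
	fmt.Printf("%s %s %s\n", d.From, d.Type, d.To) // bd-2 blocks bd-1
}
```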
## Conflict Resolution

Conflicts occur when both the local and Linear versions have been modified since the last sync.

### Timestamp-based (Default)

The newer version wins:

```bash
bd linear sync  # Newer timestamp wins
```
### Prefer Local

Local bd version always wins:

```bash
bd linear sync --prefer-local
```

Use when:

- Local is your source of truth
- You've made deliberate changes locally

### Prefer Linear

Linear version always wins:

```bash
bd linear sync --prefer-linear
```

Use when:

- Linear is your source of truth
- You want to accept team changes

## Workflows

### Workflow 1: Initial Import from Linear

First-time import of existing Linear issues:

```bash
# Configure credentials
bd config set linear.api_key "lin_api_..."
bd config set linear.team_id "team-uuid"

# Check status
bd linear status

# Import all issues
bd linear sync --pull

# See what was imported
bd stats
bd list --json
```

### Workflow 2: Daily Sync

Regular synchronization:

```bash
# Pull latest from Linear (incremental since last sync)
bd linear sync --pull

# Do local work
bd update bd-123 --status in_progress
# ... work ...
bd close bd-123 --reason "Fixed"

# Push changes to Linear
bd linear sync --push

# Or do full bidirectional sync
bd linear sync
```

### Workflow 3: Create Local Issues, Push to Linear

Create issues locally and sync to Linear:

```bash
# Create issue locally
bd create "Fix authentication bug" -t bug -p 1

# Push to Linear (creates new Linear issue, updates external_ref)
bd linear sync --push

# Verify
bd show bd-abc  # Should have external_ref pointing to Linear
```

### Workflow 4: Migrate to bd

Full migration from Linear to bd:

```bash
# Import all issues
bd linear sync --pull --state all

# Review the import
bd stats

# Continue using bd locally, push updates back to Linear
bd linear sync  # Regular bidirectional sync
```

### Workflow 5: Read-Only Linear Mirror

Mirror Linear issues locally without pushing back:

```bash
# Only ever pull, never push
bd linear sync --pull

# Set up a cron job or alias
alias bd-mirror="bd linear sync --pull"
```

## Status & Debugging

### Check Sync Status

```bash
bd linear status
```

Shows:

- Configuration status (API key, team ID)
- Last sync timestamp
- Issues with Linear links
- Issues pending push (local only)

### JSON Output

```bash
bd linear status --json
bd linear sync --json
```

### Verbose Output

The sync command shows progress:

- Number of issues pulled/pushed
- Conflicts detected and resolved
- Errors and warnings

## Configuration Reference

All configuration keys for Linear integration:

```bash
# Required
linear.api_key   # Linear API key (or LINEAR_API_KEY env var)
linear.team_id   # Linear team UUID

# Automatic (set by bd)
linear.last_sync # ISO8601 timestamp of last sync

# ID generation (optional)
linear.id_mode     # hash (default) or db (let bd generate IDs)
linear.hash_length # Hash length 3-8 (default: 6)

# Priority mapping (Linear 0-4 to Beads 0-4)
linear.priority_map.0 # No priority -> ? (default: 4/backlog)
linear.priority_map.1 # Urgent -> ?      (default: 0/critical)
linear.priority_map.2 # High -> ?        (default: 1/high)
linear.priority_map.3 # Medium -> ?      (default: 2/medium)
linear.priority_map.4 # Low -> ?         (default: 3/low)

# State mapping (Linear state type/name to Beads status)
linear.state_map.backlog   # (default: open)
linear.state_map.unstarted # (default: open)
linear.state_map.started   # (default: in_progress)
linear.state_map.completed # (default: closed)
linear.state_map.canceled  # (default: closed)
linear.state_map.<custom>  # Map custom state names

# Label to issue type mapping
linear.label_type_map.bug         # (default: bug)
linear.label_type_map.defect      # (default: bug)
linear.label_type_map.feature     # (default: feature)
linear.label_type_map.enhancement # (default: feature)
linear.label_type_map.epic        # (default: epic)
linear.label_type_map.chore       # (default: chore)
linear.label_type_map.maintenance # (default: chore)
linear.label_type_map.task        # (default: task)
linear.label_type_map.<custom>    # Map custom labels

# Relation mapping (Linear relation type to Beads dependency type)
linear.relation_map.blocks    # (default: blocks)
linear.relation_map.blockedBy # (default: blocks)
linear.relation_map.duplicate # (default: duplicates)
linear.relation_map.related   # (default: related)
```

## Troubleshooting

### "Linear API key not configured"

Set the API key:

```bash
bd config set linear.api_key "lin_api_YOUR_KEY"
# Or
export LINEAR_API_KEY="lin_api_YOUR_KEY"
```

### "Linear team ID not configured"

Set the team ID:

```bash
bd config set linear.team_id "YOUR_TEAM_UUID"
```

### "GraphQL errors: Not authorized"

- Verify your API key is correct
- Check that the API key has access to the team
- Ensure the key hasn't been revoked

### "Rate limited"

Linear has API rate limits. The client automatically retries with exponential backoff:

- 3 retries with increasing delays
- If still failing, wait and retry later

### "Conflict detection failed"

- Check network connectivity
- Verify API key permissions
- Check `bd linear status` for configuration issues

### Sync seems slow

For large projects, the initial sync fetches all issues. Subsequent syncs are incremental (only issues changed since `linear.last_sync`).
## Limitations

- **Single team**: Sync is configured per-team (one team_id per bd project)
- **No attachments**: Attachments are not synced
- **No comments**: Comments are not synced (only description)
- **Custom fields**: Linear custom fields are not mapped
- **Projects**: Linear projects are not mapped (use labels for categorization)
- **Cycles**: Linear cycles/sprints are not mapped

## See Also

- [CONFIG.md](../../docs/CONFIG.md) - Full configuration documentation
- [Jira Import Example](../jira-import/) - Similar integration for Jira
- [Linear GraphQL API](https://developers.linear.app/docs/graphql/working-with-the-graphql-api)

---

## Example Session

```bash
# Initial setup
$ bd init --quiet
$ bd config set linear.api_key "lin_api_abc123..."
$ bd config set linear.team_id "team-uuid-456"

# Check status
$ bd linear status
Linear Sync Status
==================

Team ID:   team-uuid-456
API Key:   lin_...c123
Last Sync: Never

Total Issues: 0
With Linear:  0
Local Only:   0

# Pull from Linear
$ bd linear sync --pull
→ Pulling issues from Linear...
  Full sync (no previous sync timestamp)
  ✓ Pulled 47 issues (47 created, 0 updated)

✓ Linear sync complete

# Check what we got
$ bd stats
Issues: 47 (42 open, 5 closed)
Types: 23 task, 15 bug, 7 feature, 2 epic

# Create a local issue
$ bd create "New bug from testing" -t bug -p 1
Created: bd-a1b2c3

# Push to Linear
$ bd linear sync --push
→ Pushing issues to Linear...
  Created: bd-a1b2c3 -> TEAM-148
  ✓ Pushed 1 issues (1 created, 0 updated)

✓ Linear sync complete

# Full bidirectional sync
$ bd linear sync
→ Pulling issues from Linear...
  Incremental sync since 2025-01-17 10:30:00
  ✓ Pulled 3 issues (0 created, 3 updated)
→ Pushing issues to Linear...
  ✓ Pushed 2 issues (0 created, 2 updated)

✓ Linear sync complete
```
internal/idgen/hash.go — new file, +85 lines

@@ -0,0 +1,85 @@
package idgen

import (
	"crypto/sha256"
	"fmt"
	"math/big"
	"strings"
	"time"
)

// base36Alphabet is the character set for base36 encoding (0-9, a-z).
const base36Alphabet = "0123456789abcdefghijklmnopqrstuvwxyz"

// EncodeBase36 converts a byte slice to a base36 string of specified length.
// Matches the algorithm used for bd hash IDs.
func EncodeBase36(data []byte, length int) string {
	// Convert bytes to big integer
	num := new(big.Int).SetBytes(data)

	// Convert to base36
	var result strings.Builder
	base := big.NewInt(36)
	zero := big.NewInt(0)
	mod := new(big.Int)

	// Build the string in reverse
	chars := make([]byte, 0, length)
	for num.Cmp(zero) > 0 {
		num.DivMod(num, base, mod)
		chars = append(chars, base36Alphabet[mod.Int64()])
	}

	// Reverse the string
	for i := len(chars) - 1; i >= 0; i-- {
		result.WriteByte(chars[i])
	}

	// Pad with zeros if needed
	str := result.String()
	if len(str) < length {
		str = strings.Repeat("0", length-len(str)) + str
	}

	// Truncate to exact length if needed (keep least significant digits)
	if len(str) > length {
		str = str[len(str)-length:]
	}

	return str
}

// GenerateHashID creates a hash-based ID for an issue.
// Uses base36 encoding (0-9, a-z) for better information density than hex.
// The length parameter is expected to be 3-8; other values fall back to a 3-char byte width.
func GenerateHashID(prefix, title, description, creator string, timestamp time.Time, length, nonce int) string {
	// Combine inputs into a stable content string
	// Include nonce to handle hash collisions
	content := fmt.Sprintf("%s|%s|%s|%d|%d", title, description, creator, timestamp.UnixNano(), nonce)

	// Hash the content
	hash := sha256.Sum256([]byte(content))

	// Determine how many bytes to use based on desired output length
	var numBytes int
	switch length {
	case 3:
		numBytes = 2 // 2 bytes = 16 bits ≈ 3.09 base36 chars
	case 4:
		numBytes = 3 // 3 bytes = 24 bits ≈ 4.63 base36 chars
	case 5:
		numBytes = 4 // 4 bytes = 32 bits ≈ 6.18 base36 chars
	case 6:
		numBytes = 4 // 4 bytes = 32 bits ≈ 6.18 base36 chars
	case 7:
		numBytes = 5 // 5 bytes = 40 bits ≈ 7.73 base36 chars
	case 8:
		numBytes = 5 // 5 bytes = 40 bits ≈ 7.73 base36 chars
	default:
		numBytes = 3 // default to 3 chars
	}

	shortHash := EncodeBase36(hash[:numBytes], length)

	return fmt.Sprintf("%s-%s", prefix, shortHash)
}
internal/idgen/hash_test.go — new file, +30 lines

@@ -0,0 +1,30 @@
package idgen

import (
	"testing"
	"time"
)

func TestGenerateHashIDMatchesJiraVector(t *testing.T) {
	timestamp := time.Date(2024, 1, 2, 3, 4, 5, 6*1_000_000, time.UTC)
	prefix := "bd"
	title := "Fix login"
	description := "Details"
	creator := "jira-import"

	tests := map[int]string{
		3: "bd-ryl",
		4: "bd-itxc",
		5: "bd-9wt4w",
		6: "bd-39wt4w",
		7: "bd-rahb6w2",
		8: "bd-7rahb6w2",
	}

	for length, expected := range tests {
		got := GenerateHashID(prefix, title, description, creator, timestamp, length, 0)
		if got != expected {
			t.Fatalf("length %d: got %s, want %s", length, got, expected)
		}
	}
}
@@ -7,6 +7,7 @@ import (
|
|||||||
"sort"
|
"sort"
|
||||||
"strings"
|
"strings"
|
||||||
|
|
||||||
|
"github.com/steveyegge/beads/internal/linear"
|
||||||
"github.com/steveyegge/beads/internal/storage"
|
"github.com/steveyegge/beads/internal/storage"
|
||||||
"github.com/steveyegge/beads/internal/storage/sqlite"
|
"github.com/steveyegge/beads/internal/storage/sqlite"
|
||||||
"github.com/steveyegge/beads/internal/types"
|
"github.com/steveyegge/beads/internal/types"
|
||||||
@@ -29,12 +30,12 @@ const (

// Options contains import configuration
type Options struct {
	DryRun                     bool            // Preview changes without applying them
	SkipUpdate                 bool            // Skip updating existing issues (create-only mode)
	Strict                     bool            // Fail on any error (dependencies, labels, etc.)
	RenameOnImport             bool            // Rename imported issues to match database prefix
	SkipPrefixValidation       bool            // Skip prefix validation (for auto-import)
	OrphanHandling             OrphanHandling  // How to handle missing parent issues (default: allow)
	ClearDuplicateExternalRefs bool            // Clear duplicate external_ref values instead of erroring
	ProtectLocalExportIDs      map[string]bool // IDs from left snapshot to protect from deletion (bd-sync-deletion fix)
}
@@ -78,6 +79,18 @@ func ImportIssues(ctx context.Context, dbPath string, store storage.Storage, iss
		MismatchPrefixes: make(map[string]int),
	}

	// Normalize Linear external_refs to canonical form to avoid slug-based duplicates.
	for _, issue := range issues {
		if issue.ExternalRef == nil || *issue.ExternalRef == "" {
			continue
		}
		if linear.IsLinearExternalRef(*issue.ExternalRef) {
			if canonical, ok := linear.CanonicalizeLinearExternalRef(*issue.ExternalRef); ok {
				issue.ExternalRef = &canonical
			}
		}
	}

	// Compute content hashes for all incoming issues (bd-95)
	// Always recompute to avoid stale/incorrect JSONL hashes (bd-1231)
	for _, issue := range issues {
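The hunk above canonicalizes Linear external_refs before import so that two refs pointing at the same issue compare equal. The exact rules of `IsLinearExternalRef`/`CanonicalizeLinearExternalRef` are defined in `internal/linear` and not shown in this diff; one plausible sketch — assuming the canonical form is the issue URL with the mutable title slug stripped — looks like:

```go
package main

import (
	"fmt"
	"regexp"
)

// linearIssueURL matches Linear issue URLs like
// https://linear.app/<workspace>/issue/TEAM-123/optional-title-slug
// (assumed shape; the real package may accept more variants).
var linearIssueURL = regexp.MustCompile(`^(https://linear\.app/[^/]+/issue/[A-Za-z0-9]+-\d+)(/.*)?$`)

// canonicalize strips the trailing title slug so that two refs to the
// same issue compare equal even after the issue is renamed in Linear.
func canonicalize(ref string) (string, bool) {
	m := linearIssueURL.FindStringSubmatch(ref)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	a, _ := canonicalize("https://linear.app/acme/issue/ENG-42/fix-login")
	b, _ := canonicalize("https://linear.app/acme/issue/ENG-42/fix-login-v2")
	fmt.Println(a == b, a)
}
```

Renaming an issue changes its URL slug but not its identifier, which is why slug-insensitive canonicalization prevents duplicate imports.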
@@ -92,7 +105,7 @@ func ImportIssues(ctx context.Context, dbPath string, store storage.Storage, iss
	if needCloseStore {
		defer func() { _ = sqliteStore.Close() }()
	}

	// Clear export_hashes before import to prevent staleness (bd-160)
	// Import operations may add/update issues, so export_hashes entries become invalid
	if !opts.DryRun {
@@ -100,7 +113,7 @@ func ImportIssues(ctx context.Context, dbPath string, store storage.Storage, iss
			fmt.Fprintf(os.Stderr, "Warning: failed to clear export_hashes before import: %v\n", err)
		}
	}

	// Read orphan handling from config if not explicitly set
	if opts.OrphanHandling == "" {
		opts.OrphanHandling = sqliteStore.GetOrphanHandling(ctx)
@@ -358,7 +371,7 @@ func handleRename(ctx context.Context, s *sqlite.SQLiteStorage, existing *types.
		}
	}
	return "", nil

	/* OLD CODE REMOVED (bd-8e05)
	// Different content - this is a collision during rename
	// Allocate a new ID for the incoming issue instead of using the desired ID
@@ -366,9 +379,9 @@ func handleRename(ctx context.Context, s *sqlite.SQLiteStorage, existing *types.
	if err != nil || prefix == "" {
		prefix = "bd"
	}

	oldID := existing.ID

	// Retry up to 3 times to handle concurrent ID allocation
	const maxRetries = 3
	for attempt := 0; attempt < maxRetries; attempt++ {
@@ -376,42 +389,42 @@ func handleRename(ctx context.Context, s *sqlite.SQLiteStorage, existing *types.
		if err != nil {
			return "", fmt.Errorf("failed to generate new ID for rename collision: %w", err)
		}

		// Update incoming issue to use the new ID
		incoming.ID = newID

		// Delete old ID (only on first attempt)
		if attempt == 0 {
			if err := s.DeleteIssue(ctx, oldID); err != nil {
				return "", fmt.Errorf("failed to delete old ID %s: %w", oldID, err)
			}
		}

		// Create with new ID
		err = s.CreateIssue(ctx, incoming, "import-rename-collision")
		if err == nil {
			// Success!
			return oldID, nil
		}

		// Check if it's a UNIQUE constraint error
		if !sqlite.IsUniqueConstraintError(err) {
			// Not a UNIQUE constraint error, fail immediately
			return "", fmt.Errorf("failed to create renamed issue with collision resolution %s: %w", newID, err)
		}

		// UNIQUE constraint error - retry with new ID
		if attempt == maxRetries-1 {
			// Last attempt failed
			return "", fmt.Errorf("failed to create renamed issue with collision resolution after %d retries: %w", maxRetries, err)
		}
	}

	// Note: We don't update text references here because it would be too expensive
	// to scan all issues during every import. Text references to the old ID will
	// eventually be cleaned up by manual reference updates or remain as stale.
	// This is acceptable because the old ID no longer exists in the system.

	return oldID, nil
	*/
}
@@ -462,15 +475,20 @@ func upsertIssues(ctx context.Context, sqliteStore *sqlite.SQLiteStorage, issues
	if err != nil {
		return fmt.Errorf("failed to get DB issues: %w", err)
	}

	dbByHash := buildHashMap(dbIssues)
	dbByID := buildIDMap(dbIssues)

	// Build external_ref map for O(1) lookup
	dbByExternalRef := make(map[string]*types.Issue)
	for _, issue := range dbIssues {
		if issue.ExternalRef != nil && *issue.ExternalRef != "" {
			dbByExternalRef[*issue.ExternalRef] = issue
			if linear.IsLinearExternalRef(*issue.ExternalRef) {
				if canonical, ok := linear.CanonicalizeLinearExternalRef(*issue.ExternalRef); ok {
					dbByExternalRef[canonical] = issue
				}
			}
		}
	}
@@ -524,7 +542,7 @@ func upsertIssues(ctx context.Context, sqliteStore *sqlite.SQLiteStorage, issues
			result.Unchanged++
			continue
		}

		// Build updates map
		updates := make(map[string]interface{})
		updates["title"] = incoming.Title
@@ -536,19 +554,19 @@ func upsertIssues(ctx context.Context, sqliteStore *sqlite.SQLiteStorage, issues
		updates["acceptance_criteria"] = incoming.AcceptanceCriteria
		updates["notes"] = incoming.Notes
		updates["closed_at"] = incoming.ClosedAt

		if incoming.Assignee != "" {
			updates["assignee"] = incoming.Assignee
		} else {
			updates["assignee"] = nil
		}

		if incoming.ExternalRef != nil && *incoming.ExternalRef != "" {
			updates["external_ref"] = *incoming.ExternalRef
		} else {
			updates["external_ref"] = nil
		}

		// Only update if data actually changed
		if IssueDataChanged(existing, updates) {
			if err := sqliteStore.UpdateIssue(ctx, existing.ID, updates, "import"); err != nil {
@@ -617,7 +635,7 @@ func upsertIssues(ctx context.Context, sqliteStore *sqlite.SQLiteStorage, issues
			result.Unchanged++
			continue
		}

		// Build updates map
		updates := make(map[string]interface{})
		updates["title"] = incoming.Title
@@ -628,19 +646,19 @@ func upsertIssues(ctx context.Context, sqliteStore *sqlite.SQLiteStorage, issues
		updates["design"] = incoming.Design
		updates["acceptance_criteria"] = incoming.AcceptanceCriteria
		updates["notes"] = incoming.Notes
		updates["closed_at"] = incoming.ClosedAt

		if incoming.Assignee != "" {
			updates["assignee"] = incoming.Assignee
		} else {
			updates["assignee"] = nil
		}

		if incoming.ExternalRef != nil && *incoming.ExternalRef != "" {
			updates["external_ref"] = *incoming.ExternalRef
		} else {
			updates["external_ref"] = nil
		}

		// Only update if data actually changed
		if IssueDataChanged(existingWithID, updates) {
@@ -660,62 +678,62 @@ func upsertIssues(ctx context.Context, sqliteStore *sqlite.SQLiteStorage, issues
		}
	}

	// Filter out orphaned issues if orphan_handling is set to skip (bd-ckej)
	// Pre-filter before batch creation to prevent orphans from being created then ID-cleared
	if opts.OrphanHandling == sqlite.OrphanSkip {
		var filteredNewIssues []*types.Issue
		for _, issue := range newIssues {
			// Check if this is a hierarchical child whose parent doesn't exist
			if strings.Contains(issue.ID, ".") {
				lastDot := strings.LastIndex(issue.ID, ".")
				parentID := issue.ID[:lastDot]

				// Check if parent exists in either existing DB issues or in newIssues batch
				var parentExists bool
				for _, dbIssue := range dbIssues {
					if dbIssue.ID == parentID {
						parentExists = true
						break
					}
				}
				if !parentExists {
					for _, newIssue := range newIssues {
						if newIssue.ID == parentID {
							parentExists = true
							break
						}
					}
				}

				if !parentExists {
					// Skip this orphaned issue
					result.Skipped++
					continue
				}
			}
			filteredNewIssues = append(filteredNewIssues, issue)
		}
		newIssues = filteredNewIssues
	}

	// Batch create all new issues
	// Sort by hierarchy depth to ensure parents are created before children
	if len(newIssues) > 0 {
		sort.Slice(newIssues, func(i, j int) bool {
			depthI := strings.Count(newIssues[i].ID, ".")
			depthJ := strings.Count(newIssues[j].ID, ".")
			if depthI != depthJ {
				return depthI < depthJ // Shallower first
			}
			return newIssues[i].ID < newIssues[j].ID // Stable sort
		})

		// Create in batches by depth level (max depth 3)
		for depth := 0; depth <= 3; depth++ {
			var batchForDepth []*types.Issue
			for _, issue := range newIssues {
				if strings.Count(issue.ID, ".") == depth {
					batchForDepth = append(batchForDepth, issue)
				}
			}
			if len(batchForDepth) > 0 {
@@ -876,7 +894,7 @@ func GetPrefixList(prefixes map[string]int) []string {

func validateNoDuplicateExternalRefs(issues []*types.Issue, clearDuplicates bool, result *Result) error {
	seen := make(map[string][]string)

	for _, issue := range issues {
		if issue.ExternalRef != nil && *issue.ExternalRef != "" {
			ref := *issue.ExternalRef
@@ -910,7 +928,7 @@ func validateNoDuplicateExternalRefs(issues []*types.Issue, clearDuplicates bool
		}
		return nil
	}

	sort.Strings(duplicates)
	return fmt.Errorf("batch import contains duplicate external_ref values:\n%s\n\nUse --clear-duplicate-external-refs to automatically clear duplicates", strings.Join(duplicates, "\n"))
}
internal/linear/client.go  (new file, 669 lines)
@@ -0,0 +1,669 @@
package linear

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strconv"
	"strings"
	"time"

	"github.com/steveyegge/beads/internal/types"
)

// IssuesQuery is the GraphQL query for fetching issues with all required fields.
// Used by both FetchIssues and FetchIssuesSince for consistency.
const IssuesQuery = `
query Issues($filter: IssueFilter!, $first: Int!, $after: String) {
  issues(
    first: $first
    after: $after
    filter: $filter
  ) {
    nodes {
      id
      identifier
      title
      description
      url
      priority
      state {
        id
        name
        type
      }
      assignee {
        id
        name
        email
        displayName
      }
      labels {
        nodes {
          id
          name
        }
      }
      parent {
        id
        identifier
      }
      relations {
        nodes {
          id
          type
          relatedIssue {
            id
            identifier
          }
        }
      }
      createdAt
      updatedAt
      completedAt
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
`

// NewClient creates a new Linear client with the given API key and team ID.
func NewClient(apiKey, teamID string) *Client {
	return &Client{
		APIKey:   apiKey,
		TeamID:   teamID,
		Endpoint: DefaultAPIEndpoint,
		HTTPClient: &http.Client{
			Timeout: DefaultTimeout,
		},
	}
}

// WithEndpoint returns a new client configured to use the specified endpoint.
// This is useful for testing with mock servers or connecting to self-hosted instances.
func (c *Client) WithEndpoint(endpoint string) *Client {
	return &Client{
		APIKey:     c.APIKey,
		TeamID:     c.TeamID,
		Endpoint:   endpoint,
		HTTPClient: c.HTTPClient,
	}
}

// WithHTTPClient returns a new client configured to use the specified HTTP client.
// This is useful for testing or customizing timeouts and transport settings.
func (c *Client) WithHTTPClient(httpClient *http.Client) *Client {
	return &Client{
		APIKey:     c.APIKey,
		TeamID:     c.TeamID,
		Endpoint:   c.Endpoint,
		HTTPClient: httpClient,
	}
}

// Execute sends a GraphQL request to the Linear API.
// Handles rate limiting with exponential backoff.
func (c *Client) Execute(ctx context.Context, req *GraphQLRequest) (json.RawMessage, error) {
	body, err := json.Marshal(req)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal request: %w", err)
	}

	var lastErr error
	for attempt := 0; attempt <= MaxRetries; attempt++ {
		httpReq, err := http.NewRequestWithContext(ctx, "POST", c.Endpoint, bytes.NewReader(body))
		if err != nil {
			return nil, fmt.Errorf("failed to create request: %w", err)
		}

		httpReq.Header.Set("Content-Type", "application/json")
		httpReq.Header.Set("Authorization", c.APIKey)

		resp, err := c.HTTPClient.Do(httpReq)
		if err != nil {
			lastErr = fmt.Errorf("request failed (attempt %d/%d): %w", attempt+1, MaxRetries+1, err)
			continue
		}

		respBody, err := io.ReadAll(resp.Body)
		_ = resp.Body.Close()
		if err != nil {
			lastErr = fmt.Errorf("failed to read response (attempt %d/%d): %w", attempt+1, MaxRetries+1, err)
			continue
		}

		if resp.StatusCode == http.StatusTooManyRequests {
			delay := RetryDelay * time.Duration(1<<attempt) // Exponential backoff
			lastErr = fmt.Errorf("rate limited (attempt %d/%d), retrying after %v", attempt+1, MaxRetries+1, delay)
			select {
			case <-ctx.Done():
				return nil, ctx.Err()
			case <-time.After(delay):
				continue
			}
		}

		if resp.StatusCode < 200 || resp.StatusCode >= 300 {
			return nil, fmt.Errorf("API error: %s (status %d)", string(respBody), resp.StatusCode)
		}

		var gqlResp struct {
			Data   json.RawMessage `json:"data"`
			Errors []GraphQLError  `json:"errors,omitempty"`
		}
		if err := json.Unmarshal(respBody, &gqlResp); err != nil {
			return nil, fmt.Errorf("failed to parse response: %w (body: %s)", err, string(respBody))
		}

		if len(gqlResp.Errors) > 0 {
			errMsgs := make([]string, len(gqlResp.Errors))
			for i, e := range gqlResp.Errors {
				errMsgs[i] = e.Message
			}
			return nil, fmt.Errorf("GraphQL errors: %s", strings.Join(errMsgs, "; "))
		}

		return gqlResp.Data, nil
	}

	return nil, fmt.Errorf("max retries (%d) exceeded: %w", MaxRetries+1, lastErr)
}
// FetchIssues retrieves issues from Linear with optional filtering by state.
// state can be: "open" (unstarted/started), "closed" (completed/canceled), or "all".
func (c *Client) FetchIssues(ctx context.Context, state string) ([]Issue, error) {
	var allIssues []Issue
	var cursor string

	filter := map[string]interface{}{
		"team": map[string]interface{}{
			"id": map[string]interface{}{
				"eq": c.TeamID,
			},
		},
	}
	switch state {
	case "open":
		filter["state"] = map[string]interface{}{
			"type": map[string]interface{}{
				"in": []string{"backlog", "unstarted", "started"},
			},
		}
	case "closed":
		filter["state"] = map[string]interface{}{
			"type": map[string]interface{}{
				"in": []string{"completed", "canceled"},
			},
		}
	}

	for {
		variables := map[string]interface{}{
			"filter": filter,
			"first":  MaxPageSize,
		}
		if cursor != "" {
			variables["after"] = cursor
		}

		req := &GraphQLRequest{
			Query:     IssuesQuery,
			Variables: variables,
		}

		data, err := c.Execute(ctx, req)
		if err != nil {
			return nil, fmt.Errorf("failed to fetch issues: %w", err)
		}

		var issuesResp IssuesResponse
		if err := json.Unmarshal(data, &issuesResp); err != nil {
			return nil, fmt.Errorf("failed to parse issues response: %w", err)
		}

		allIssues = append(allIssues, issuesResp.Issues.Nodes...)

		if !issuesResp.Issues.PageInfo.HasNextPage {
			break
		}
		cursor = issuesResp.Issues.PageInfo.EndCursor
	}

	return allIssues, nil
}
// FetchIssuesSince retrieves issues from Linear that have been updated since the given time.
// This enables incremental sync by only fetching issues modified after the last sync.
// The state parameter can be: "open", "closed", or "all".
func (c *Client) FetchIssuesSince(ctx context.Context, state string, since time.Time) ([]Issue, error) {
	var allIssues []Issue
	var cursor string

	// Build the filter with team and updatedAt constraint.
	// Linear uses ISO8601 format for date comparisons.
	sinceStr := since.UTC().Format(time.RFC3339)
	filter := map[string]interface{}{
		"team": map[string]interface{}{
			"id": map[string]interface{}{
				"eq": c.TeamID,
			},
		},
		"updatedAt": map[string]interface{}{
			"gte": sinceStr,
		},
	}

	// Add state filter if specified
	switch state {
	case "open":
		filter["state"] = map[string]interface{}{
			"type": map[string]interface{}{
				"in": []string{"backlog", "unstarted", "started"},
			},
		}
	case "closed":
		filter["state"] = map[string]interface{}{
			"type": map[string]interface{}{
				"in": []string{"completed", "canceled"},
			},
		}
	}

	for {
		variables := map[string]interface{}{
			"filter": filter,
			"first":  MaxPageSize,
		}
		if cursor != "" {
			variables["after"] = cursor
		}

		req := &GraphQLRequest{
			Query:     IssuesQuery,
			Variables: variables,
		}

		data, err := c.Execute(ctx, req)
		if err != nil {
			return nil, fmt.Errorf("failed to fetch issues since %s: %w", sinceStr, err)
		}

		var issuesResp IssuesResponse
		if err := json.Unmarshal(data, &issuesResp); err != nil {
			return nil, fmt.Errorf("failed to parse issues response: %w", err)
		}

		allIssues = append(allIssues, issuesResp.Issues.Nodes...)

		if !issuesResp.Issues.PageInfo.HasNextPage {
			break
		}
		cursor = issuesResp.Issues.PageInfo.EndCursor
	}

	return allIssues, nil
}
// GetTeamStates fetches the workflow states for the configured team.
func (c *Client) GetTeamStates(ctx context.Context) ([]State, error) {
	query := `
query TeamStates($teamId: String!) {
  team(id: $teamId) {
    id
    states {
      nodes {
        id
        name
        type
      }
    }
  }
}
`

	req := &GraphQLRequest{
		Query: query,
		Variables: map[string]interface{}{
			"teamId": c.TeamID,
		},
	}

	data, err := c.Execute(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("failed to fetch team states: %w", err)
	}

	var teamResp TeamResponse
	if err := json.Unmarshal(data, &teamResp); err != nil {
		return nil, fmt.Errorf("failed to parse team states response: %w", err)
	}

	if teamResp.Team.States == nil {
		return nil, fmt.Errorf("no states found for team")
	}

	return teamResp.Team.States.Nodes, nil
}

// CreateIssue creates a new issue in Linear.
func (c *Client) CreateIssue(ctx context.Context, title, description string, priority int, stateID string, labelIDs []string) (*Issue, error) {
	query := `
mutation CreateIssue($input: IssueCreateInput!) {
  issueCreate(input: $input) {
    success
    issue {
      id
      identifier
      title
      description
      url
      priority
      state {
        id
        name
        type
      }
      createdAt
      updatedAt
    }
  }
}
`

	input := map[string]interface{}{
		"teamId":      c.TeamID,
		"title":       title,
		"description": description,
	}

	if priority > 0 {
		input["priority"] = priority
	}

	if stateID != "" {
		input["stateId"] = stateID
	}

	if len(labelIDs) > 0 {
		input["labelIds"] = labelIDs
	}

	req := &GraphQLRequest{
		Query: query,
		Variables: map[string]interface{}{
			"input": input,
		},
	}

	data, err := c.Execute(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("failed to create issue: %w", err)
	}

	var createResp IssueCreateResponse
	if err := json.Unmarshal(data, &createResp); err != nil {
		return nil, fmt.Errorf("failed to parse create response: %w", err)
	}

	if !createResp.IssueCreate.Success {
		return nil, fmt.Errorf("issue creation reported as unsuccessful")
	}

	return &createResp.IssueCreate.Issue, nil
}

// UpdateIssue updates an existing issue in Linear.
func (c *Client) UpdateIssue(ctx context.Context, issueID string, updates map[string]interface{}) (*Issue, error) {
	query := `
mutation UpdateIssue($id: String!, $input: IssueUpdateInput!) {
  issueUpdate(id: $id, input: $input) {
    success
    issue {
      id
      identifier
      title
      description
      url
      priority
      state {
        id
        name
        type
      }
      updatedAt
    }
  }
}
`

	req := &GraphQLRequest{
		Query: query,
		Variables: map[string]interface{}{
			"id":    issueID,
			"input": updates,
		},
	}

	data, err := c.Execute(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("failed to update issue: %w", err)
	}

	var updateResp IssueUpdateResponse
	if err := json.Unmarshal(data, &updateResp); err != nil {
		return nil, fmt.Errorf("failed to parse update response: %w", err)
	}

	if !updateResp.IssueUpdate.Success {
		return nil, fmt.Errorf("issue update reported as unsuccessful")
	}

	return &updateResp.IssueUpdate.Issue, nil
}

// FetchIssueByIdentifier retrieves a single issue from Linear by its identifier (e.g., "TEAM-123").
// Returns nil if the issue is not found.
func (c *Client) FetchIssueByIdentifier(ctx context.Context, identifier string) (*Issue, error) {
	query := `
query IssueByIdentifier($filter: IssueFilter!) {
  issues(filter: $filter, first: 1) {
    nodes {
      id
      identifier
      title
      description
      url
      priority
      state {
        id
        name
        type
      }
      assignee {
        id
        name
        email
        displayName
      }
      labels {
        nodes {
          id
          name
        }
      }
      createdAt
      updatedAt
      completedAt
    }
  }
}
`

	// Build filter to search by identifier number and team prefix
	// Linear identifiers look like "TEAM-123", we filter by number
	// and validate the full identifier in the results
|
variables := map[string]interface{}{
|
||||||
|
"filter": map[string]interface{}{
|
||||||
|
"team": map[string]interface{}{
|
||||||
|
"id": map[string]interface{}{
|
||||||
|
"eq": c.TeamID,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Extract the issue number from identifier (e.g., "123" from "TEAM-123")
|
||||||
|
parts := strings.Split(identifier, "-")
|
||||||
|
if len(parts) >= 2 {
|
||||||
|
if number, err := strconv.Atoi(parts[len(parts)-1]); err == nil {
|
||||||
|
// Add number filter for more precise matching
|
||||||
|
variables["filter"].(map[string]interface{})["number"] = map[string]interface{}{
|
||||||
|
"eq": number,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
req := &GraphQLRequest{
|
||||||
|
Query: query,
|
||||||
|
Variables: variables,
|
||||||
|
}
|
||||||
|
|
||||||
|
data, err := c.Execute(ctx, req)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to fetch issue by identifier: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
var issuesResp IssuesResponse
|
||||||
|
if err := json.Unmarshal(data, &issuesResp); err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to parse issues response: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Find the exact match by identifier (in case of partial matches)
|
||||||
|
for _, issue := range issuesResp.Issues.Nodes {
|
||||||
|
if issue.Identifier == identifier {
|
||||||
|
return &issue, nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil, nil // Issue not found
|
||||||
|
}
|
||||||
|
|
||||||
|
// BuildStateCache fetches and caches team states.
|
||||||
|
func BuildStateCache(ctx context.Context, client *Client) (*StateCache, error) {
|
||||||
|
states, err := client.GetTeamStates(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
cache := &StateCache{
|
||||||
|
States: states,
|
||||||
|
StatesByID: make(map[string]State),
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, s := range states {
|
||||||
|
cache.StatesByID[s.ID] = s
|
||||||
|
if cache.OpenStateID == "" && (s.Type == "unstarted" || s.Type == "backlog") {
|
||||||
|
cache.OpenStateID = s.ID
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return cache, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// FindStateForBeadsStatus returns the best Linear state ID for a Beads status.
|
||||||
|
func (sc *StateCache) FindStateForBeadsStatus(status types.Status) string {
|
||||||
|
targetType := StatusToLinearStateType(status)
|
||||||
|
|
||||||
|
for _, s := range sc.States {
|
||||||
|
if s.Type == targetType {
|
||||||
|
return s.ID
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(sc.States) > 0 {
|
||||||
|
return sc.States[0].ID
|
||||||
|
}
|
||||||
|
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
// ExtractLinearIdentifier extracts the Linear issue identifier (e.g., "TEAM-123") from a Linear URL.
|
||||||
|
func ExtractLinearIdentifier(url string) string {
|
||||||
|
// Linear URLs look like: https://linear.app/team/issue/TEAM-123/title
|
||||||
|
// We want to extract "TEAM-123"
|
||||||
|
parts := strings.Split(url, "/")
|
||||||
|
for i, part := range parts {
|
||||||
|
if part == "issue" && i+1 < len(parts) {
|
||||||
|
return parts[i+1]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
// CanonicalizeLinearExternalRef returns a stable Linear issue URL without the slug.
|
||||||
|
// Example: https://linear.app/team/issue/TEAM-123/title -> https://linear.app/team/issue/TEAM-123
|
||||||
|
// Returns ok=false if the URL isn't a recognizable Linear issue URL.
|
||||||
|
func CanonicalizeLinearExternalRef(externalRef string) (canonical string, ok bool) {
|
||||||
|
if externalRef == "" || !IsLinearExternalRef(externalRef) {
|
||||||
|
return "", false
|
||||||
|
}
|
||||||
|
|
||||||
|
parsed, err := url.Parse(externalRef)
|
||||||
|
if err != nil || parsed.Scheme == "" || parsed.Host == "" {
|
||||||
|
return "", false
|
||||||
|
}
|
||||||
|
|
||||||
|
segments := strings.Split(parsed.Path, "/")
|
||||||
|
for i, segment := range segments {
|
||||||
|
if segment == "issue" && i+1 < len(segments) && segments[i+1] != "" {
|
||||||
|
path := "/" + strings.Join(segments[1:i+2], "/")
|
||||||
|
return fmt.Sprintf("%s://%s%s", parsed.Scheme, parsed.Host, path), true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return "", false
|
||||||
|
}
|
||||||
|
|
||||||
|
// IsLinearExternalRef checks if an external_ref URL is a Linear issue URL.
|
||||||
|
func IsLinearExternalRef(externalRef string) bool {
|
||||||
|
return strings.Contains(externalRef, "linear.app/") && strings.Contains(externalRef, "/issue/")
|
||||||
|
}
|
||||||
|
|
||||||
|
// FetchTeams retrieves all teams accessible with the current API key.
|
||||||
|
// This is useful for discovering the team ID needed for configuration.
|
||||||
|
func (c *Client) FetchTeams(ctx context.Context) ([]Team, error) {
|
||||||
|
query := `
|
||||||
|
query {
|
||||||
|
teams {
|
||||||
|
nodes {
|
||||||
|
id
|
||||||
|
name
|
||||||
|
key
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
`
|
||||||
|
|
||||||
|
req := &GraphQLRequest{
|
||||||
|
Query: query,
|
||||||
|
}
|
||||||
|
|
||||||
|
data, err := c.Execute(ctx, req)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to fetch teams: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
var teamsResp TeamsResponse
|
||||||
|
if err := json.Unmarshal(data, &teamsResp); err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to parse teams response: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return teamsResp.Teams.Nodes, nil
|
||||||
|
}
|
||||||
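The URL helpers in the client are plain string manipulation, so their behavior is easy to check in isolation. The sketch below (a hypothetical standalone `main` package, not part of this PR) mirrors the segment-walking logic of `ExtractLinearIdentifier`:

```go
package main

import (
	"fmt"
	"strings"
)

// extractIdentifier mirrors ExtractLinearIdentifier: it walks the URL's
// path segments and returns the one immediately after "issue".
func extractIdentifier(url string) string {
	parts := strings.Split(url, "/")
	for i, part := range parts {
		if part == "issue" && i+1 < len(parts) {
			return parts[i+1]
		}
	}
	return ""
}

func main() {
	fmt.Println(extractIdentifier("https://linear.app/team/issue/TEAM-123/title")) // TEAM-123
	fmt.Println(extractIdentifier("https://example.com/no-match"))                 // (empty)
}
```

Because the function returns the segment after the first `"issue"` segment, a URL with no such segment yields the empty string rather than an error.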
internal/linear/client_test.go (new file, 41 lines)
package linear

import "testing"

func TestCanonicalizeLinearExternalRef(t *testing.T) {
	tests := []struct {
		name        string
		externalRef string
		want        string
		ok          bool
	}{
		{
			name:        "slugged url",
			externalRef: "https://linear.app/crown-dev/issue/BEA-93/updated-title-for-beads",
			want:        "https://linear.app/crown-dev/issue/BEA-93",
			ok:          true,
		},
		{
			name:        "canonical url",
			externalRef: "https://linear.app/crown-dev/issue/BEA-93",
			want:        "https://linear.app/crown-dev/issue/BEA-93",
			ok:          true,
		},
		{
			name:        "not linear",
			externalRef: "https://example.com/issues/BEA-93",
			want:        "",
			ok:          false,
		},
	}

	for _, tt := range tests {
		got, ok := CanonicalizeLinearExternalRef(tt.externalRef)
		if ok != tt.ok {
			t.Fatalf("%s: ok=%v, want %v", tt.name, ok, tt.ok)
		}
		if got != tt.want {
			t.Fatalf("%s: got %q, want %q", tt.name, got, tt.want)
		}
	}
}
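The table-driven test above exercises slug stripping end to end. The same canonicalization algorithm can be sketched outside the package (hypothetical standalone names, and without the `IsLinearExternalRef` guard the real function applies first):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// canonicalize mirrors CanonicalizeLinearExternalRef: keep scheme, host,
// and the path up to and including the segment after "issue"; drop the slug.
func canonicalize(ref string) (string, bool) {
	parsed, err := url.Parse(ref)
	if err != nil || parsed.Scheme == "" || parsed.Host == "" {
		return "", false
	}
	segments := strings.Split(parsed.Path, "/")
	for i, seg := range segments {
		if seg == "issue" && i+1 < len(segments) && segments[i+1] != "" {
			path := "/" + strings.Join(segments[1:i+2], "/")
			return fmt.Sprintf("%s://%s%s", parsed.Scheme, parsed.Host, path), true
		}
	}
	return "", false
}

func main() {
	got, ok := canonicalize("https://linear.app/crown-dev/issue/BEA-93/updated-title")
	fmt.Println(got, ok) // https://linear.app/crown-dev/issue/BEA-93 true
}
```

Already-canonical URLs pass through unchanged, which is why syncing the same issue twice compares equal instead of flagging a spurious conflict.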
internal/linear/mapping.go (new file, 546 lines)
package linear

import (
	"fmt"
	"strings"
	"time"

	"github.com/steveyegge/beads/internal/idgen"
	"github.com/steveyegge/beads/internal/types"
)

// IDGenerationOptions configures Linear hash ID generation.
type IDGenerationOptions struct {
	BaseLength int             // Starting hash length (3-8)
	MaxLength  int             // Maximum hash length (3-8)
	UsedIDs    map[string]bool // Pre-populated set to avoid collisions (e.g., DB IDs)
}

// BuildLinearDescription formats a Beads issue for Linear's description field.
// This mirrors the payload used during push to keep hash comparisons consistent.
func BuildLinearDescription(issue *types.Issue) string {
	description := issue.Description
	if issue.AcceptanceCriteria != "" {
		description += "\n\n## Acceptance Criteria\n" + issue.AcceptanceCriteria
	}
	if issue.Design != "" {
		description += "\n\n## Design\n" + issue.Design
	}
	if issue.Notes != "" {
		description += "\n\n## Notes\n" + issue.Notes
	}
	return description
}

// NormalizeIssueForLinearHash returns a copy of the issue using Linear's description
// formatting and clears fields not present in Linear's model to avoid false conflicts.
func NormalizeIssueForLinearHash(issue *types.Issue) *types.Issue {
	normalized := *issue
	normalized.Description = BuildLinearDescription(issue)
	normalized.AcceptanceCriteria = ""
	normalized.Design = ""
	normalized.Notes = ""
	if normalized.ExternalRef != nil && IsLinearExternalRef(*normalized.ExternalRef) {
		if canonical, ok := CanonicalizeLinearExternalRef(*normalized.ExternalRef); ok {
			normalized.ExternalRef = &canonical
		}
	}
	return &normalized
}

// GenerateIssueIDs generates unique hash-based IDs for issues that don't have one.
// Tracks used IDs to prevent collisions within the batch (and optionally against existing IDs).
// The creator parameter is used as part of the hash input (e.g., "linear-import").
func GenerateIssueIDs(issues []*types.Issue, prefix, creator string, opts IDGenerationOptions) error {
	usedIDs := opts.UsedIDs
	if usedIDs == nil {
		usedIDs = make(map[string]bool)
	}

	baseLength := opts.BaseLength
	if baseLength == 0 {
		baseLength = 6
	}
	maxLength := opts.MaxLength
	if maxLength == 0 {
		maxLength = 8
	}
	if baseLength < 3 {
		baseLength = 3
	}
	if maxLength > 8 {
		maxLength = 8
	}
	if baseLength > maxLength {
		baseLength = maxLength
	}

	// First pass: record existing IDs.
	for _, issue := range issues {
		if issue.ID != "" {
			usedIDs[issue.ID] = true
		}
	}

	// Second pass: generate IDs for issues without one.
	for _, issue := range issues {
		if issue.ID != "" {
			continue // Already has an ID
		}

		var generated bool
		for length := baseLength; length <= maxLength && !generated; length++ {
			for nonce := 0; nonce < 10; nonce++ {
				candidate := idgen.GenerateHashID(
					prefix,
					issue.Title,
					issue.Description,
					creator,
					issue.CreatedAt,
					length,
					nonce,
				)

				if !usedIDs[candidate] {
					issue.ID = candidate
					usedIDs[candidate] = true
					generated = true
					break
				}
			}
		}

		if !generated {
			return fmt.Errorf("failed to generate unique ID for issue '%s' after trying lengths %d-%d with 10 nonces each",
				issue.Title, baseLength, maxLength)
		}
	}

	return nil
}

// MappingConfig holds configurable mappings between Linear and Beads.
// All maps use lowercase keys for case-insensitive matching.
type MappingConfig struct {
	// PriorityMap maps Linear priority (0-4) to Beads priority (0-4).
	// Key is Linear priority as string, value is Beads priority.
	PriorityMap map[string]int

	// StateMap maps Linear state types/names to Beads statuses.
	// Key is lowercase state type or name, value is Beads status string.
	StateMap map[string]string

	// LabelTypeMap maps Linear label names to Beads issue types.
	// Key is lowercase label name, value is Beads issue type.
	LabelTypeMap map[string]string

	// RelationMap maps Linear relation types to Beads dependency types.
	// Key is Linear relation type, value is Beads dependency type.
	RelationMap map[string]string
}

// DefaultMappingConfig returns sensible default mappings.
func DefaultMappingConfig() *MappingConfig {
	return &MappingConfig{
		// Linear priority: 0=none, 1=urgent, 2=high, 3=medium, 4=low
		// Beads priority: 0=critical, 1=high, 2=medium, 3=low, 4=backlog
		PriorityMap: map[string]int{
			"0": 4, // No priority -> Backlog
			"1": 0, // Urgent -> Critical
			"2": 1, // High -> High
			"3": 2, // Medium -> Medium
			"4": 3, // Low -> Low
		},
		// Linear state types: backlog, unstarted, started, completed, canceled
		StateMap: map[string]string{
			"backlog":   "open",
			"unstarted": "open",
			"started":   "in_progress",
			"completed": "closed",
			"canceled":  "closed",
		},
		// Label patterns for issue type inference
		LabelTypeMap: map[string]string{
			"bug":         "bug",
			"defect":      "bug",
			"feature":     "feature",
			"enhancement": "feature",
			"epic":        "epic",
			"chore":       "chore",
			"maintenance": "chore",
			"task":        "task",
		},
		// Linear relation types to Beads dependency types
		RelationMap: map[string]string{
			"blocks":    "blocks",
			"blockedBy": "blocks", // Inverse: the related issue blocks this one
			"duplicate": "duplicates",
			"related":   "related",
		},
	}
}

// ConfigLoader is an interface for loading configuration values.
// This allows the mapping package to be decoupled from the storage layer.
type ConfigLoader interface {
	GetAllConfig() (map[string]string, error)
}

// LoadMappingConfig loads mapping configuration from a config loader.
// Config keys follow the pattern: linear.<category>_map.<key> = <value>
// Examples:
//
//	linear.priority_map.0 = 4 (Linear "no priority" -> Beads backlog)
//	linear.state_map.started = in_progress
//	linear.label_type_map.bug = bug
//	linear.relation_map.blocks = blocks
func LoadMappingConfig(loader ConfigLoader) *MappingConfig {
	config := DefaultMappingConfig()

	if loader == nil {
		return config
	}

	// Load all config keys and filter for linear mappings.
	allConfig, err := loader.GetAllConfig()
	if err != nil {
		return config
	}

	for key, value := range allConfig {
		// Parse priority mappings: linear.priority_map.<linear_priority>
		if strings.HasPrefix(key, "linear.priority_map.") {
			linearPriority := strings.TrimPrefix(key, "linear.priority_map.")
			if beadsPriority, err := parseIntValue(value); err == nil {
				config.PriorityMap[linearPriority] = beadsPriority
			}
		}

		// Parse state mappings: linear.state_map.<state_type_or_name>
		if strings.HasPrefix(key, "linear.state_map.") {
			stateKey := strings.ToLower(strings.TrimPrefix(key, "linear.state_map."))
			config.StateMap[stateKey] = value
		}

		// Parse label-to-type mappings: linear.label_type_map.<label_name>
		if strings.HasPrefix(key, "linear.label_type_map.") {
			labelKey := strings.ToLower(strings.TrimPrefix(key, "linear.label_type_map."))
			config.LabelTypeMap[labelKey] = value
		}

		// Parse relation mappings: linear.relation_map.<relation_type>
		if strings.HasPrefix(key, "linear.relation_map.") {
			relationType := strings.TrimPrefix(key, "linear.relation_map.")
			config.RelationMap[relationType] = value
		}
	}

	return config
}

// parseIntValue safely parses an integer from a string config value.
func parseIntValue(s string) (int, error) {
	var v int
	_, err := fmt.Sscanf(s, "%d", &v)
	return v, err
}

// PriorityToBeads maps Linear priority (0-4) to Beads priority (0-4).
// Linear: 0=no priority, 1=urgent, 2=high, 3=medium, 4=low
// Beads: 0=critical, 1=high, 2=medium, 3=low, 4=backlog
// Uses configurable mapping from linear.priority_map.* config.
func PriorityToBeads(linearPriority int, config *MappingConfig) int {
	key := fmt.Sprintf("%d", linearPriority)
	if beadsPriority, ok := config.PriorityMap[key]; ok {
		return beadsPriority
	}
	// Fallback if the priority is not configured.
	return 2 // Default to Medium
}

// PriorityToLinear maps Beads priority (0-4) to Linear priority (0-4).
// Uses configurable mapping by inverting linear.priority_map.* config.
func PriorityToLinear(beadsPriority int, config *MappingConfig) int {
	// Build the inverse map from config.
	inverseMap := make(map[int]int)
	for linearKey, beadsVal := range config.PriorityMap {
		var linearVal int
		if _, err := fmt.Sscanf(linearKey, "%d", &linearVal); err == nil {
			inverseMap[beadsVal] = linearVal
		}
	}

	if linearPriority, ok := inverseMap[beadsPriority]; ok {
		return linearPriority
	}
	// Fallback if no inverse mapping is found.
	return 3 // Default to Medium
}

// StateToBeadsStatus maps a Linear state to a Beads status.
// Checks both state type (backlog, unstarted, etc.) and state name for custom workflows.
// Uses configurable mapping from linear.state_map.* config.
func StateToBeadsStatus(state *State, config *MappingConfig) types.Status {
	if state == nil {
		return types.StatusOpen
	}

	// First, try to match by state type (preferred).
	stateType := strings.ToLower(state.Type)
	if statusStr, ok := config.StateMap[stateType]; ok {
		return ParseBeadsStatus(statusStr)
	}

	// Then try to match by state name (for custom workflow states).
	stateName := strings.ToLower(state.Name)
	if statusStr, ok := config.StateMap[stateName]; ok {
		return ParseBeadsStatus(statusStr)
	}

	// Default fallback.
	return types.StatusOpen
}

// ParseBeadsStatus converts a status string to types.Status.
func ParseBeadsStatus(s string) types.Status {
	switch strings.ToLower(s) {
	case "open":
		return types.StatusOpen
	case "in_progress", "in-progress", "inprogress":
		return types.StatusInProgress
	case "blocked":
		return types.StatusBlocked
	case "closed":
		return types.StatusClosed
	default:
		return types.StatusOpen
	}
}

// StatusToLinearStateType converts a Beads status to a Linear state type for filtering.
// This is used when pushing issues to Linear to find the appropriate state.
func StatusToLinearStateType(status types.Status) string {
	switch status {
	case types.StatusOpen:
		return "unstarted"
	case types.StatusInProgress:
		return "started"
	case types.StatusBlocked:
		return "started" // Linear doesn't have a blocked state type
	case types.StatusClosed:
		return "completed"
	default:
		return "unstarted"
	}
}

// LabelToIssueType infers issue type from label names.
// Uses configurable mapping from linear.label_type_map.* config.
func LabelToIssueType(labels *Labels, config *MappingConfig) types.IssueType {
	if labels == nil {
		return types.TypeTask
	}

	for _, label := range labels.Nodes {
		labelName := strings.ToLower(label.Name)

		// Check for an exact match first.
		if issueType, ok := config.LabelTypeMap[labelName]; ok {
			return ParseIssueType(issueType)
		}

		// Check whether the label contains any mapped keyword.
		for keyword, issueType := range config.LabelTypeMap {
			if strings.Contains(labelName, keyword) {
				return ParseIssueType(issueType)
			}
		}
	}

	return types.TypeTask // Default
}

// ParseIssueType converts an issue type string to types.IssueType.
func ParseIssueType(s string) types.IssueType {
	switch strings.ToLower(s) {
	case "bug":
		return types.TypeBug
	case "feature":
		return types.TypeFeature
	case "task":
		return types.TypeTask
	case "epic":
		return types.TypeEpic
	case "chore":
		return types.TypeChore
	default:
		return types.TypeTask
	}
}

// RelationToBeadsDep converts a Linear relation to a Beads dependency type.
// Uses configurable mapping from linear.relation_map.* config.
func RelationToBeadsDep(relationType string, config *MappingConfig) string {
	if depType, ok := config.RelationMap[relationType]; ok {
		return depType
	}
	return "related" // Default fallback
}

// IssueToBeads converts a Linear issue to a Beads issue.
func IssueToBeads(li *Issue, config *MappingConfig) *IssueConversion {
	createdAt, err := time.Parse(time.RFC3339, li.CreatedAt)
	if err != nil {
		createdAt = time.Now()
	}

	updatedAt, err := time.Parse(time.RFC3339, li.UpdatedAt)
	if err != nil {
		updatedAt = time.Now()
	}

	issue := &types.Issue{
		Title:       li.Title,
		Description: li.Description,
		Priority:    PriorityToBeads(li.Priority, config),
		IssueType:   LabelToIssueType(li.Labels, config),
		CreatedAt:   createdAt,
		UpdatedAt:   updatedAt,
	}

	// Map state using the configurable mapping.
	issue.Status = StateToBeadsStatus(li.State, config)

	if li.CompletedAt != "" {
		completedAt, err := time.Parse(time.RFC3339, li.CompletedAt)
		if err == nil {
			issue.ClosedAt = &completedAt
		}
	}

	if li.Assignee != nil {
		if li.Assignee.Email != "" {
			issue.Assignee = li.Assignee.Email
		} else {
			issue.Assignee = li.Assignee.Name
		}
	}

	// Copy labels (bidirectional sync preserves all labels).
	if li.Labels != nil {
		for _, label := range li.Labels.Nodes {
			issue.Labels = append(issue.Labels, label.Name)
		}
	}

	externalRef := li.URL
	if canonical, ok := CanonicalizeLinearExternalRef(externalRef); ok {
		externalRef = canonical
	}
	issue.ExternalRef = &externalRef

	// Collect dependencies to be created after all issues are imported.
	var deps []DependencyInfo

	// Map the parent-child relationship.
	if li.Parent != nil {
		deps = append(deps, DependencyInfo{
			FromLinearID: li.Identifier,
			ToLinearID:   li.Parent.Identifier,
			Type:         "parent-child",
		})
	}

	// Map relations to dependencies.
	if li.Relations != nil {
		for _, rel := range li.Relations.Nodes {
			depType := RelationToBeadsDep(rel.Type, config)

			// For "blockedBy", invert the direction since the related issue blocks this one.
			if rel.Type == "blockedBy" {
				deps = append(deps, DependencyInfo{
					FromLinearID: li.Identifier,
					ToLinearID:   rel.RelatedIssue.Identifier,
					Type:         depType,
				})
				continue
			}

			// For "blocks", the related issue is blocked by this one.
			if rel.Type == "blocks" {
				deps = append(deps, DependencyInfo{
					FromLinearID: rel.RelatedIssue.Identifier,
					ToLinearID:   li.Identifier,
					Type:         depType,
				})
				continue
			}

			// For "duplicate" and "related", treat this issue as the source.
			deps = append(deps, DependencyInfo{
				FromLinearID: li.Identifier,
				ToLinearID:   rel.RelatedIssue.Identifier,
				Type:         depType,
			})
		}
	}

	return &IssueConversion{
		Issue:        issue,
		Dependencies: deps,
	}
}

// BuildLinearToLocalUpdates creates an updates map from a Linear issue
// to apply to a local Beads issue. This is used when Linear wins a conflict.
func BuildLinearToLocalUpdates(li *Issue, config *MappingConfig) map[string]interface{} {
	updates := make(map[string]interface{})

	// Update title.
	updates["title"] = li.Title

	// Update description.
	updates["description"] = li.Description

	// Update priority using the configured mapping.
	updates["priority"] = PriorityToBeads(li.Priority, config)

	// Update status using the configured mapping.
	updates["status"] = string(StateToBeadsStatus(li.State, config))

	// Update assignee if present.
	if li.Assignee != nil {
		if li.Assignee.Email != "" {
			updates["assignee"] = li.Assignee.Email
		} else {
			updates["assignee"] = li.Assignee.Name
		}
	} else {
		updates["assignee"] = ""
	}

	// Update labels from Linear.
	if li.Labels != nil {
		var labels []string
		for _, label := range li.Labels.Nodes {
			labels = append(labels, label.Name)
		}
		updates["labels"] = labels
	}

	// Update timestamps.
	if li.UpdatedAt != "" {
		if updatedAt, err := time.Parse(time.RFC3339, li.UpdatedAt); err == nil {
			updates["updated_at"] = updatedAt
		}
	}

	// Handle the closed state.
	if li.CompletedAt != "" {
		if closedAt, err := time.Parse(time.RFC3339, li.CompletedAt); err == nil {
			updates["closed_at"] = closedAt
		}
	}

	return updates
}
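Because the default `PriorityMap` is a bijection, `PriorityToBeads` and `PriorityToLinear` round-trip cleanly for every priority. A self-contained sketch of that pair using the default table (a standalone reimplementation for illustration, not the package API):

```go
package main

import "fmt"

// Default Linear->Beads priority map from DefaultMappingConfig:
// Linear: 0=none, 1=urgent, 2=high, 3=medium, 4=low
// Beads:  0=critical, 1=high, 2=medium, 3=low, 4=backlog
var priorityMap = map[string]int{"0": 4, "1": 0, "2": 1, "3": 2, "4": 3}

// toBeads mirrors PriorityToBeads: look up the Linear priority, else Medium.
func toBeads(linear int) int {
	if p, ok := priorityMap[fmt.Sprintf("%d", linear)]; ok {
		return p
	}
	return 2 // Beads Medium, as in PriorityToBeads
}

// toLinear mirrors PriorityToLinear: invert the map, else Linear Medium.
func toLinear(beads int) int {
	inverse := make(map[int]int)
	for k, v := range priorityMap {
		var lv int
		if _, err := fmt.Sscanf(k, "%d", &lv); err == nil {
			inverse[v] = lv
		}
	}
	if p, ok := inverse[beads]; ok {
		return p
	}
	return 3 // Linear Medium, as in PriorityToLinear
}

func main() {
	for l := 0; l <= 4; l++ {
		fmt.Printf("linear %d -> beads %d -> linear %d\n", l, toBeads(l), toLinear(toBeads(l)))
	}
}
```

If a user overrides `linear.priority_map.*` so two Linear priorities map to the same Beads priority, the inversion keeps whichever entry iterates last, so the round trip is only guaranteed when the configured map stays one-to-one.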
internal/linear/mapping_test.go (new file, 144 lines)
package linear

import (
	"testing"
	"time"

	"github.com/steveyegge/beads/internal/types"
)

func TestGenerateIssueIDs(t *testing.T) {
	// Create test issues without IDs.
	issues := []*types.Issue{
		{
			Title:       "First issue",
			Description: "Description 1",
			CreatedAt:   time.Now(),
		},
		{
			Title:       "Second issue",
			Description: "Description 2",
			CreatedAt:   time.Now().Add(-time.Hour),
		},
		{
			Title:       "Third issue",
			Description: "Description 3",
			CreatedAt:   time.Now().Add(-2 * time.Hour),
		},
	}

	// Generate IDs.
	err := GenerateIssueIDs(issues, "test", "linear-import", IDGenerationOptions{})
	if err != nil {
		t.Fatalf("GenerateIssueIDs failed: %v", err)
	}

	// Verify all issues have IDs.
	for i, issue := range issues {
		if issue.ID == "" {
			t.Errorf("Issue %d has empty ID", i)
		}
		// Verify the prefix.
		if !hasPrefix(issue.ID, "test-") {
			t.Errorf("Issue %d ID '%s' doesn't have prefix 'test-'", i, issue.ID)
		}
	}

	// Verify all IDs are unique.
	seen := make(map[string]bool)
	for i, issue := range issues {
		if seen[issue.ID] {
			t.Errorf("Duplicate ID found: %s (issue %d)", issue.ID, i)
		}
		seen[issue.ID] = true
	}
}

func TestGenerateIssueIDsPreservesExisting(t *testing.T) {
	existingID := "test-existing"
	issues := []*types.Issue{
		{
			ID:          existingID,
			Title:       "Existing issue",
			Description: "Has an ID already",
			CreatedAt:   time.Now(),
		},
		{
			Title:       "New issue",
			Description: "Needs an ID",
			CreatedAt:   time.Now(),
		},
	}

	err := GenerateIssueIDs(issues, "test", "linear-import", IDGenerationOptions{})
	if err != nil {
		t.Fatalf("GenerateIssueIDs failed: %v", err)
	}

	// The first issue should keep its original ID.
	if issues[0].ID != existingID {
		t.Errorf("Existing ID was changed: got %s, want %s", issues[0].ID, existingID)
	}

	// The second issue should have a new ID.
	if issues[1].ID == "" {
		t.Error("Second issue has empty ID")
	}
	if issues[1].ID == existingID {
		t.Error("Second issue has same ID as first (collision)")
	}
}

func TestGenerateIssueIDsNoDuplicates(t *testing.T) {
	// Create issues with identical content - they should still get unique IDs.
	now := time.Now()
	issues := []*types.Issue{
		{
			Title:       "Same title",
			Description: "Same description",
			CreatedAt:   now,
		},
		{
			Title:       "Same title",
			Description: "Same description",
			CreatedAt:   now,
		},
	}

	err := GenerateIssueIDs(issues, "bd", "linear-import", IDGenerationOptions{})
	if err != nil {
		t.Fatalf("GenerateIssueIDs failed: %v", err)
	}

	// Both should have IDs.
	if issues[0].ID == "" || issues[1].ID == "" {
		t.Error("One or both issues have empty IDs")
	}

	// IDs should differ (the nonce handles collisions).
	if issues[0].ID == issues[1].ID {
		t.Errorf("Both issues have same ID: %s", issues[0].ID)
	}
}

func TestNormalizeIssueForLinearHashCanonicalizesExternalRef(t *testing.T) {
|
||||||
|
slugged := "https://linear.app/crown-dev/issue/BEA-93/updated-title-for-beads"
|
||||||
|
canonical := "https://linear.app/crown-dev/issue/BEA-93"
|
||||||
|
issue := &types.Issue{
|
||||||
|
Title: "Title",
|
||||||
|
Description: "Description",
|
||||||
|
ExternalRef: &slugged,
|
||||||
|
}
|
||||||
|
|
||||||
|
normalized := NormalizeIssueForLinearHash(issue)
|
||||||
|
if normalized.ExternalRef == nil {
|
||||||
|
t.Fatal("expected external_ref to be present")
|
||||||
|
}
|
||||||
|
if *normalized.ExternalRef != canonical {
|
||||||
|
t.Fatalf("expected canonical external_ref %q, got %q", canonical, *normalized.ExternalRef)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func hasPrefix(s, prefix string) bool {
|
||||||
|
return len(s) >= len(prefix) && s[:len(prefix)] == prefix
|
||||||
|
}
|
||||||
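The canonicalization that TestNormalizeIssueForLinearHashCanonicalizesExternalRef exercises — dropping the trailing title slug from a Linear issue URL — can be sketched as follows. Note that `canonicalizeLinearURL` and its regex are hypothetical stand-ins for whatever `NormalizeIssueForLinearHash` does internally; only the input/output pair comes from the test above.

```go
package main

import (
	"fmt"
	"regexp"
)

// linearIssueURL captures ".../issue/<IDENTIFIER>" and ignores any slug after it.
// Hypothetical pattern, not the package's actual implementation.
var linearIssueURL = regexp.MustCompile(`^(https://linear\.app/[^/]+/issue/[A-Za-z0-9]+-\d+)(/.*)?$`)

// canonicalizeLinearURL strips the optional title slug from a Linear issue URL.
func canonicalizeLinearURL(u string) string {
	if m := linearIssueURL.FindStringSubmatch(u); m != nil {
		return m[1]
	}
	return u // not a recognized issue URL; leave untouched
}

func main() {
	fmt.Println(canonicalizeLinearURL("https://linear.app/crown-dev/issue/BEA-93/updated-title-for-beads"))
	// → https://linear.app/crown-dev/issue/BEA-93
}
```

Canonicalizing before hashing keeps the content hash stable when only the issue title (and hence the URL slug) changes on the Linear side.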
internal/linear/types.go (new file, 252 lines)
// Package linear provides client and data types for the Linear GraphQL API.
//
// This package handles all interactions with Linear's issue tracking system,
// including fetching, creating, and updating issues. It provides bidirectional
// mapping between Linear's data model and Beads' internal types.
package linear

import (
	"net/http"
	"time"
)

// API configuration constants.
const (
	// DefaultAPIEndpoint is the Linear GraphQL API endpoint.
	DefaultAPIEndpoint = "https://api.linear.app/graphql"

	// DefaultTimeout is the default HTTP request timeout.
	DefaultTimeout = 30 * time.Second

	// MaxRetries is the maximum number of retries for rate-limited requests.
	MaxRetries = 3

	// RetryDelay is the base delay between retries (exponential backoff).
	RetryDelay = time.Second

	// MaxPageSize is the maximum number of issues to fetch per page.
	MaxPageSize = 100
)

// Client provides methods to interact with the Linear GraphQL API.
type Client struct {
	APIKey     string
	TeamID     string
	Endpoint   string // GraphQL endpoint URL (defaults to DefaultAPIEndpoint)
	HTTPClient *http.Client
}

// GraphQLRequest represents a GraphQL request payload.
type GraphQLRequest struct {
	Query     string                 `json:"query"`
	Variables map[string]interface{} `json:"variables,omitempty"`
}

// GraphQLResponse represents a generic GraphQL response.
type GraphQLResponse struct {
	Data   []byte         `json:"data"`
	Errors []GraphQLError `json:"errors,omitempty"`
}

// GraphQLError represents a GraphQL error.
type GraphQLError struct {
	Message    string   `json:"message"`
	Path       []string `json:"path,omitempty"`
	Extensions struct {
		Code string `json:"code,omitempty"`
	} `json:"extensions,omitempty"`
}
// Issue represents an issue from the Linear API.
type Issue struct {
	ID          string     `json:"id"`
	Identifier  string     `json:"identifier"` // e.g., "TEAM-123"
	Title       string     `json:"title"`
	Description string     `json:"description"`
	URL         string     `json:"url"`
	Priority    int        `json:"priority"` // 0=no priority, 1=urgent, 2=high, 3=medium, 4=low
	State       *State     `json:"state"`
	Assignee    *User      `json:"assignee"`
	Labels      *Labels    `json:"labels"`
	Parent      *Parent    `json:"parent,omitempty"`
	Relations   *Relations `json:"relations,omitempty"`
	CreatedAt   string     `json:"createdAt"`
	UpdatedAt   string     `json:"updatedAt"`
	CompletedAt string     `json:"completedAt,omitempty"`
}
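The Priority comment above fixes the Linear-side scale (0=no priority, 1=urgent, 2=high, 3=medium, 4=low). A default mapping to Beads priorities might look like the sketch below; the Beads-side values (0=highest) and the fallback for "no priority" are assumptions, since per the PR description the real mapping is configurable via linear.priority_map.*.

```go
package main

import "fmt"

// defaultPriorityToBeads maps a Linear priority to an assumed Beads priority
// where 0 is highest. Illustrative only; the shipped mapping is configurable.
func defaultPriorityToBeads(p int) int {
	switch p {
	case 1: // urgent
		return 0
	case 2: // high
		return 1
	case 3: // medium
		return 2
	case 4: // low
		return 3
	default: // 0 = no priority; assume a medium default
		return 2
	}
}

func main() {
	for p := 0; p <= 4; p++ {
		fmt.Printf("linear=%d beads=%d\n", p, defaultPriorityToBeads(p))
	}
}
```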
// State represents a workflow state in Linear.
type State struct {
	ID   string `json:"id"`
	Name string `json:"name"`
	Type string `json:"type"` // "backlog", "unstarted", "started", "completed", "canceled"
}

// User represents a user in Linear.
type User struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Email       string `json:"email"`
	DisplayName string `json:"displayName"`
}

// Labels represents paginated labels on an issue.
type Labels struct {
	Nodes []Label `json:"nodes"`
}

// Label represents a label in Linear.
type Label struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// Parent represents a parent issue reference.
type Parent struct {
	ID         string `json:"id"`
	Identifier string `json:"identifier"`
}

// Relation represents a relation between issues in Linear.
type Relation struct {
	ID           string `json:"id"`
	Type         string `json:"type"` // "blocks", "blockedBy", "duplicate", "related"
	RelatedIssue struct {
		ID         string `json:"id"`
		Identifier string `json:"identifier"`
	} `json:"relatedIssue"`
}
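A default translation from the Relation types above to the Beads dependency types named on DependencyInfo (blocks, related, duplicates) could be sketched as below. This default is an assumption — the PR makes the relation map configurable — and the choice to skip "blockedBy" (the inverse edge of "blocks") is an illustrative way to avoid importing each relation twice.

```go
package main

import "fmt"

// defaultRelationToBeads returns the assumed Beads dependency type for a
// Linear relation type, and false when the relation should be skipped.
func defaultRelationToBeads(t string) (string, bool) {
	switch t {
	case "blocks":
		return "blocks", true
	case "duplicate":
		return "duplicates", true
	case "related":
		return "related", true
	default:
		// "blockedBy" is the inverse of "blocks"; importing both
		// directions would create duplicate dependencies.
		return "", false
	}
}

func main() {
	for _, t := range []string{"blocks", "blockedBy", "duplicate", "related"} {
		dep, ok := defaultRelationToBeads(t)
		fmt.Println(t, dep, ok)
	}
}
```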
// Relations wraps the nodes array for relations.
type Relations struct {
	Nodes []Relation `json:"nodes"`
}

// TeamStates represents workflow states for a team.
type TeamStates struct {
	ID     string         `json:"id"`
	States *StatesWrapper `json:"states"`
}

// StatesWrapper wraps the nodes array for states.
type StatesWrapper struct {
	Nodes []State `json:"nodes"`
}

// IssuesResponse represents the response from issues query.
type IssuesResponse struct {
	Issues struct {
		Nodes    []Issue `json:"nodes"`
		PageInfo struct {
			HasNextPage bool   `json:"hasNextPage"`
			EndCursor   string `json:"endCursor"`
		} `json:"pageInfo"`
	} `json:"issues"`
}

// IssueCreateResponse represents the response from issueCreate mutation.
type IssueCreateResponse struct {
	IssueCreate struct {
		Success bool  `json:"success"`
		Issue   Issue `json:"issue"`
	} `json:"issueCreate"`
}

// IssueUpdateResponse represents the response from issueUpdate mutation.
type IssueUpdateResponse struct {
	IssueUpdate struct {
		Success bool  `json:"success"`
		Issue   Issue `json:"issue"`
	} `json:"issueUpdate"`
}

// TeamResponse represents the response from team query.
type TeamResponse struct {
	Team TeamStates `json:"team"`
}

// SyncStats tracks statistics for a Linear sync operation.
type SyncStats struct {
	Pulled    int `json:"pulled"`
	Pushed    int `json:"pushed"`
	Created   int `json:"created"`
	Updated   int `json:"updated"`
	Skipped   int `json:"skipped"`
	Errors    int `json:"errors"`
	Conflicts int `json:"conflicts"`
}

// SyncResult represents the result of a Linear sync operation.
type SyncResult struct {
	Success  bool      `json:"success"`
	Stats    SyncStats `json:"stats"`
	LastSync string    `json:"last_sync,omitempty"`
	Error    string    `json:"error,omitempty"`
	Warnings []string  `json:"warnings,omitempty"`
}

// PullStats tracks pull operation statistics.
type PullStats struct {
	Created     int
	Updated     int
	Skipped     int
	Incremental bool   // Whether this was an incremental sync
	SyncedSince string // Timestamp we synced since (if incremental)
}

// PushStats tracks push operation statistics.
type PushStats struct {
	Created int
	Updated int
	Skipped int
	Errors  int
}

// Conflict represents a conflict between local and Linear versions.
// A conflict occurs when both the local and Linear versions have been modified
// since the last sync.
type Conflict struct {
	IssueID           string    // Beads issue ID
	LocalUpdated      time.Time // When the local version was last modified
	LinearUpdated     time.Time // When the Linear version was last modified
	LinearExternalRef string    // URL to the Linear issue
	LinearIdentifier  string    // Linear issue identifier (e.g., "TEAM-123")
	LinearInternalID  string    // Linear's internal UUID (for API updates)
}

// IssueConversion holds the result of converting a Linear issue to Beads.
// It includes the issue and any dependencies that should be created.
type IssueConversion struct {
	Issue        interface{} // *types.Issue - avoiding circular import
	Dependencies []DependencyInfo
}

// DependencyInfo represents a dependency to be created after issue import.
// Stored separately since we need all issues imported before linking dependencies.
type DependencyInfo struct {
	FromLinearID string // Linear identifier of the dependent issue (e.g., "TEAM-123")
	ToLinearID   string // Linear identifier of the dependency target
	Type         string // Beads dependency type (blocks, related, duplicates, parent-child)
}

// StateCache caches workflow states for the team to avoid repeated API calls.
type StateCache struct {
	States      []State
	StatesByID  map[string]State
	OpenStateID string // First "unstarted" or "backlog" state
}
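The OpenStateID comment says it holds the first "unstarted" or "backlog" state. One way that selection could work is sketched below; `pickOpenStateID` is a hypothetical helper (the real selection logic is not shown in this file), and preferring "unstarted" over "backlog" is an assumption.

```go
package main

import "fmt"

// State mirrors the Linear workflow state type defined above.
type State struct {
	ID   string
	Name string
	Type string // "backlog", "unstarted", "started", "completed", "canceled"
}

// pickOpenStateID returns the first "unstarted" state, falling back to the
// first "backlog" state, or "" when neither exists.
func pickOpenStateID(states []State) string {
	for _, typ := range []string{"unstarted", "backlog"} {
		for _, s := range states {
			if s.Type == typ {
				return s.ID
			}
		}
	}
	return ""
}

func main() {
	states := []State{
		{ID: "s1", Name: "Backlog", Type: "backlog"},
		{ID: "s2", Name: "Todo", Type: "unstarted"},
		{ID: "s3", Name: "In Progress", Type: "started"},
	}
	fmt.Println(pickOpenStateID(states))
	// → s2
}
```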
// Team represents a team in Linear.
type Team struct {
	ID   string `json:"id"`   // UUID
	Name string `json:"name"` // Display name
	Key  string `json:"key"`  // Short key used in issue identifiers (e.g., "ENG")
}

// TeamsResponse represents the response from teams query.
type TeamsResponse struct {
	Teams struct {
		Nodes []Team `json:"nodes"`
	} `json:"teams"`
}
@@ -2,57 +2,15 @@ package sqlite
 
 import (
 	"context"
-	"crypto/sha256"
 	"database/sql"
 	"fmt"
-	"math/big"
 	"strings"
 	"time"
 
+	"github.com/steveyegge/beads/internal/idgen"
 	"github.com/steveyegge/beads/internal/types"
 )
 
-// base36Alphabet is the character set for base36 encoding (0-9, a-z)
-const base36Alphabet = "0123456789abcdefghijklmnopqrstuvwxyz"
-
-// encodeBase36 converts a byte slice to a base36 string of specified length
-// Takes the first N bytes and converts them to base36 representation
-func encodeBase36(data []byte, length int) string {
-	// Convert bytes to big integer
-	num := new(big.Int).SetBytes(data)
-
-	// Convert to base36
-	var result strings.Builder
-	base := big.NewInt(36)
-	zero := big.NewInt(0)
-	mod := new(big.Int)
-
-	// Build the string in reverse
-	chars := make([]byte, 0, length)
-	for num.Cmp(zero) > 0 {
-		num.DivMod(num, base, mod)
-		chars = append(chars, base36Alphabet[mod.Int64()])
-	}
-
-	// Reverse the string
-	for i := len(chars) - 1; i >= 0; i-- {
-		result.WriteByte(chars[i])
-	}
-
-	// Pad with zeros if needed
-	str := result.String()
-	if len(str) < length {
-		str = strings.Repeat("0", length-len(str)) + str
-	}
-
-	// Truncate to exact length if needed (keep least significant digits)
-	if len(str) > length {
-		str = str[len(str)-length:]
-	}
-
-	return str
-}
-
 // isValidBase36 checks if a string contains only base36 characters
 func isValidBase36(s string) bool {
 	for _, c := range s {
@@ -124,31 +82,31 @@ func GenerateIssueID(ctx context.Context, conn *sql.Conn, prefix string, issue *
 		// Fallback to 6 on error
 		baseLength = 6
 	}
 
 	// Try baseLength, baseLength+1, baseLength+2, up to max of 8
 	maxLength := 8
 	if baseLength > maxLength {
 		baseLength = maxLength
 	}
 
 	for length := baseLength; length <= maxLength; length++ {
 		// Try up to 10 nonces at each length
 		for nonce := 0; nonce < 10; nonce++ {
 			candidate := generateHashID(prefix, issue.Title, issue.Description, actor, issue.CreatedAt, length, nonce)
 
 			// Check if this ID already exists
 			var count int
 			err = conn.QueryRowContext(ctx, `SELECT COUNT(*) FROM issues WHERE id = ?`, candidate).Scan(&count)
 			if err != nil {
 				return "", fmt.Errorf("failed to check for ID collision: %w", err)
 			}
 
 			if count == 0 {
 				return candidate, nil
 			}
 		}
 	}
 
 	return "", fmt.Errorf("failed to generate unique ID after trying lengths %d-%d with 10 nonces each", baseLength, maxLength)
 }
 
@@ -161,13 +119,13 @@ func GenerateBatchIssueIDs(ctx context.Context, conn *sql.Conn, prefix string, i
 		// Fallback to 6 on error
 		baseLength = 6
 	}
 
 	// Try baseLength, baseLength+1, baseLength+2, up to max of 8
 	maxLength := 8
 	if baseLength > maxLength {
 		baseLength = maxLength
 	}
 
 	for i := range issues {
 		if issues[i].ID == "" {
 			var generated bool
@@ -175,18 +133,18 @@ func GenerateBatchIssueIDs(ctx context.Context, conn *sql.Conn, prefix string, i
 			for length := baseLength; length <= maxLength && !generated; length++ {
 				for nonce := 0; nonce < 10; nonce++ {
 					candidate := generateHashID(prefix, issues[i].Title, issues[i].Description, actor, issues[i].CreatedAt, length, nonce)
 
 					// Check if this ID is already used in this batch or in the database
 					if usedIDs[candidate] {
 						continue
 					}
 
 					var count int
 					err := conn.QueryRowContext(ctx, `SELECT COUNT(*) FROM issues WHERE id = ?`, candidate).Scan(&count)
 					if err != nil {
 						return fmt.Errorf("failed to check for ID collision: %w", err)
 					}
 
 					if count == 0 {
 						issues[i].ID = candidate
 						usedIDs[candidate] = true
@@ -195,7 +153,7 @@ func GenerateBatchIssueIDs(ctx context.Context, conn *sql.Conn, prefix string, i
 					}
 				}
 			}
 
 			if !generated {
 				return fmt.Errorf("failed to generate unique ID for issue %d after trying lengths %d-%d with 10 nonces each", i, baseLength, maxLength)
 			}
@@ -248,7 +206,7 @@ func EnsureIDs(ctx context.Context, conn *sql.Conn, prefix string, issues []*typ
 				return wrapDBErrorf(err, "validate ID prefix for %s", issues[i].ID)
 			}
 		}
 
 		// For hierarchical IDs (bd-a3f8e9.1), ensure parent exists
 		// Use IsHierarchicalID to correctly handle prefixes with dots (GH#508)
 		if isHierarchical, parentID := IsHierarchicalID(issues[i].ID); isHierarchical {
@@ -278,11 +236,11 @@ func EnsureIDs(ctx context.Context, conn *sql.Conn, prefix string, issues []*typ
 				}
 			}
 		}
 
 			usedIDs[issues[i].ID] = true
 		}
 	}
 
 	// Second pass: generate IDs for issues that need them
 	return GenerateBatchIssueIDs(ctx, conn, prefix, issues, actor, usedIDs)
 }
@@ -293,34 +251,5 @@ func EnsureIDs(ctx context.Context, conn *sql.Conn, prefix string, issues []*typ
 // Includes a nonce parameter to handle same-length collisions.
 // Uses base36 encoding (0-9, a-z) for better information density than hex.
 func generateHashID(prefix, title, description, creator string, timestamp time.Time, length, nonce int) string {
-	// Combine inputs into a stable content string
-	// Include nonce to handle hash collisions
-	content := fmt.Sprintf("%s|%s|%s|%d|%d", title, description, creator, timestamp.UnixNano(), nonce)
-
-	// Hash the content
-	hash := sha256.Sum256([]byte(content))
-
-	// Use base36 encoding with variable length (3-8 chars)
-	// Determine how many bytes to use based on desired output length
-	var numBytes int
-	switch length {
-	case 3:
-		numBytes = 2 // 2 bytes = 16 bits ≈ 3.09 base36 chars
-	case 4:
-		numBytes = 3 // 3 bytes = 24 bits ≈ 4.63 base36 chars
-	case 5:
-		numBytes = 4 // 4 bytes = 32 bits ≈ 6.18 base36 chars
-	case 6:
-		numBytes = 4 // 4 bytes = 32 bits ≈ 6.18 base36 chars
-	case 7:
-		numBytes = 5 // 5 bytes = 40 bits ≈ 7.73 base36 chars
-	case 8:
-		numBytes = 5 // 5 bytes = 40 bits ≈ 7.73 base36 chars
-	default:
-		numBytes = 3 // default to 3 chars
-	}
-
-	shortHash := encodeBase36(hash[:numBytes], length)
-
-	return fmt.Sprintf("%s-%s", prefix, shortHash)
+	return idgen.GenerateHashID(prefix, title, description, creator, timestamp, length, nonce)
 }
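The diff above replaces the local hash-ID implementation with a call to idgen.GenerateHashID, which presumably centralizes the same scheme. For reference, the removed code can be run standalone as below; this sketch reproduces the deleted encodeBase36 and the old content-string recipe, and the inputs in main are arbitrary examples.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
	"strings"
)

const base36Alphabet = "0123456789abcdefghijklmnopqrstuvwxyz"

// encodeBase36 reproduces the helper removed in the diff: interpret the bytes
// as a big-endian integer, render it in base36, then zero-pad or truncate
// (keeping the least significant digits) to the requested length.
func encodeBase36(data []byte, length int) string {
	num := new(big.Int).SetBytes(data)
	base := big.NewInt(36)
	mod := new(big.Int)
	chars := make([]byte, 0, length)
	for num.Sign() > 0 {
		num.DivMod(num, base, mod)
		chars = append(chars, base36Alphabet[mod.Int64()])
	}
	// Digits were produced least-significant first; reverse them.
	for i, j := 0, len(chars)-1; i < j; i, j = i+1, j-1 {
		chars[i], chars[j] = chars[j], chars[i]
	}
	str := string(chars)
	if len(str) < length {
		str = strings.Repeat("0", length-len(str)) + str
	}
	if len(str) > length {
		str = str[len(str)-length:]
	}
	return str
}

func main() {
	// The old generateHashID recipe: hash "title|description|creator|nanos|nonce",
	// then base36-encode a prefix of the digest.
	content := fmt.Sprintf("%s|%s|%s|%d|%d", "First issue", "Description 1", "linear-import", int64(0), 0)
	hash := sha256.Sum256([]byte(content))
	fmt.Println("bd-" + encodeBase36(hash[:4], 6))
}
```

Because the nonce is part of the hashed content, two issues with identical title, description, creator, and timestamp still diverge once the collision loop bumps the nonce — which is exactly what TestGenerateIssueIDsNoDuplicates relies on.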