Compare commits


52 Commits

Author SHA1 Message Date
f99f4069f0 feat(ci): Add Gitea Actions workflow with Nix caching
Checks failed: CI / check (push), failing after 1s
Uses the johno/gitea-actions/nix-setup composite action for:
- Nix installation via DeterminateSystems/nix-installer-action
- Nix store caching via actions/cache@v4
- Per-repo cache isolation based on flake.lock hash
2026-01-13 15:50:54 -08:00
320a2d3738 refactor: Move import_gitea_issues to user-level skill
Moves from project-level (.claude/commands/) to user-level
(home/roles/development/skills/) so it's available across all projects
via Home Manager activation.

Bead: nixos-configs-g72
2026-01-13 15:37:59 -08:00
92b6cfb710 fix(common): Add ghostty terminfo for SSH compatibility
Installs ghostty.terminfo on all NixOS machines so tmux works
when SSH'ing from a Ghostty terminal.
2026-01-13 14:09:57 -08:00
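
A minimal sketch of what this change amounts to in a NixOS module (placement in the common role is assumed): Ghostty's terminfo entry is installed system-wide so remote hosts recognize the terminal when tmux starts over SSH.

```nix
{ pkgs, ... }:
{
  # Ghostty sets TERM=xterm-ghostty; shipping its terminfo on every
  # machine lets tmux and other curses apps look it up over SSH.
  environment.systemPackages = [ pkgs.ghostty.terminfo ];
}
```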
996fb86ed8 [nixos-configs-vru] Add skill for responding to Gitea PR review comments (#26)
## Summary
- Rewrote gitea_pr_review.md as a comprehensive interactive skill
- Accepts PR number as argument or auto-detects from current branch
- Reads Gitea config from tea CLI config file
- Fetches and displays review comments via REST API
- Interactive comment selection via AskUserQuestion
- Posts replies via `tea comment` with file:line context

## Bead Reference
Implements bead: nixos-configs-vru

## Changes
- Rewritten `home/roles/development/skills/gitea_pr_review.md` (+259/-155 lines)

## Testing
Please leave a review comment on this PR so we can test the skill!

## Limitations
- Thread replies are posted as top-level comments (Gitea API limitation)
- Uses first login from tea config

Reviewed-on: #26
Co-authored-by: John Ogle <john@ogle.fyi>
Co-committed-by: John Ogle <john@ogle.fyi>
2026-01-13 09:08:17 -08:00
47e2392a56 feat(john-endesktop): Enable virtualisation role 2026-01-12 21:57:46 -08:00
c26a11a9a8 fix(boxy): Update appLauncherServer to use module option syntax 2026-01-12 21:42:29 -08:00
7ba1b52ec7 feat(kodi): Add base-linux role to home-kodi imports 2026-01-12 21:42:25 -08:00
056c1a1e62 feat(remote-build): Add john-endesktop as builder machine
- Enable enableBuilder role on john-endesktop
- Add john-endesktop to nix-book's builder list (maxJobs=1, speedFactor=1)
- Document SSH setup process for new clients in remote-build role

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 21:27:01 -08:00
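
For context, a hypothetical sketch of the builder entry this adds on nix-book; the host name and maxJobs/speedFactor values come from the commit message, while the SSH user, key path, and supported features are assumptions.

```nix
{
  nix.distributedBuilds = true;
  nix.buildMachines = [
    {
      hostName = "john-endesktop";
      system = "x86_64-linux";
      sshUser = "johno";                 # assumption
      sshKey = "/root/.ssh/id_builder";  # assumption
      maxJobs = 1;
      speedFactor = 1;
      supportedFeatures = [ "big-parallel" "kvm" ];  # assumption
    }
  ];
}
```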
c92a82b21a fix(audio): Remove conflicting pulseaudio config
The audio role had both pipewire (with pulse.enable = true) and
pulseaudio configured, which are mutually exclusive. PipeWire's
PulseAudio compatibility layer handles pulse clients, so the
services.pulseaudio block is unnecessary and causes conflicts.

Closes: nixos-configs-0vf

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 20:48:41 -08:00
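
Roughly what the corrected audio role looks like after this fix (a sketch, not the actual file): PipeWire stays on with its PulseAudio compatibility layer, and the standalone PulseAudio service is left disabled.

```nix
{
  # PipeWire's pulse.enable provides the PulseAudio socket, so the
  # separate services.pulseaudio block is removed/disabled.
  services.pulseaudio.enable = false;
  services.pipewire = {
    enable = true;
    alsa.enable = true;
    pulse.enable = true;
  };
}
```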
b6e9de0f61 feat(skills): Add plan-awareness to bead workflow skills
- parallel_beads: Filter beads by plan readiness before selection
  - Include beads with plans or type=bug
  - Warn about skipped beads that need plans first
- beads_implement: Check for plan based on bead type
  - Bugs can proceed without plans
  - Features/tasks warn and ask user preference
- beads_workflow: Document design decisions
  - Artifacts vs statuses for phase tracking
  - One bead per feature as default
  - Discovered-work pattern for splitting work

Closes: nixos-configs-45r, nixos-configs-8gr, nixos-configs-oog, nixos-configs-505

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 18:39:51 -08:00
ba4922981b feat(skills): Add validation status to parallel_beads PR descriptions
Enhances the parallel_beads workflow to capture and report validation status:

- Add step 3 to extract validation criteria from plans or use best-effort
  fallbacks (make test, nix flake check, npm test)
- Update step 4 to run validation and track PASS/FAIL/SKIP results
- Add Validation section with table to PR body templates (gh and tea)
- Enhance result reporting to include validation summary
- Add Validation column and Validation Failures section to summary table

Implements bead: nixos-configs-39m

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 18:29:25 -08:00
47aaad2eb5 docs: Add beads issue tracking documentation to AGENTS.md 2026-01-12 18:02:30 -08:00
8eca8204ff fix(home): Move plasma-manager-kodi to Linux-specific roles 2026-01-12 18:02:26 -08:00
082b0918af feat(skills): Add beads-aware workflow skills
New skills for integrated beads + humanlayer workflow:
- beads_research.md: Research with per-bead artifact storage
- beads_plan.md: Planning with bead linking
- beads_implement.md: Implementation with per-plan checkpoints
- beads_iterate.md: Plan iteration with version history
- beads_workflow.md: Comprehensive workflow documentation

Skills output to thoughts/beads-{id}/ for artifact storage
and automatically update bead notes with artifact links.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 17:58:41 -08:00
7a5f167a8c Remove perles 2026-01-11 16:30:35 -08:00
9e1003d4fc Add kodi role to zix790prors 2026-01-11 16:28:54 -08:00
bf600987e9 feat(john-endesktop): Add k3s node labels for workload scheduling
Add fast-cpu and fast-storage labels since this node has a faster CPU
than other cluster nodes and is the NFS host with fast local storage.
Also add k3s-upgrade=disabled to exclude it from the system-upgrade-controller.
2026-01-10 20:14:54 -08:00
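
One plausible way to attach those labels through the k3s agent flags (see the role added in the next commit); only the label names come from the commit message, while the values and the exact mechanism used in this repo are assumptions.

```nix
{
  # Labels advertise this node's fast CPU and local storage, and opt it
  # out of system-upgrade-controller managed upgrades.
  services.k3s.extraFlags = toString [
    "--node-label=fast-cpu=true"        # value assumed
    "--node-label=fast-storage=true"    # value assumed
    "--node-label=k3s-upgrade=disabled"
  ];
}
```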
346ad3665d feat(k3s-node): Add k3s-node role and enable on john-endesktop
Add reusable k3s-node role with configurable options for server/agent
modes. Configure john-endesktop as a k3s agent joining the cluster at
10.0.0.222.

Role supports:
- Server or agent role selection
- Configurable server address and token file
- Graceful node shutdown
- Optional firewall port opening
- Cluster initialization for first server

Note: NixOS nodes must be labeled with `k3s-upgrade=disabled` to exclude
them from the system-upgrade-controller, since NixOS manages k3s upgrades
through Nix rather than in-place binary replacement.
2026-01-10 20:08:57 -08:00
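
Underneath the role, the agent configuration for john-endesktop presumably boils down to the standard NixOS k3s options, roughly as below; the token path is a placeholder, and only the server address comes from the commit message.

```nix
{
  services.k3s = {
    enable = true;
    role = "agent";                        # "server" for control-plane nodes
    serverAddr = "https://10.0.0.222:6443";
    tokenFile = "/run/keys/k3s-token";     # placeholder path
  };
}
```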
565acb1632 Add kubectl to home-server 2026-01-10 19:16:29 -08:00
b05c6d8c30 fix(nix-book): Remove suspend-then-hibernate lid behavior 2026-01-10 19:05:05 -08:00
0f555fdd57 feat(emacs): Add beads package configuration with keybindings 2026-01-10 19:02:09 -08:00
9973273b5e Extend nvidia role to include driver configuration
The nvidia role now handles full driver configuration instead of just
packages. Added options for open driver, modesetting, power management,
graphics settings, and driver package selection.

Updated zix790prors and wixos machine configs to use the new role
options, removing duplicated hardware.nvidia configuration blocks.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 14:39:41 -08:00
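
Illustrative values for the kinds of options the extended role now wraps; the underlying `hardware.nvidia` options are standard NixOS, but the specific defaults chosen in this repo are not shown here.

```nix
{ config, ... }:
{
  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.nvidia = {
    open = false;                   # open kernel module toggle
    modesetting.enable = true;
    powerManagement.enable = false;
    package = config.boot.kernelPackages.nvidiaPackages.stable;
  };
}
```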
f281384b69 feat(skills): Add import_gitea_issues skill for bead creation
Add a Claude Code skill that imports open Gitea issues as beads:
- Uses 'tea issues' to list open issues
- Checks existing beads to avoid duplicates
- Detects issue type (bug/feature/task) from content
- Creates beads with P2 priority and Gitea issue URL in notes
- Reports summary of imported vs skipped issues

Implements bead: nixos-configs-tdf
2026-01-10 13:24:20 -08:00
4eec701729 feat(skills): Add gitea_pr_review skill for managing PR review comments
Adds a new Claude Code skill that enables reading PR review comments and
posting replies on Gitea/Forgejo instances. Documents both the REST API
approach for reading reviews and the web endpoint approach for thread
replies, with fallback to top-level comments when thread replies aren't
possible due to authentication limitations.

Implements bead: nixos-configs-vru
2026-01-10 13:22:36 -08:00
bbcb13881f refactor(flake): Consolidate overlay configurations into shared functions
Extract duplicated overlay and home-manager configuration code into two
reusable factory functions:

- mkBaseOverlay: Creates the base overlay with unstable pkgs, custom
  packages, and bitwarden-desktop compatibility. Accepts optional
  unstableOverlays parameter for darwin-specific customizations.

- mkHomeManagerConfig: Creates home-manager configuration with shared
  settings (useGlobalPkgs, useUserPackages, doom-emacs module). Accepts
  sharedModules parameter for platform-specific modules like plasma-manager.

This reduces code duplication across nixosModules, nixosModulesUnstable,
and darwinModules, making the flake easier to maintain and extend.

Implements bead: nixos-configs-ek5
2026-01-10 13:15:57 -08:00
c28d6a7896 chore(packages): Remove unused vulkan-hdr-layer package
The vulkan-hdr-layer package was not used anywhere in the configuration.
Removing it to reduce maintenance burden.
2026-01-10 13:14:19 -08:00
79ff0b8aa4 feat: Move bootstrap/build-liveusb scripts to flake apps
- Move bootstrap.sh to scripts/ and add as flake app
- Move build-liveusb.sh to scripts/ and add as flake app
- Update usage comments to show nix run commands
- Improve build-liveusb.sh with better error handling (set -euo pipefail)
- Remove emojis from output messages for cleaner log output

Scripts can now be run consistently via:
  nix run .#bootstrap -- <hostname>
  nix run .#build-liveusb

Implements bead: nixos-configs-bli
2026-01-10 13:06:52 -08:00
1d9249ea83 Ignore .beads in main 2026-01-10 12:45:53 -08:00
2fdd2d5345 fix(skills): Correct reconcile_beads instructions for bd and tea CLI
- Fix jq syntax: bd show --json returns array, use .[0].notes
- Add grep command to extract PR number from URL
- Correct Gitea workflow: tea pr view lists all PRs, use tea pr list --state=closed instead
2026-01-10 12:45:25 -08:00
722cb315dc Stop tracking sync_base.jsonl in 2026-01-10 12:42:26 -08:00
e042acff16 feat(skills): Improve parallel beads workflow with in_review status
- Add step to mark beads as 'in_review' after PR creation
- Add PR URL to bead notes for traceability
- Create reconcile_beads skill to close beads when PRs are merged
- Update summary table to show bead status instead of generic status

Implements bead: nixos-configs-85h
2026-01-10 12:41:04 -08:00
4fe531f87f feat(emacs): Add prebuilt Doom option using nix-doom-emacs-unstraightened
Implement pre-built Doom Emacs packages for the live USB image, eliminating
the need to run `doom sync` after first boot.

Changes:
- Add nix-doom-emacs-unstraightened flake input
- Add homeModule to all three module sets (nixos, unstable, darwin)
- Add `prebuiltDoom` option to emacs role (default: false)
- Enable prebuiltDoom for live-usb configuration
- Pin custom packages in packages.el for deterministic builds:
  - claude-code-ide, gptel-tool-library, beads

When prebuiltDoom=true, all Doom packages are compiled at nix build time
using emacs-overlay. The doom configuration is stored in the nix store
(read-only), and no `doom sync` is required at runtime.

This is ideal for:
- Live USB images
- Immutable/reproducible systems
- Offline deployments

Closes: nixos-configs-1wd
2026-01-10 12:33:40 -08:00
266dee9f8f feat(home-server): Add starship prompt and alphabetize roles 2026-01-10 12:24:52 -08:00
38395c238f Fix race condition in Claude Code skill installation 2026-01-10 12:24:06 -08:00
e4a1771f48 sync beads 2026-01-10 12:15:40 -08:00
ff1fb245ac Add home-manager integration for john-endesktop server
Create home-server.nix with minimal development-focused configuration
enabling base, emacs, development, and tmux roles. Update flake.nix
to wire up home-manager for the johno user on the server.
2026-01-10 12:15:12 -08:00
82fb1738c1 feat(home): Add starship cross-shell prompt role
Add a new home-manager role for starship.rs, a fast and customizable
cross-shell prompt written in Rust.

Configuration includes:
- Bash and Zsh integration enabled
- Clean character symbols (> for success, x for error)
- Vi mode indicator support
- Smart directory truncation (4 levels, truncate to repo root)
- Git branch and status display
- Nix shell indicator with snowflake symbol
- Command duration for long-running commands (2s+)
- Disabled noisy modules (language runtimes, cloud providers)

Enabled in: home-desktop, home-laptop-compact, home-live-usb,
home-media-center configurations.

Closes: nixos-configs-uji
2026-01-10 11:46:43 -08:00
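
A rough home-manager sketch matching that description; the symbol strings, thresholds, and disabled modules are illustrative rather than copied from the role.

```nix
{
  programs.starship = {
    enable = true;
    enableBashIntegration = true;
    enableZshIntegration = true;
    settings = {
      character = {
        success_symbol = "[>](bold green)";
        error_symbol = "[x](bold red)";
      };
      directory = {
        truncation_length = 4;
        truncate_to_repo = true;
      };
      nix_shell.symbol = "❄ ";
      cmd_duration.min_time = 2000;   # ms; show duration for 2s+ commands
      aws.disabled = true;            # example of a disabled noisy module
      gcloud.disabled = true;
    };
  };
}
```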
425e4f4cee Extract shared NixOS/Darwin base config into roles/common.nix
Create roles/common.nix containing shared configuration between NixOS and
Darwin: timezone, base packages (git, glances, pciutils, tree, usbutils, vim),
nix settings (experimental-features, max-jobs, trusted-users), gc config,
and allowUnfree setting.

Both roles/default.nix and roles/darwin.nix now import common.nix and only
contain platform-specific configuration.
2026-01-10 11:44:58 -08:00
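
The resulting roles/common.nix is presumably shaped something like this; the time zone, trusted-users list, and GC window are assumptions, while the package list and setting names come from the commit message.

```nix
{ pkgs, ... }:
{
  time.timeZone = "America/Los_Angeles";              # assumption
  environment.systemPackages = with pkgs; [
    git glances pciutils tree usbutils vim
  ];
  nix.settings = {
    experimental-features = [ "nix-command" "flakes" ];
    max-jobs = "auto";                                # assumption
    trusted-users = [ "root" "@wheel" ];              # assumption
  };
  nix.gc = {
    automatic = true;
    options = "--delete-older-than 30d";              # assumption
  };
  nixpkgs.config.allowUnfree = true;
}
```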
0e5b11e55d Remove humanlayer prefix from local skills installation
Local skills don't need the humanlayer: prefix since they're not
from the humanlayer/claude-plugins repo.
2026-01-10 11:21:31 -08:00
1ba1a8fc9d [nixos-configs-7hd] Add parallel_beads skill for orchestrating bead processing
- Add skills/ directory for local Claude skills
- Create parallel_beads.md skill that orchestrates:
  - Phase 1: Multi-select bead selection from bd ready
  - Phase 2: Parallel subagents for implementation (worktree, implement, commit, PR)
  - Phase 3: Parallel review subagents
  - Phase 4: Cleanup and summary
- Update default.nix to install local skills alongside humanlayer plugins
- Support both gh (GitHub) and tea (Gitea/Forgejo) based on origin URL
2026-01-10 11:14:43 -08:00
009b84656f [john-endesktop] Update migration plan with completed pre-migration items 2026-01-10 10:49:04 -08:00
ef4e4509d3 [john-endesktop] Remove swap 2026-01-10 09:43:22 -08:00
cd6b528692 [john-endesktop] Update with actual disk ids 2026-01-10 09:34:28 -08:00
3914b54c73 actually actually finish for real? 2026-01-09 11:24:52 -08:00
9aa74258f9 actually finish beads-sync migration 2026-01-09 11:23:46 -08:00
64dda20aa4 finish migration to beads-sync 2026-01-09 11:22:58 -08:00
ac01548e89 chore(beads): commit untracked JSONL files
Auto-committed by bd doctor --fix
2026-01-09 11:22:13 -08:00
bb7f79843b bd sync: 2026-01-09 11:22:07 2026-01-09 11:22:07 -08:00
c1d6663a36 bd init 2026-01-09 11:21:49 -08:00
3cf4403ffa Add perles TUI package for Beads issue tracking
Adds a custom Nix package for perles, a terminal user interface for the
Beads issue tracking system with BQL query language support.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 11:20:02 -08:00
4e6123de9a Simplify beads integration to use bd setup claude
Replace complex marketplace plugin installation with simple 'bd setup claude'
which installs hooks into ~/.claude/settings.json. This is the recommended
approach for Claude Code (CLI + hooks) vs the MCP server approach.
2026-01-08 19:28:17 -08:00
19ee298b71 Add beads Claude plugin installation via home-manager
- Add beadsRepo reference from flake input
- Add activation script to install beads as marketplace plugin
- Updates known_marketplaces.json and config.json declaratively
2026-01-08 19:21:58 -08:00
54 changed files with 3368 additions and 305 deletions

39
.beads/.gitignore vendored Normal file
View File

@@ -0,0 +1,39 @@
# SQLite databases
*.db
*.db?*
*.db-journal
*.db-wal
*.db-shm
# Daemon runtime files
daemon.lock
daemon.log
daemon.pid
bd.sock
sync-state.json
last-touched
# Local version tracking (prevents upgrade notification spam after git ops)
.local_version
# Legacy database files
db.sqlite
bd.db
# Worktree redirect file (contains relative path to main repo's .beads/)
# Must not be committed as paths would be wrong in other clones
redirect
# Merge artifacts (temporary files from 3-way merge)
beads.base.jsonl
beads.base.meta.json
beads.left.jsonl
beads.left.meta.json
beads.right.jsonl
beads.right.meta.json
# NOTE: Do NOT add negation patterns (e.g., !issues.jsonl) here.
# They would override fork protection in .git/info/exclude, allowing
# contributors to accidentally commit upstream issue databases.
# The JSONL files (issues.jsonl, interactions.jsonl) and config files
# are tracked by git by default since no pattern above ignores them.

0
.beads/.sync.lock Normal file
View File

81
.beads/README.md Normal file
View File

@@ -0,0 +1,81 @@
# Beads - AI-Native Issue Tracking
Welcome to Beads! This repository uses **Beads** for issue tracking - a modern, AI-native tool designed to live directly in your codebase alongside your code.
## What is Beads?
Beads is issue tracking that lives in your repo, making it perfect for AI coding agents and developers who want their issues close to their code. No web UI required - everything works through the CLI and integrates seamlessly with git.
**Learn more:** [github.com/steveyegge/beads](https://github.com/steveyegge/beads)
## Quick Start
### Essential Commands
```bash
# Create new issues
bd create "Add user authentication"
# View all issues
bd list
# View issue details
bd show <issue-id>
# Update issue status
bd update <issue-id> --status in_progress
bd update <issue-id> --status done
# Sync with git remote
bd sync
```
### Working with Issues
Issues in Beads are:
- **Git-native**: Stored in `.beads/issues.jsonl` and synced like code
- **AI-friendly**: CLI-first design works perfectly with AI coding agents
- **Branch-aware**: Issues can follow your branch workflow
- **Always in sync**: Auto-syncs with your commits
## Why Beads?
**AI-Native Design**
- Built specifically for AI-assisted development workflows
- CLI-first interface works seamlessly with AI coding agents
- No context switching to web UIs
🚀 **Developer Focused**
- Issues live in your repo, right next to your code
- Works offline, syncs when you push
- Fast, lightweight, and stays out of your way
🔧 **Git Integration**
- Automatic sync with git commits
- Branch-aware issue tracking
- Intelligent JSONL merge resolution
## Get Started with Beads
Try Beads in your own projects:
```bash
# Install Beads
curl -sSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash
# Initialize in your repo
bd init
# Create your first issue
bd create "Try out Beads"
```
## Learn More
- **Documentation**: [github.com/steveyegge/beads/docs](https://github.com/steveyegge/beads/tree/main/docs)
- **Quick Start Guide**: Run `bd quickstart`
- **Examples**: [github.com/steveyegge/beads/examples](https://github.com/steveyegge/beads/tree/main/examples)
---
*Beads: Issue tracking that moves at the speed of thought*

62
.beads/config.yaml Normal file
View File

@@ -0,0 +1,62 @@
# Beads Configuration File
# This file configures default behavior for all bd commands in this repository
# All settings can also be set via environment variables (BD_* prefix)
# or overridden with command-line flags
# Issue prefix for this repository (used by bd init)
# If not set, bd init will auto-detect from directory name
# Example: issue-prefix: "myproject" creates issues like "myproject-1", "myproject-2", etc.
# issue-prefix: ""
# Use no-db mode: load from JSONL, no SQLite, write back after each command
# When true, bd will use .beads/issues.jsonl as the source of truth
# instead of SQLite database
# no-db: false
# Disable daemon for RPC communication (forces direct database access)
# no-daemon: false
# Disable auto-flush of database to JSONL after mutations
# no-auto-flush: false
# Disable auto-import from JSONL when it's newer than database
# no-auto-import: false
# Enable JSON output by default
# json: false
# Default actor for audit trails (overridden by BD_ACTOR or --actor)
# actor: ""
# Path to database (overridden by BEADS_DB or --db)
# db: ""
# Auto-start daemon if not running (can also use BEADS_AUTO_START_DAEMON)
# auto-start-daemon: true
# Debounce interval for auto-flush (can also use BEADS_FLUSH_DEBOUNCE)
# flush-debounce: "5s"
# Git branch for beads commits (bd sync will commit to this branch)
# IMPORTANT: Set this for team projects so all clones use the same sync branch.
# This setting persists across clones (unlike database config which is gitignored).
# Can also use BEADS_SYNC_BRANCH env var for local override.
# If not set, bd sync will require you to run 'bd config set sync.branch <branch>'.
sync-branch: "beads-sync"
# Multi-repo configuration (experimental - bd-307)
# Allows hydrating from multiple repositories and routing writes to the correct JSONL
# repos:
# primary: "." # Primary repo (where this database lives)
# additional: # Additional repos to hydrate from (read-only)
# - ~/beads-planning # Personal planning repo
# - ~/work-planning # Work planning repo
# Integration settings (access with 'bd config get/set')
# These are stored in the database, not in this file:
# - jira.url
# - jira.project
# - linear.url
# - linear.api-key
# - github.org
# - github.repo


4
.beads/metadata.json Normal file
View File

@@ -0,0 +1,4 @@
{
"database": "beads.db",
"jsonl_export": "sync_base.jsonl"
}

3
.gitattributes vendored Normal file
View File

@@ -0,0 +1,3 @@
# Use bd merge for beads JSONL files
.beads/issues.jsonl merge=beads

18
.gitea/workflows/ci.yml Normal file
View File

@@ -0,0 +1,18 @@
name: CI
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: johno/gitea-actions/nix-setup@main
- name: Check flake
run: nix flake check

1
.gitignore vendored
View File

@@ -1,2 +1,3 @@
result
thoughts
.beads

View File

@@ -6,6 +6,10 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
This is a NixOS configuration repository using flakes, managing multiple machines and home-manager configurations. The repository follows a modular architecture with reusable "roles" that can be composed for different machines.
## Issue Tracking
This repository uses `beads` for issue tracking and management. Run `bd quickstart` to get an overview of the system at the start of every session.
## Architecture
### Flake Structure
@@ -201,3 +205,29 @@ tea issues close --repo johno/nixos-configs 2
## Important Notes
- **Sudo access**: Claude Code does not have sudo access. Ask the user to run elevated commands like `sudo nixos-rebuild switch`
## Landing the Plane (Session Completion)
**When ending a work session**, you MUST complete ALL steps below. Work is NOT complete until `git push` succeeds.
**MANDATORY WORKFLOW:**
1. **File issues for remaining work** - Create issues for anything that needs follow-up
2. **Run quality gates** (if code changed) - Tests, linters, builds
3. **Update issue status** - Close finished work, update in-progress items
4. **PUSH TO REMOTE** - This is MANDATORY:
```bash
git pull --rebase
bd sync
git push
git status # MUST show "up to date with origin"
```
5. **Clean up** - Clear stashes, prune remote branches
6. **Verify** - All changes committed AND pushed
7. **Hand off** - Provide context for next session
**CRITICAL RULES:**
- Work is NOT complete until `git push` succeeds
- NEVER stop before pushing - that leaves work stranded locally
- NEVER say "ready to push when you are" - YOU must push
- If push fails, resolve and retry until it succeeds

View File

@@ -1,19 +0,0 @@
#!/usr/bin/env bash
# Build Live USB ISO from flake configuration
# Creates an uncompressed ISO suitable for Ventoy and other USB boot tools
set -e
echo "Building Live USB ISO..."
nix build .#nixosConfigurations.live-usb.config.system.build.isoImage --show-trace
if [ -f "./result/iso/"*.iso ]; then
iso_file=$(ls ./result/iso/*.iso)
echo "✅ Build complete!"
echo "📁 ISO location: $iso_file"
echo "💾 Ready for Ventoy or dd to USB"
else
echo "❌ Build failed - no ISO file found"
exit 1
fi

131
flake.lock generated
View File

@@ -1,5 +1,65 @@
{
"nodes": {
"beads": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": [
"nixpkgs-unstable"
]
},
"locked": {
"lastModified": 1767911810,
"narHash": "sha256-0L4ATr01UsmBC0rSW62VIMVVSUihAQu2+ZOoHk9BQnA=",
"owner": "steveyegge",
"repo": "beads",
"rev": "28ff9fe9919a9665a0f00f5b3fcd084b43fb6cc3",
"type": "github"
},
"original": {
"owner": "steveyegge",
"repo": "beads",
"type": "github"
}
},
"doomemacs": {
"flake": false,
"locked": {
"lastModified": 1767773143,
"narHash": "sha256-QL/t9v2kFNxBDyNJb/s411o3mxujan+QX5IZglTdpTk=",
"owner": "doomemacs",
"repo": "doomemacs",
"rev": "3e15fb36d7f94f0a218bda977be4d3f5da983a71",
"type": "github"
},
"original": {
"owner": "doomemacs",
"repo": "doomemacs",
"type": "github"
}
},
"emacs-overlay": {
"inputs": {
"nixpkgs": [
"nix-doom-emacs-unstraightened"
],
"nixpkgs-stable": [
"nix-doom-emacs-unstraightened"
]
},
"locked": {
"lastModified": 1768011937,
"narHash": "sha256-SnU2XTo34vwVaijs+4VwcXTNwMWO4nwzzs08N39UagA=",
"owner": "nix-community",
"repo": "emacs-overlay",
"rev": "79abf71d9897cf3b5189f7175cda1b1102abc65c",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "emacs-overlay",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
@@ -16,6 +76,24 @@
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"google-cookie-retrieval": {
"inputs": {
"nixpkgs": [
@@ -120,6 +198,27 @@
"type": "github"
}
},
"nix-doom-emacs-unstraightened": {
"inputs": {
"doomemacs": "doomemacs",
"emacs-overlay": "emacs-overlay",
"nixpkgs": [],
"systems": "systems_2"
},
"locked": {
"lastModified": 1768034604,
"narHash": "sha256-62pIZMvGHhYJmMiiBsxHqZt/dFyENPcFHlJq5NJF3Sw=",
"owner": "marienz",
"repo": "nix-doom-emacs-unstraightened",
"rev": "9b3b8044fe4ccdcbb2d6f733d7dbe4d5feea18bc",
"type": "github"
},
"original": {
"owner": "marienz",
"repo": "nix-doom-emacs-unstraightened",
"type": "github"
}
},
"nix-github-actions": {
"inputs": {
"nixpkgs": [
@@ -258,17 +357,49 @@
},
"root": {
"inputs": {
"beads": "beads",
"google-cookie-retrieval": "google-cookie-retrieval",
"home-manager": "home-manager",
"home-manager-unstable": "home-manager-unstable",
"jovian": "jovian",
"nix-darwin": "nix-darwin",
"nix-doom-emacs-unstraightened": "nix-doom-emacs-unstraightened",
"nixos-wsl": "nixos-wsl",
"nixpkgs": "nixpkgs_2",
"nixpkgs-unstable": "nixpkgs-unstable",
"plasma-manager": "plasma-manager",
"plasma-manager-unstable": "plasma-manager-unstable"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",

115
flake.nix
View File

@@ -42,92 +42,89 @@
url = "github:Jovian-Experiments/Jovian-NixOS";
inputs.nixpkgs.follows = "nixpkgs-unstable";
};
beads = {
url = "github:steveyegge/beads";
inputs.nixpkgs.follows = "nixpkgs-unstable";
};
nix-doom-emacs-unstraightened = {
url = "github:marienz/nix-doom-emacs-unstraightened";
# Don't follow nixpkgs to avoid rebuild issues with emacs-overlay
inputs.nixpkgs.follows = "";
};
};
outputs = { self, nixpkgs, nixpkgs-unstable, nixos-wsl, ... } @ inputs: let
nixosModules = [
./roles
] ++ [
inputs.home-manager.nixosModules.home-manager
{
nixpkgs.overlays = [
(final: prev: {
# Shared overlay function to reduce duplication across module sets
# Parameters:
# unstableOverlays: Additional overlays to apply when importing nixpkgs-unstable
mkBaseOverlay = { unstableOverlays ? [] }: (final: prev: {
unstable = import nixpkgs-unstable {
system = prev.stdenv.hostPlatform.system;
config.allowUnfree = true;
overlays = unstableOverlays;
};
custom = prev.callPackage ./packages {};
# Compatibility: bitwarden renamed to bitwarden-desktop in unstable
bitwarden-desktop = prev.bitwarden-desktop or prev.bitwarden;
})
];
});
# Shared home-manager configuration factory
# Parameters:
# sharedModules: Additional modules to include in home-manager.sharedModules
mkHomeManagerConfig = { sharedModules ? [] }: {
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
home-manager.sharedModules = [
inputs.plasma-manager.homeModules.plasma-manager
home-manager.sharedModules = sharedModules ++ [
inputs.nix-doom-emacs-unstraightened.homeModule
];
home-manager.extraSpecialArgs = {
globalInputs = inputs;
};
};
nixosModules = [
./roles
inputs.home-manager.nixosModules.home-manager
{
nixpkgs.overlays = [ (mkBaseOverlay {}) ];
}
(mkHomeManagerConfig {
sharedModules = [ inputs.plasma-manager.homeModules.plasma-manager ];
})
];
# Modules for unstable-based systems (like nix-deck)
nixosModulesUnstable = [
./roles
] ++ [
inputs.home-manager-unstable.nixosModules.home-manager
inputs.jovian.nixosModules.jovian
{
nixpkgs.overlays = [
(final: prev: {
unstable = import nixpkgs-unstable {
system = prev.stdenv.hostPlatform.system;
config.allowUnfree = true;
};
custom = prev.callPackage ./packages {};
# Compatibility: bitwarden renamed to bitwarden-desktop in unstable
bitwarden-desktop = prev.bitwarden-desktop or prev.bitwarden;
nixpkgs.overlays = [ (mkBaseOverlay {}) ];
}
(mkHomeManagerConfig {
sharedModules = [ inputs.plasma-manager-unstable.homeModules.plasma-manager ];
})
];
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
home-manager.sharedModules = [
inputs.plasma-manager-unstable.homeModules.plasma-manager
];
home-manager.extraSpecialArgs = {
globalInputs = inputs;
};
}
];
darwinModules = [
./roles/darwin.nix
] ++ [
inputs.home-manager.darwinModules.home-manager
{
nixpkgs.overlays = [
(final: prev: {
unstable = import nixpkgs-unstable {
system = prev.stdenv.hostPlatform.system;
config.allowUnfree = true;
overlays = [
(mkBaseOverlay {
# Override claude-code in unstable to use our custom GCS-based build
# (needed for corporate networks that block npm registry)
unstableOverlays = [
(ufinal: uprev: {
claude-code = prev.custom.claude-code or (prev.callPackage ./packages {}).claude-code;
claude-code = uprev.callPackage ./packages/claude-code {};
})
];
};
custom = prev.callPackage ./packages {};
# Compatibility: bitwarden renamed to bitwarden-desktop in unstable
bitwarden-desktop = prev.bitwarden-desktop or prev.bitwarden;
})
];
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
home-manager.extraSpecialArgs = {
globalInputs = inputs;
};
}
(mkHomeManagerConfig { sharedModules = []; })
];
in {
@@ -214,7 +211,11 @@
system = "x86_64-linux";
modules = nixosModules ++ [
./machines/john-endesktop/configuration.nix
# Minimal server - no home-manager needed
inputs.home-manager.nixosModules.home-manager
{
home-manager.users.johno = import ./home/home-server.nix;
home-manager.extraSpecialArgs = { inherit system; };
}
];
};
@@ -255,6 +256,16 @@
export PATH="${pkgs.lib.makeBinPath commonDeps}:$PATH"
${builtins.readFile ./scripts/upgrade.sh}
'';
bootstrap = pkgs.writeShellScriptBin "bootstrap" ''
export PATH="${pkgs.lib.makeBinPath commonDeps}:$PATH"
${builtins.readFile ./scripts/bootstrap.sh}
'';
build-liveusb = pkgs.writeShellScriptBin "build-liveusb" ''
export PATH="${pkgs.lib.makeBinPath commonDeps}:$PATH"
${builtins.readFile ./scripts/build-liveusb.sh}
'';
in {
update-doomemacs = {
type = "app";
@@ -272,6 +283,14 @@
type = "app";
program = "${upgrade}/bin/upgrade";
};
bootstrap = {
type = "app";
program = "${bootstrap}/bin/bootstrap";
};
build-liveusb = {
type = "app";
program = "${build-liveusb}/bin/build-liveusb";
};
}
);
};

View File

@@ -23,6 +23,7 @@
kubectl.enable = true;
tmux.enable = true;
plasma-manager.enable = true;
starship.enable = true;
};
targets.genericLinux.enable = true;

View File

@@ -24,5 +24,6 @@
imports = [
./roles
./roles/base-linux
];
}

View File

@@ -23,6 +23,7 @@
plasma-manager.enable = true;
emacs.enable = true;
i3_sway.enable = true;
starship.enable = true;
# Launcher wrappers for excluded/optional packages
launchers = {

View File

@@ -14,8 +14,14 @@
desktop.enable = true;
tmux.enable = true;
plasma-manager.enable = true;
emacs.enable = true;
emacs = {
enable = true;
# Use pre-built Doom Emacs - all packages built at nix build time
# This means no doom sync is needed after booting the live USB
prebuiltDoom = true;
};
i3_sway.enable = true;
starship.enable = true;
# development.enable = false; # Not needed for live USB
# communication.enable = false; # Not needed for live USB
# office.enable = false; # Not needed for live USB

View File

@@ -20,6 +20,7 @@
plasma-manager.enable = true;
emacs.enable = true;
i3_sway.enable = true;
starship.enable = true;
# office.enable = false; # Not needed for media center
# sync.enable = false; # Shared machine, no personal file sync
};

27
home/home-server.nix Normal file
View File

@@ -0,0 +1,27 @@
{ pkgs, globalInputs, system, ... }:
{
# Home Manager configuration for servers (minimal with development tools)
home.username = "johno";
home.homeDirectory = "/home/johno";
home.stateVersion = "24.05";
# Minimal roles for server with development capability
home.roles = {
base.enable = true;
development.enable = true;
emacs.enable = true;
kubectl.enable = true;
starship.enable = true;
tmux.enable = true;
};
targets.genericLinux.enable = true;
home.sessionVariables = {};
home.sessionPath = [];
imports = [
./roles
./roles/base-linux
];
}

View File

@@ -3,6 +3,7 @@
# Includes Linux-specific roles that require Linux-only home-manager modules
imports = [
../plasma-manager
../plasma-manager-kodi
../i3+sway
];
}

View File

@@ -15,9 +15,9 @@
./launchers
./media
./office
./plasma-manager-kodi
./sync
./tmux
./emacs
./starship
];
}

View File

@@ -5,7 +5,7 @@ with lib;
let
cfg = config.home.roles.development;
# Fetch the claude-plugins repository
# Fetch the claude-plugins repository (for humanlayer commands/agents)
# Update the rev to get newer versions of the commands
claudePluginsRepo = builtins.fetchGit {
url = "https://github.com/jeffh/claude-plugins.git";
@@ -14,6 +14,7 @@ let
rev = "5e3e4d937162185b6d78c62022cbfd1c8ad42c4c";
ref = "main";
};
in
{
options.home.roles.development = {
@@ -36,6 +37,7 @@ in
config = mkIf cfg.enable {
home.packages = [
globalInputs.beads.packages.${system}.default
pkgs.unstable.claude-code
pkgs.unstable.claude-code-router
pkgs.unstable.codex
@@ -82,11 +84,35 @@ in
fi
done
# Copy local skills from this repo (with retry for race conditions with running Claude)
for file in ${./skills}/*.md; do
if [ -f "$file" ]; then
filename=$(basename "$file" .md)
dest="$HOME/.claude/commands/''${filename}.md"
# Remove existing file first, then copy with retry on failure
rm -f "$dest" 2>/dev/null || true
if ! cp "$file" "$dest" 2>/dev/null; then
sleep 0.5
cp "$file" "$dest" || echo "Warning: Failed to copy $filename.md to commands"
fi
fi
done
$DRY_RUN_CMD echo "Claude Code humanlayer commands and agents installed successfully${
if cfg.allowArbitraryClaudeCodeModelSelection
then " (model specifications preserved)"
else " (model selection removed)"
}"
} + local skills"
'';
# Set up beads Claude Code integration (hooks for SessionStart/PreCompact)
# This uses the CLI + hooks approach which is recommended over MCP for Claude Code
home.activation.claudeCodeBeadsSetup = lib.hm.dag.entryAfter ["writeBoundary" "claudeCodeCommands"] ''
# Run bd setup claude to install hooks into ~/.claude/settings.json
# This is idempotent - safe to run multiple times
${globalInputs.beads.packages.${system}.default}/bin/bd setup claude 2>/dev/null || true
$DRY_RUN_CMD echo "Claude Code beads integration configured (hooks installed)"
'';
# Note: modules must be imported at top-level home config

View File

@@ -0,0 +1,247 @@
---
description: Implement a plan from thoughts/ for a bead issue
---
# Beads Implement
You are tasked with implementing an approved plan for a bead issue. Plans are stored in `thoughts/beads-{id}/plan.md`.
## Initial Setup
When this command is invoked:
1. **Parse the input for bead ID**:
- If a bead ID is provided, use it
- If no bead ID, check for beads with plans:
```bash
bd list --status=in_progress
```
Then check which have plans in `thoughts/beads-{id}/plan.md`
2. **Load bead context**:
```bash
bd show {bead-id}
```
Note the bead **type** (bug, feature, task) from the output.
3. **Check for plan and handle by type**:
Check if plan exists:
```bash
ls thoughts/beads-{bead-id}/plan.md 2>/dev/null
```
**If plan exists**: Proceed normally (skip to step 4)
**If no plan**:
- **type=bug**: Proceed without plan (simple bugs can implement directly)
- **type=feature or type=task**: Warn and ask:
```
No plan found for this {type}.
Plans help ensure complex work is well-designed and verifiable.
Location expected: thoughts/beads-{bead-id}/plan.md
Options:
1. Create a plan first (recommended) - Run /beads_plan {bead-id}
2. Proceed without a plan (for simple changes)
How would you like to proceed?
```
Wait for user response before continuing.
4. **Load plan and research context** (if plan exists):
- Read `thoughts/beads-{bead-id}/plan.md` FULLY
- Check for any existing checkmarks (- [x]) indicating partial progress
- Read any research at `thoughts/beads-{bead-id}/research.md`
5. **Mark bead in progress** (if not already):
```bash
bd update {bead-id} --status=in_progress
```
6. **Respond with**:
```
Implementing plan for bead {bead-id}: {bead-title}
Plan location: thoughts/beads-{bead-id}/plan.md
{If partial progress: "Resuming from Phase X - previous phases completed."}
I'll implement each phase and verify success criteria before proceeding.
```
## Implementation Process
### Step 1: Understand the Plan
1. **Read the plan completely**
2. **Check for existing progress** (checkmarked items)
3. **Read all files mentioned in the plan**
4. **Create a TodoWrite list** tracking each phase
### Step 2: Implement Each Phase
For each phase in the plan:
1. **Announce the phase**:
```
## Starting Phase {N}: {Phase Name}
This phase will: {overview from plan}
```
2. **Make the changes**:
- Follow the plan's specific instructions
- Use Edit tool for modifications
- Create new files only when specified
3. **Run automated verification**:
- Execute each command in "Automated Verification"
- Fix any issues before proceeding
4. **Update plan checkboxes**:
- Use Edit tool to check off completed items in the plan
- This enables resume if session is interrupted
5. **Update bead notes** with progress:
```bash
bd update {bead-id} --notes="Phase {N} complete. Automated verification passed."
```
### Step 3: Per-Plan Checkpoint
**CRITICAL**: After completing ALL phases and ALL automated verification:
```
## Implementation Complete - Ready for Manual Verification
All phases completed and automated verification passed:
- [ ] Phase 1: {name} - DONE
- [ ] Phase 2: {name} - DONE
- [ ] ...
**Automated checks passed:**
- {List of automated checks that passed}
**Please perform manual verification:**
- {List manual verification items from plan}
Let me know when manual testing is complete so I can close the bead.
```
**STOP HERE and wait for user confirmation.**
Do NOT:
- Close the bead automatically
- Proceed to "next steps" without confirmation
- Start additional work
### Step 4: After Manual Verification
When user confirms manual verification passed:
1. **Update plan status**:
- Edit the plan's frontmatter: `status: complete`
2. **Close the bead**:
```bash
bd close {bead-id} --reason="Implementation complete. All verification passed."
```
3. **Final summary**:
```
Bead {bead-id} closed.
Summary:
- {What was implemented}
- {Key changes made}
Artifacts:
- Plan: thoughts/beads-{bead-id}/plan.md
- {Any other artifacts created}
```
## Handling Issues
### When something doesn't match the plan:
```
Issue in Phase {N}:
Expected: {what the plan says}
Found: {actual situation}
Why this matters: {explanation}
Options:
1. Adapt the implementation to work with current state
2. Update the plan to reflect reality
3. Stop and investigate further
How should I proceed?
```
### When tests fail:
1. **Analyze the failure**
2. **Attempt to fix** if the fix is clear and within scope
3. **If fix is unclear**, report:
```
Test failure in Phase {N}:
Failing test: {test name}
Error: {error message}
I've attempted: {what you tried}
This may require: {your assessment}
```
### When blocked:
```
Blocked in Phase {N}:
Blocker: {description}
Impact: {what can't proceed}
Suggested resolution: {your recommendation}
```
## Resuming Work
If the plan has existing checkmarks:
1. **Trust completed work** - don't re-verify unless something seems off
2. **Pick up from first unchecked item**
3. **Verify previous work only if** current phase depends on it and seems broken
## Important Guidelines
1. **Follow the plan's intent** while adapting to reality
2. **Implement each phase fully** before moving to next
3. **Update checkboxes in real-time** as you complete items
4. **One checkpoint per plan** - not per phase
5. **Never close bead** without manual verification confirmation
6. **Keep bead notes updated** with progress
## Session Close Protocol
If you need to end the session before completion:
1. **Update plan** with current progress (checkboxes)
2. **Update bead notes**:
```bash
bd update {bead-id} --notes="In progress: Phase {N} partially complete. Next: {what's next}"
```
3. **Inform user** of status and how to resume
## Example Invocation
```
User: /beads:implement nixos-configs-abc123
Assistant: Implementing plan for bead nixos-configs-abc123...
## Starting Phase 1: Database Schema
This phase will add the new user_preferences table...
```

View File

@@ -0,0 +1,214 @@
---
description: Iterate on existing implementation plans for a bead issue
model: opus
---
# Beads Iterate
You are tasked with updating existing implementation plans based on feedback. Plans are stored in `thoughts/beads-{id}/plan.md`.
## Initial Setup
When this command is invoked:
1. **Parse the input**:
- Bead ID (required or ask for it)
- Requested changes/feedback (can be provided with command or after)
2. **Handle different scenarios**:
**No bead ID provided**:
```
Which bead's plan would you like to iterate on?
Recent beads with plans:
{list beads that have thoughts/beads-{id}/plan.md}
```
**Bead ID but no feedback**:
```
I've found the plan at thoughts/beads-{bead-id}/plan.md
What changes would you like to make? For example:
- "Add a phase for migration handling"
- "Update success criteria to include performance tests"
- "Adjust scope to exclude feature X"
- "Split Phase 2 into two separate phases"
```
**Both bead ID and feedback provided**:
- Proceed immediately to Step 1
## Iteration Process
### Step 1: Understand Current Plan
1. **Read the existing plan COMPLETELY**:
```bash
cat thoughts/beads-{bead-id}/plan.md
```
- Understand current structure, phases, scope
- Note success criteria and approach
2. **Read the bead for context**:
```bash
bd show {bead-id}
```
3. **Understand requested changes**:
- Parse what user wants to add/modify/remove
- Identify if changes require codebase research
### Step 2: Research If Needed
**Only if changes require new technical understanding:**
1. **Spawn parallel research tasks**:
- **codebase-locator**: Find relevant files
- **codebase-analyzer**: Understand implementation details
- **codebase-pattern-finder**: Find similar patterns
2. **Be specific about directories** in prompts
3. **Wait for ALL tasks** before proceeding
### Step 3: Present Understanding
Before making changes:
```
Based on your feedback, I understand you want to:
- {Change 1 with specific detail}
- {Change 2 with specific detail}
{If research was needed:}
My research found:
- {Relevant discovery}
- {Important constraint}
I plan to update the plan by:
1. {Specific modification}
2. {Another modification}
Does this align with your intent?
```
Get user confirmation before proceeding.
### Step 4: Update the Plan
1. **Make focused, precise edits**:
- Use Edit tool for surgical changes
- Maintain existing structure unless explicitly changing it
- Keep file:line references accurate
2. **Ensure consistency**:
- New phases follow existing pattern
- Update "What We're NOT Doing" if scope changes
- Maintain automated vs manual success criteria distinction
3. **Update plan metadata**:
- Update frontmatter `date` to current timestamp
- Add `iteration: {N}` to frontmatter
- Add `iteration_reason: "{brief description}"` to frontmatter
4. **Preserve completed work**:
- Don't uncheck items that were already completed
- If changing completed phases, discuss with user first
### Step 5: Save Iteration History (Optional)
For significant changes, save the previous version:
```bash
cp thoughts/beads-{bead-id}/plan.md thoughts/beads-{bead-id}/plan-v{N}.md
```
Then update the main plan.
### Step 6: Update Bead
```bash
bd update {bead-id} --notes="Plan iterated: {brief description of changes}"
```
### Step 7: Present Changes
```
I've updated the plan at `thoughts/beads-{bead-id}/plan.md`
Changes made:
- {Specific change 1}
- {Specific change 2}
The updated plan now:
- {Key improvement}
- {Another improvement}
Would you like any further adjustments?
```
## Important Guidelines
1. **Be Skeptical**:
- Don't blindly accept changes that seem problematic
- Question vague feedback - ask for clarification
- Point out conflicts with existing phases
2. **Be Surgical**:
- Make precise edits, not wholesale rewrites
- Preserve good content that doesn't need changing
- Only research what's necessary
3. **Be Thorough**:
- Read entire plan before making changes
- Ensure updated sections maintain quality
- Verify success criteria are still measurable
4. **Be Interactive**:
- Confirm understanding before making changes
- Allow course corrections
- Don't disappear into research without communicating
5. **No Open Questions**:
- If changes raise questions, ASK
- Don't update plan with unresolved questions
## Success Criteria Guidelines
When updating success criteria, maintain two categories:
**Automated Verification**:
- Commands: `make test`, `npm run lint`
- Prefer `make` commands when available
- File existence checks
**Manual Verification**:
- UI/UX functionality
- Performance under real conditions
- Edge cases hard to automate
## Handling Major Changes
If feedback requires significant restructuring:
1. **Discuss scope** before proceeding
2. **Consider if this should be a new plan** instead of iteration
3. **Preserve the original** in `plan-v{N}.md`
4. **Update bead description** if scope changed significantly
## Example Invocations
**With full context**:
```
User: /beads:iterate nixos-configs-abc123 - add error handling phase
Assistant: Based on your feedback, I understand you want to add a new phase for error handling...
```
**Interactive**:
```
User: /beads:iterate nixos-configs-abc123
Assistant: I've found the plan. What changes would you like to make?
User: Split Phase 2 into backend and frontend phases
Assistant: I'll split Phase 2 into two separate phases...
```

View File

@@ -0,0 +1,281 @@
---
description: Create detailed implementation plans for a bead issue
model: opus
---
# Beads Plan
You are tasked with creating detailed implementation plans for a bead issue. This skill integrates with the beads issue tracker and stores plans in the `thoughts/` directory.
## Initial Setup
When this command is invoked:
1. **Parse the input for bead ID**:
- If a bead ID is provided, use it
- If no bead ID, run `bd ready` and ask which bead to plan for
2. **Load bead context**:
```bash
bd show {bead-id}
```
- Read the bead description for requirements
- Check for existing research: `thoughts/beads-{bead-id}/research.md`
- Note any dependencies or blockers
3. **Create artifact directory**:
```bash
mkdir -p thoughts/beads-{bead-id}
```
4. **Check for existing research**:
- If `thoughts/beads-{bead-id}/research.md` exists, read it fully
- This research provides crucial context for planning
5. **Respond with**:
```
Creating implementation plan for bead {bead-id}: {bead-title}
{If research exists: "Found existing research at thoughts/beads-{bead-id}/research.md - incorporating findings."}
Let me analyze the requirements and codebase to create a detailed plan.
```
## Planning Process
### Step 1: Context Gathering
1. **Read all mentioned files FULLY**:
- Bead description references
- Existing research document
- Any linked tickets or docs
- Use Read tool WITHOUT limit/offset
2. **Spawn initial research tasks**:
- **codebase-locator**: Find all files related to the task
- **codebase-analyzer**: Understand current implementation
- **codebase-pattern-finder**: Find similar features to model after
- **thoughts-locator**: Find any existing plans or decisions
3. **Read all files identified by research**:
- Read them FULLY into main context
- Cross-reference with requirements
### Step 2: Present Understanding
Before writing the plan, confirm understanding:
```
Based on the bead and my research, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question requiring human judgment]
- [Business logic clarification]
```
Only ask questions you genuinely cannot answer through code investigation.
### Step 3: Research & Discovery
After getting clarifications:
1. **If user corrects any misunderstanding**:
- Spawn new research tasks to verify
- Read specific files/directories mentioned
- Only proceed once verified
2. **Present design options**:
```
Based on my research:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
Which approach aligns best?
```
### Step 4: Plan Structure
Once aligned on approach:
```
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
Does this phasing make sense?
```
Get feedback on structure before writing details.
### Step 5: Write the Plan
Write to `thoughts/beads-{bead-id}/plan.md`:
```markdown
---
date: {ISO timestamp}
bead_id: {bead-id}
bead_title: "{bead title}"
author: claude
git_commit: {commit hash}
branch: {branch name}
repository: {repo name}
status: draft
---
# {Feature/Task Name} Implementation Plan
## Overview
{Brief description of what we're implementing and why}
## Current State Analysis
{What exists now, what's missing, key constraints}
### Key Discoveries:
- {Finding with file:line reference}
- {Pattern to follow}
## Desired End State
{Specification of desired end state and how to verify it}
## What We're NOT Doing
{Explicitly list out-of-scope items}
## Implementation Approach
{High-level strategy and reasoning}
## Phase 1: {Descriptive Name}
### Overview
{What this phase accomplishes}
### Changes Required:
#### 1. {Component/File Group}
**File**: `path/to/file.ext`
**Changes**: {Summary}
```{language}
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Tests pass: `make test`
- [ ] Linting passes: `make lint`
- [ ] Type checking passes: `make typecheck`
#### Manual Verification:
- [ ] Feature works as expected in UI
- [ ] Edge cases handled correctly
---
## Phase 2: {Descriptive Name}
{Similar structure...}
---
## Testing Strategy
### Unit Tests:
- {What to test}
- {Key edge cases}
### Integration Tests:
- {End-to-end scenarios}
### Manual Testing Steps:
1. {Specific step}
2. {Another step}
## References
- Bead: {bead-id}
- Research: `thoughts/beads-{bead-id}/research.md`
- Similar implementation: {file:line}
```
### Step 6: Update the bead
```bash
bd update {bead-id} --notes="Plan created: thoughts/beads-{bead-id}/plan.md"
```
### Step 7: Create implementation bead (if appropriate)
If the planning bead is separate from implementation:
```bash
bd create --title="Implement: {feature name}" --type=task --priority=1 \
--description="Implement the plan at thoughts/beads-{original-bead-id}/plan.md
See bead {original-bead-id} for planning context."
# Link as dependency
bd dep add {new-bead-id} {original-bead-id}
```
### Step 8: Present for Review
```
I've created the implementation plan at:
`thoughts/beads-{bead-id}/plan.md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
```
## Important Guidelines
1. **Be Skeptical**: Question vague requirements, identify potential issues early
2. **Be Interactive**: Don't write the full plan in one shot, get buy-in at each step
3. **Be Thorough**: Read all context files COMPLETELY, include specific file:line refs
4. **Be Practical**: Focus on incremental, testable changes
5. **No Open Questions**: If you have unresolved questions, STOP and ask
## Success Criteria Guidelines
Always separate into two categories:
**Automated Verification** (run by agents):
- Commands: `make test`, `npm run lint`, etc.
- File existence checks
- Type checking
**Manual Verification** (requires human):
- UI/UX functionality
- Performance under real conditions
- Edge cases hard to automate
## Example Invocation
```
User: /beads:plan nixos-configs-abc123
Assistant: Creating implementation plan for bead nixos-configs-abc123...
```

View File

@@ -0,0 +1,206 @@
---
description: Research a bead topic comprehensively and store findings in thoughts/
model: opus
---
# Beads Research
You are tasked with conducting comprehensive research for a bead issue. This skill integrates with the beads issue tracker and stores findings in the `thoughts/` directory.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT perform root cause analysis unless the user explicitly asks for them
- DO NOT propose future enhancements unless the user explicitly asks for them
- DO NOT critique the implementation or identify problems
- ONLY describe what exists, where it exists, how it works, and how components interact
- You are creating a technical map/documentation of the existing system
## Initial Setup
When this command is invoked:
1. **Parse the input for bead ID**:
- If a bead ID is provided (e.g., `nixos-configs-abc123`), use it
- If no bead ID provided, run `bd ready --type=research` to find research beads, or ask which bead to research
2. **Load bead context**:
```bash
bd show {bead-id}
```
- Read the bead description to understand the research question
- Note any linked files or references in the bead
3. **Create artifact directory**:
```bash
mkdir -p thoughts/beads-{bead-id}
```
4. **Respond with**:
```
Starting research for bead {bead-id}: {bead-title}
Research question: {extracted from bead description}
I'll analyze this thoroughly and store findings in thoughts/beads-{bead-id}/research.md
```
## Research Process
### Step 1: Read any directly mentioned files
- If the bead or user mentions specific files, read them FULLY first
- Use the Read tool WITHOUT limit/offset parameters
- Read these files yourself in the main context before spawning sub-tasks
### Step 2: Analyze and decompose the research question
- Break down the query into composable research areas
- Identify specific components, patterns, or concepts to investigate
- Create a research plan using TodoWrite
- Consider which directories, files, or patterns are relevant
### Step 3: Spawn parallel sub-agent tasks
Use specialized agents for research:
**For codebase research:**
- **codebase-locator** - Find WHERE files and components live
- **codebase-analyzer** - Understand HOW specific code works
- **codebase-pattern-finder** - Find examples of existing patterns
**For thoughts directory:**
- **thoughts-locator** - Discover what documents exist about the topic
- **thoughts-analyzer** - Extract key insights from specific documents
**For web research (only if explicitly requested):**
- **web-search-researcher** - External documentation and resources
Key principles:
- Run multiple agents in parallel when searching for different things
- Each agent knows its job - tell it what you're looking for, not HOW to search
- Remind agents they are documenting, not evaluating
### Step 4: Synthesize findings
Wait for ALL sub-agents to complete, then:
- Compile all results (codebase and thoughts findings)
- Prioritize live codebase findings as primary source of truth
- Connect findings across different components
- Include specific file paths and line numbers
- Highlight patterns, connections, and architectural decisions
### Step 5: Gather metadata
```bash
# Git metadata
git rev-parse HEAD # Current commit
git branch --show-current # Current branch
basename $(git rev-parse --show-toplevel) # Repo name
date -Iseconds # Current timestamp
```
### Step 6: Write research document
Write to `thoughts/beads-{bead-id}/research.md`:
```markdown
---
date: {ISO timestamp with timezone}
bead_id: {bead-id}
bead_title: "{bead title}"
researcher: claude
git_commit: {commit hash}
branch: {branch name}
repository: {repo name}
status: complete
---
# Research: {bead title}
**Bead**: {bead-id}
**Date**: {timestamp}
**Git Commit**: {commit hash}
**Branch**: {branch name}
## Research Question
{Original question from bead description}
## Summary
{High-level documentation answering the research question}
## Detailed Findings
### {Component/Area 1}
- Description of what exists (file.ext:line)
- How it connects to other components
- Current implementation details
### {Component/Area 2}
...
## Code References
- `path/to/file.py:123` - Description
- `another/file.ts:45-67` - Description
## Architecture Documentation
{Current patterns, conventions found in codebase}
## Historical Context (from thoughts/)
{Relevant insights from thoughts/ with references}
## Open Questions
{Any areas needing further investigation}
```
### Step 7: Update the bead
```bash
# Add artifact link to bead notes
bd update {bead-id} --notes="Research complete: thoughts/beads-{bead-id}/research.md"
```
### Step 8: Autonomy decision
**For research beads (type=research):**
- If research is complete and comprehensive, close the bead:
```bash
bd close {bead-id} --reason="Research complete. See thoughts/beads-{bead-id}/research.md"
```
- Present summary to user
**For other bead types:**
- Do NOT close the bead
- Present findings and ask how to proceed
### Step 9: Handle follow-up questions
If the user has follow-up questions:
- Append to the same research document
- Add a new section: `## Follow-up Research [{timestamp}]`
- Update the bead notes with the new findings
## Important Guidelines
- Always use parallel Task agents to maximize efficiency
- Always run fresh codebase research - never rely solely on existing documents
- Focus on finding concrete file paths and line numbers
- Research documents should be self-contained
- Document cross-component connections
- Include temporal context (when research was conducted)
- Keep the main agent focused on synthesis, not deep file reading
- **CRITICAL**: You and all sub-agents are documentarians, not evaluators
- **REMEMBER**: Document what IS, not what SHOULD BE
## Example Invocation
```
User: /beads:research nixos-configs-abc123
Assistant: Starting research for bead nixos-configs-abc123: Investigate auth flow
...
```
Or without bead ID:
```
User: /beads:research
Assistant: Let me check for research beads...
[runs bd ready]
Which bead would you like me to research?
```

View File

@@ -0,0 +1,387 @@
---
description: Comprehensive guide for the beads + humanlayer integrated workflow
---
# Beads Workflow Guide
This document describes the integrated workflow combining **beads** (issue tracking) with **humanlayer-style skills** (deep research, planning, implementation).
## Philosophy
### Two Systems, Complementary Purposes
| System | Purpose | Storage |
|--------|---------|---------|
| **Beads** | Track WHAT work exists | `.beads/` (git-synced) |
| **Thoughts** | Store HOW to do the work | `thoughts/` (local or symlinked) |
### Autonomy Model
| Bead Type | Agent Autonomy | Checkpoint |
|-----------|----------------|------------|
| `research` | **Full** - agent closes when satisfied | None |
| `feature`, `task`, `bug` | **Checkpointed** - pause for validation | Per-plan |
**Key insight**: Research produces artifacts. Implementation produces commits. Commits are the review boundary.
## Directory Structure
```
project/
├── .beads/ # Beads database (git-synced)
│ ├── beads.db
│ ├── config.yaml
│ └── issues.jsonl
├── thoughts/ # Artifacts (local or symlink)
│ ├── beads-{id}/ # Per-bead artifacts
│ │ ├── research.md
│ │ ├── plan.md
│ │ └── plan-v1.md # Iteration history
│ └── shared/ # Legacy/non-bead artifacts
│ ├── research/
│ └── plans/
└── home/roles/development/skills/ # Skill definitions
├── beads_research.md
├── beads_plan.md
├── beads_implement.md
└── beads_iterate.md
```
## When to Use What
### Use Beads When:
- Work spans multiple sessions
- Work has dependencies or blockers
- You need to track status across interruptions
- Multiple related tasks need coordination
- Context recovery after compaction matters
### Use TodoWrite When:
- Single-session execution tracking
- Breaking down work within a session
- Tracking progress on a single bead
### Use Both Together:
- Beads track the overall work items
- TodoWrite tracks progress within a session
- Example: Bead for "Implement auth", TodoWrite for each file being edited
## Workflow Patterns
### Pattern 1: Research-First Approach
```
1. Create research bead
bd create --title="Research auth patterns" --type=research --priority=1
2. Run research
/beads:research {bead-id}
→ Agent researches, writes to thoughts/beads-{id}/research.md
→ Agent closes bead when satisfied
3. Create implementation bead
bd create --title="Implement auth" --type=feature --priority=1
4. Plan the implementation
/beads:plan {bead-id}
→ Agent reads prior research, creates plan
→ Plan saved to thoughts/beads-{id}/plan.md
5. Implement
/beads:implement {bead-id}
→ Agent follows plan, pauses for manual verification
→ You validate, agent closes bead
```
### Pattern 2: Direct Implementation
For well-understood tasks without research:
```
1. Create bead
bd create --title="Fix login bug" --type=bug --priority=0
2. Plan and implement
/beads:plan {bead-id}
→ Quick planning based on bead description
/beads:implement {bead-id}
→ Follow plan, pause at checkpoint
```
### Pattern 3: Iterative Planning
When requirements evolve:
```
1. Initial plan
/beads:plan {bead-id}
2. Iterate based on feedback
/beads:iterate {bead-id} - add error handling phase
3. Iterate again if needed
/beads:iterate {bead-id} - split phase 2 into backend/frontend
4. Implement when plan is solid
/beads:implement {bead-id}
```
### Pattern 4: Parallel Work
Using parallel_beads skill for multiple independent tasks:
```
1. Check what's ready
bd ready
2. Select multiple beads
/parallel_beads
→ Select beads to work on
→ Each gets worktree, PR, review
3. Reconcile after PRs merge
/reconcile_beads
```
## Skills Reference
### /beads:research {bead-id}
- Conducts comprehensive codebase research
- Uses parallel sub-agents for efficiency
- Outputs to `thoughts/beads-{id}/research.md`
- **Autonomy**: Can close research beads automatically
### /beads:plan {bead-id}
- Creates detailed implementation plans
- Interactive process with checkpoints
- Outputs to `thoughts/beads-{id}/plan.md`
- Can create dependent implementation beads
### /beads:implement {bead-id}
- Follows plans from thoughts/
- Updates plan checkboxes for resumability
- **Checkpoint**: Pauses after plan completion for manual verification
- Only closes bead after human confirms
### /beads:iterate {bead-id}
- Updates existing plans based on feedback
- Preserves plan structure while making targeted changes
- Saves iteration history as `plan-v{N}.md`
### /parallel_beads
- Orchestrates parallel bead processing
- Creates worktrees, PRs, reviews for multiple beads
- Good for batching independent work
### /reconcile_beads
- Closes beads whose PRs have merged
- Run after merging PRs to keep beads in sync
## Session Protocols
### Starting a Session
```bash
# Check what's available
bd ready
# Pick work and start
bd update {bead-id} --status=in_progress
```
### Ending a Session
```bash
# Always run this checklist:
[ ] git status # Check changes
[ ] git add <files> # Stage code changes
[ ] bd sync # Sync beads
[ ] git commit -m "..." # Commit code
[ ] git push # Push to remote
```
### Resuming Work
```bash
# Find in-progress work
bd list --status=in_progress
# Check bead notes for context
bd show {bead-id}
# Check for partial plan progress
cat thoughts/beads-{id}/plan.md | grep "\[x\]"
```
## Thoughts Directory Patterns
### For Work Repos (via symlink)
```
project/thoughts → ~/thoughts/repos/{repo-name}/
```
- Syncs via codelayer to work remote
- Shared across projects on same machine
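If the symlink is not in place yet, a minimal sketch of creating it by hand (the `~/thoughts/repos/{repo-name}` target follows the convention above; syncing itself is handled by codelayer):
```bash
# Link this project's thoughts/ to the shared per-repo directory
REPO_NAME=$(basename "$(git rev-parse --show-toplevel)")
mkdir -p ~/thoughts/repos/"$REPO_NAME"
ln -sfn ~/thoughts/repos/"$REPO_NAME" thoughts
```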
### For Personal Repos (local)
```
project/thoughts/ # Regular directory, not symlink
```
- Stays local to project
- Committed with project or gitignored
### Determining Which Pattern
```bash
# Check if thoughts is a symlink
ls -la thoughts
# If symlink, it points to ~/thoughts/repos/{repo}/
# If directory, it's local to this project
```
## Best Practices
### 1. Bead Descriptions Matter
Write clear descriptions - they're the input for research and planning:
```bash
bd create --title="Implement user preferences" --type=feature \
--description="Add user preferences storage and UI.
Requirements:
- Store preferences in SQLite
- Expose via REST API
- Add settings page in UI
See related: thoughts/shared/research/preferences-patterns.md"
```
### 2. Link Artifacts in Beads
Always update bead notes with artifact locations:
```bash
bd update {id} --notes="Research: thoughts/beads-{id}/research.md
Plan: thoughts/beads-{id}/plan.md"
```
### 3. Use Dependencies
Structure work with dependencies:
```bash
# Research blocks planning
bd dep add {plan-bead} {research-bead}
# Planning blocks implementation
bd dep add {impl-bead} {plan-bead}
```
### 4. Trust the Checkpoint Model
- Research beads: Let agent close them
- Implementation beads: Always validate before closing
- If in doubt, err on the side of checkpoints
### 5. Keep Plans Updated
- Check off completed items as you go
- Update notes with progress
- This enables seamless resume across sessions
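For a quick progress check, a minimal sketch (assumes checklist items use the `- [ ]` / `- [x]` convention shown in the plan examples):
```bash
# Count completed vs. total checklist items in a bead's plan
PLAN="thoughts/beads-$BEAD_ID/plan.md"
DONE=$(grep -c -- '- \[x\]' "$PLAN" || true)
TOTAL=$(grep -c -- '- \[[ x]\]' "$PLAN" || true)
echo "$DONE of $TOTAL plan items completed"
```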
## Troubleshooting
### "What bead should I work on?"
```bash
bd ready # Shows unblocked work
```
### "Where did the research go?"
```bash
ls thoughts/beads-{id}/
bd show {id} # Check notes for artifact links
```
### "Plan doesn't match reality"
```bash
/beads:iterate {id} # Update plan based on findings
```
### "Session ended mid-implementation"
```bash
bd show {id} # Check notes for progress
cat thoughts/beads-{id}/plan.md | grep "\[x\]" # See completed items
/beads:implement {id} # Resume - will pick up from last checkpoint
```
### "Bead is blocked"
```bash
bd show {id} # See what's blocking
bd blocked # See all blocked beads
```
## Migration Notes
### From Pure Humanlayer to Beads+Humanlayer
Old pattern:
```
thoughts/shared/research/2025-01-01-topic.md
thoughts/shared/plans/2025-01-01-feature.md
```
New pattern:
```
thoughts/beads-{id}/research.md
thoughts/beads-{id}/plan.md
```
The `shared/` structure still works for non-bead artifacts, but prefer per-bead directories for tracked work.
### Existing Content
- Keep existing `thoughts/shared/` content
- New bead-tracked work uses `thoughts/beads-{id}/`
- Reference old research from bead descriptions when relevant
## Design Decisions
### Phase Tracking: Artifacts vs Statuses
**Current approach**: Skills infer workflow phase from artifact presence:
- Has `research.md` → research done
- Has `plan.md` → planning done
- No artifacts → needs research/planning
**Alternative considered**: Explicit phase statuses (`needs_research`, `needs_plan`, `implementing`, etc.)
**Why artifacts win**:
1. **Single source of truth** - Status can't drift from reality
2. **Less state to maintain** - No need to update status when creating artifacts
3. **Works across repos** - No custom status config needed
4. **Skills already check artifacts** - Natural fit with existing behavior
**When explicit statuses would help**:
- Pipeline visualization (e.g., `bd list --status=needs_plan`)
- Agent self-selection by phase
- Team coordination dashboards
**Recommendation**: Keep artifact-inference as primary mechanism. If pipeline visibility becomes important, consider adding statuses that skills auto-set when creating artifacts (advisory, not enforced).
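To make the inference concrete, a minimal sketch of how a skill might derive phase from artifacts (the `bead_phase` helper is hypothetical, not part of `bd`):
```bash
# Infer workflow phase for a bead purely from artifact presence
bead_phase() {
  local dir="thoughts/beads-$1"
  if [ -f "$dir/plan.md" ]; then
    echo "planned"      # plan exists -> ready to implement
  elif [ -f "$dir/research.md" ]; then
    echo "researched"   # research exists -> ready to plan
  else
    echo "new"          # no artifacts -> needs research/planning
  fi
}
bead_phase nixos-configs-abc123
```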
### One Bead Per Feature (Default)
**Current approach**: File one bead per logical feature. Skills handle phases internally.
**Alternative considered**: Separate beads for research → planning → implementation, linked by dependencies.
**Why single bead wins for most work**:
1. **Lower friction** - Quick idea dump without filing 3 tickets
2. **Simpler tracking** - One status to check
3. **Natural grouping** - Artifacts stay together in `thoughts/beads-{id}/`
**When to split into multiple beads**:
- Research reveals the work should be multiple features
- Different phases need different assignees
- Explicit dependency tracking matters (e.g., "auth must ship before payments")
**The discovered-work pattern**: Start with one bead. If research reveals split work, file additional beads with dependencies. Skills guide this naturally.
### Plan Requirements by Type
**Bug fixes** (`type=bug`): Can proceed without plans - usually well-scoped from bug report.
**Features/tasks** (`type=feature`, `type=task`): Should have plans - helps ensure design is sound before implementation.
This is advisory, not enforced. Skills warn but allow override for simple changes.
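A minimal sketch of the advisory check a skill might run (the `.type` JSON field name is an assumption; adjust to the actual `bd show --json` output):
```bash
# Warn when a non-bug bead has no plan artifact yet
TYPE=$(bd show "$BEAD_ID" --json | jq -r '.[0].type')   # field name assumed
if [ "$TYPE" != "bug" ] && [ ! -f "thoughts/beads-$BEAD_ID/plan.md" ]; then
  echo "Note: $BEAD_ID is a $TYPE without a plan - consider /beads:plan $BEAD_ID first"
fi
```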

View File

@@ -0,0 +1,472 @@
---
description: Address Gitea/Forgejo PR review comments with code changes
---
# Gitea PR Review
You are tasked with **addressing** PR review comments by making code changes, then summarizing what was done. This skill drives PR progress, not just conversation.
## Philosophy
**Comments are work items, not conversation starters.**
When a reviewer leaves a comment, they're identifying something that needs attention. This skill:
1. Categorizes comments by actionability
2. Makes code changes to address actionable comments
3. Commits and pushes those changes
4. Posts a single summary comment describing what was done
## Prerequisites
- `tea` CLI configured with a Gitea/Forgejo instance
- Access token from tea config: `~/.config/tea/config.yml`
- Repository must be a Gitea/Forgejo remote (not GitHub)
- **Nix users**: All tools available via nixpkgs (`nix run nixpkgs#tea`)
## Initial Setup
When this command is invoked:
1. **Parse the input for PR number**:
- If a PR number is provided as argument, use it
- If no PR number, detect from current branch (see PR Detection section)
2. **Verify required tools are available**:
```bash
which tea
```
If tea is missing:
```
Error: `tea` CLI not found.
Please install:
- Nix: nix run nixpkgs#tea
- Other: https://gitea.com/gitea/tea
```
**STOP** if tea is missing.
3. **Extract configuration from tea config**:
```bash
# Read tea config (it's YAML but simple enough to grep)
TEA_CONFIG="$HOME/.config/tea/config.yml"
GITEA_URL=$(grep -A5 'logins:' "$TEA_CONFIG" | grep 'url:' | head -1 | sed 's/.*url: //')
TOKEN=$(grep -A5 'logins:' "$TEA_CONFIG" | grep 'token:' | head -1 | sed 's/.*token: //')
```
If config is missing or invalid:
```
Error: Could not read tea config at ~/.config/tea/config.yml
Please ensure `tea` is installed and configured:
1. Install tea
2. Log in: tea login add --url https://your-gitea-instance --token YOUR_TOKEN
```
**STOP** if config is invalid.
4. **Detect repository info from git remote**:
```bash
REMOTE_URL=$(git remote get-url origin)
# Parse owner and repo from URL (handles both SSH and HTTPS)
OWNER=$(echo "$REMOTE_URL" | sed -E 's#.*[:/]([^/]+)/[^/]+\.git$#\1#')
REPO=$(echo "$REMOTE_URL" | sed -E 's#.*/([^/]+)\.git$#\1#')
```
5. **Ensure we're on the PR branch**:
```bash
CURRENT_BRANCH=$(git branch --show-current)
# Verify this branch corresponds to the PR
```
6. **Respond with**:
```
Addressing PR review comments for PR #{PR_NUMBER}...
Repository: {OWNER}/{REPO}
Branch: {CURRENT_BRANCH}
Gitea URL: {GITEA_URL}
```
## PR Detection
If no PR number is provided, detect from the current branch:
```bash
CURRENT_BRANCH=$(git branch --show-current)
tea pr list --fields index,head --output simple | grep "$CURRENT_BRANCH"
```
If no PR exists for the current branch, use `AskUserQuestion`:
```
No PR found for branch '{CURRENT_BRANCH}'.
Would you like to:
1. Enter a PR number manually
2. Cancel
```
## Workflow
### Step 1: Fetch and Parse Comments
Fetch all reviews and their comments:
```bash
# Fetch reviews (filter out dismissed reviews)
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews" \
| jq '[.[] | select(.dismissed != true)]'
# For each review, fetch comments
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews/$REVIEW_ID/comments"
```
**Filter resolved comments**: When processing comments, skip any that have been marked as resolved. Check the `resolver` field in the comment response - if it's not null, the comment has been resolved and should be skipped.
```bash
# Example: Filter to only unresolved comments
jq '[.[] | select(.resolver == null)]'
```
If no reviews found or all comments are resolved:
```
No unresolved reviews found for PR #{PR_NUMBER}.
Nothing to address.
```
**STOP** here.
### Step 2: Categorize Comments
For each comment, categorize it as one of:
| Category | Description | Action |
|----------|-------------|--------|
| **actionable** | Requests a code change, addition, or fix | Launch subagent to make change |
| **question** | Asks for clarification or explanation | Include answer in summary |
| **acknowledged** | FYI, self-resolved, or "no action needed" noted | Note in summary |
| **blocked** | Requires external input or is out of scope | Flag for user |
**Categorization heuristics**:
- Contains "add", "change", "fix", "update", "consider adding", "should be" → **actionable**
- Contains "?" or "why", "how", "what" → **question**
- Contains "no need to update", "will be separate", "acknowledged" → **acknowledged**
- Contains "discuss", "later", "out of scope", "blocked by" → **blocked**
Display the categorization:
```
## Comment Analysis
### Actionable (will make changes):
1. {file}:{line} - "{comment_summary}" → Will add nix note to prerequisites
### Questions (will answer in summary):
2. {file}:{line} - "{comment_summary}" → Explain CI token approach
### Acknowledged (no action needed):
3. {file}:{line} - "{comment_summary}" → Reviewer noted separate skill
### Blocked (needs input):
(none)
```
### Step 3: User Confirmation
Use `AskUserQuestion` to confirm the plan:
```
I've categorized {N} comments. My plan:
**Will make changes for:**
- {file}:{line}: {planned_change}
**Will explain in summary:**
- {file}:{line}: {planned_explanation}
**No action needed:**
- {file}:{line}: {reason}
Proceed with this plan?
```
Options:
1. **Proceed** - Execute the plan
2. **Modify** - Let user adjust categorization
3. **Cancel** - Exit without changes
### Step 4: Address Actionable Comments (Parallel Subagents)
For each actionable comment, launch a subagent using the Task tool:
```
Launch Task subagent with:
- subagent_type: "general-purpose"
- prompt: |
You are addressing a PR review comment. Make the requested change and nothing else.
**File**: {file_path}
**Line**: {line_number}
**Comment**: {comment_body}
**Diff context**:
```
{diff_hunk}
```
Instructions:
1. Read the file to understand context
2. Make the minimal change to address the comment
3. Do NOT commit - just make the edit
4. Report what you changed
Be precise. Only change what's needed to address this specific comment.
```
**Important**: Launch actionable comment subagents in parallel when they touch different files. For comments on the same file, run sequentially to avoid conflicts.
Wait for all subagents to complete and collect their results.
### Step 5: Commit and Push
After all subagents complete:
1. **Stage changes**:
```bash
git add -A
```
2. **Create commit with summary**:
```bash
git commit -m "Address PR review comments
Changes made:
- {file1}: {change_summary}
- {file2}: {change_summary}
Addresses comments from review by {reviewer}"
```
3. **Push to remote**:
```bash
git push
```
### Step 6: Post Summary Comment
Post a single comment summarizing all actions taken:
```bash
tea comment $PR_NUMBER '## Review Comments Addressed
cc @{reviewer1} @{reviewer2}
**Changes made** (commit {SHORT_SHA}):
- `{file1}:{line}`: {what_was_changed}
- `{file2}:{line}`: {what_was_changed}
**Responses to questions**:
- `{file3}:{line}`: {answer_to_question}
**Acknowledged** (no action needed):
- `{file4}:{line}`: {reason_no_action}
---
*Automated response via /gitea_pr_review*'
```
### Step 7: Final Summary
Display to user:
```
## PR Review Complete
**Commit**: {SHA}
**Changes**: {N} files modified
### Actions Taken:
- [x] {file1}:{line} - Added nix prerequisite note
- [x] {file2}:{line} - Explained CI approach in comment
- [ ] {file3}:{line} - Acknowledged (separate skill)
**Reviewers tagged**: @{reviewer1}, @{reviewer2}
**Comment posted**: {comment_url}
PR URL: {GITEA_URL}/{OWNER}/{REPO}/pulls/{PR_NUMBER}
```
**Note**: When posting the summary comment, tag all reviewers who left comments so they receive notifications about the changes.
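One way to gather the reviewer handles for tagging, following the single-quoted curl pattern described under Shell Command Patterns below (substitute your instance URL, repo, PR number, and token):
```bash
curl -s 'https://git.example.com/api/v1/repos/owner/repo/pulls/26/reviews' \
  -H 'Authorization: token YOUR_TOKEN_HERE' \
  | jq -r '[.[] | select(.dismissed != true) | .user.login] | unique | map("@" + .) | join(" ")'
```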
## Error Handling
### Subagent failed to make change
If a subagent fails:
```
Warning: Could not address comment on {file}:{line}
Reason: {error}
Options:
1. Skip this comment and continue
2. Retry with manual guidance
3. Abort all changes
```
### Push failed
```
Error pushing changes: {error}
Your changes are committed locally. You may need to:
1. Pull and resolve conflicts: git pull --rebase
2. Push again: git push
```
### No actionable comments
If all comments are questions/acknowledged:
```
No code changes needed.
All comments are either questions or acknowledged items.
Posting summary comment with explanations...
```
## API Reference
### Endpoints Used
| Action | Method | Endpoint |
|--------|--------|----------|
| List reviews | GET | `/api/v1/repos/{owner}/{repo}/pulls/{index}/reviews` |
| Get review comments | GET | `/api/v1/repos/{owner}/{repo}/pulls/{index}/reviews/{id}/comments` |
| Create issue comment | POST | via `tea comment` |
### Review States
- `PENDING` - Draft review not yet submitted
- `COMMENT` - General comment without approval/rejection
- `APPROVE` - Approving the changes
- `REQUEST_CHANGES` - Requesting changes before merge
## Shell Command Patterns
Claude Code's bash execution has quirks. Use these patterns for reliability:
### curl requests
**DO** - Use single quotes for URL and header separately:
```bash
curl -s 'https://git.example.com/api/v1/repos/owner/repo/pulls/1/reviews' \
-H 'Authorization: token YOUR_TOKEN_HERE' | jq .
```
**DON'T** - Variable expansion in `-H` flag often fails:
```bash
# This may fail with "blank argument" errors
curl -s -H "Authorization: token $TOKEN" "$URL"
```
### Iterating over reviews
**DO** - Run separate commands for each review ID:
```bash
echo "=== Review 4 ===" && curl -s 'URL/reviews/4/comments' -H 'Authorization: token ...' | jq .
echo "=== Review 5 ===" && curl -s 'URL/reviews/5/comments' -H 'Authorization: token ...' | jq .
```
**DON'T** - For loops with multiline bodies often fail:
```bash
# This may cause syntax errors
for id in 4 5 6; do
curl -s "URL/reviews/$id/comments"
done
```
### tea comment
**DO** - Use single-quoted string for comment body:
```bash
tea comment 26 '## Summary
Changes made:
- Item 1
- Item 2'
```
**DON'T** - Heredocs may hang or timeout:
```bash
# This may hang indefinitely
tea comment 26 "$(cat <<'EOF'
...
EOF
)"
```
## Limitations
1. **Thread replies**: Gitea API doesn't support inline thread replies. We post a single summary comment instead.
2. **Complex changes**: For comments requiring significant refactoring, the subagent may need guidance. The skill will flag these as "blocked" for user input.
3. **Merge conflicts**: If the branch is behind, you may need to rebase before changes can be pushed.
## Example Session
```
User: /gitea_pr_review 26
Assistant: Addressing PR review comments for PR #26...
Repository: johno/nixos-configs
Branch: bead/nixos-configs-vru
Gitea URL: https://git.johnogle.info
## Comment Analysis
### Actionable (will make changes):
1. gitea_pr_review.md:12 - "could we indicate nix+nixpkgs satisfies this?"
→ Will add note that nix users can get tools via nixpkgs
### Questions (will answer in summary):
(none)
### Acknowledged (no action needed):
2. gitea_pr_review.md:50 - "we eventually want to run this in CI..."
→ Reviewer noted this will be a separate skill
Proceed with this plan? [Proceed]
Launching subagent to address comment 1...
[Subagent completes edit]
Committing changes...
[abc1234] Address PR review comments
Pushing to remote...
Done.
Posting summary comment...
## PR Review Complete
**Commit**: abc1234
**Changes**: 1 file modified
### Actions Taken:
- [x] gitea_pr_review.md:12 - Added nix prerequisite note
- [ ] gitea_pr_review.md:50 - Acknowledged (separate skill)
**Comment posted**: https://git.johnogle.info/.../pulls/26#issuecomment-XXX
PR URL: https://git.johnogle.info/johno/nixos-configs/pulls/26
```
## See Also
- `tea` CLI: https://gitea.com/gitea/tea
- Gitea API: https://docs.gitea.com/api/
- `/beads_workflow` for full development workflow

View File

@@ -0,0 +1,130 @@
---
description: Import open Gitea issues as beads, skipping already-imported ones
---
# Import Gitea Issues as Beads
This skill imports open Gitea issues as beads, checking for duplicates to avoid re-importing already tracked issues.
## Prerequisites
- `tea` CLI must be installed and configured for the repository
- `bd` (beads) CLI must be installed
- Must be in a git repository with a Gitea/Forgejo remote
## Workflow
### Step 1: Get open Gitea issues
List all open issues using `tea`:
```bash
tea issues
```
This returns a table with columns: INDEX, TITLE, LABELS, MILESTONE
### Step 2: Get existing beads
List all current beads to check what's already imported:
```bash
bd list
```
Also check bead notes for issue URLs to identify imports:
```bash
bd list --json | jq -r '.[] | select(.notes != null) | .notes' | grep -oP 'issues/\K\d+'
```
### Step 3: Check for already-linked PRs
Check if any open PRs reference beads (skip these issues as they're being worked on):
```bash
tea pr list
```
Look for PRs with:
- Bead ID in title: `[nixos-configs-xxx]`
- Bead reference in body: `Implements bead:` or `Bead ID:`
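A coarse sketch of this check that covers only the title case (body references require fetching each PR individually):
```bash
# Succeeds if any PR in the list mentions the bead ID
tea pr list | grep -q 'nixos-configs-xxx' && echo "Skip: bead already referenced by a PR"
```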
### Step 4: For each untracked issue, create a bead
For each issue not already tracked:
1. **Get full issue details**:
```bash
tea issue [ISSUE_NUMBER]
```
2. **Determine bead type** based on issue content:
- "bug" - if issue mentions bug, error, broken, fix, crash
- "feature" - if issue mentions feature, add, new, enhancement
- "task" - default for other issues
3. **Create the bead**:
```bash
bd add "[ISSUE_TITLE]" \
--type=[TYPE] \
--priority=P2 \
--notes="Gitea issue: [ISSUE_URL]
Original issue description:
[ISSUE_BODY]"
```
Note: The `--notes` flag accepts multi-line content.
### Step 5: Report results
Present a summary:
```
## Gitea Issues Import Summary
### Imported as Beads
| Issue | Title | Bead ID | Type |
|-------|-------|---------|------|
| #5 | Add dark mode | nixos-configs-abc | feature |
| #3 | Config broken on reboot | nixos-configs-def | bug |
### Skipped (Already Tracked)
| Issue | Title | Reason |
|-------|-------|--------|
| #4 | Update flake | Existing bead: nixos-configs-xyz |
| #2 | Refactor roles | PR #7 references bead |
### Skipped (Other)
| Issue | Title | Reason |
|-------|-------|--------|
| #1 | Discussion: future plans | No actionable work |
```
## Type Detection Heuristics
Keywords to detect issue type:
**Bug indicators** (case-insensitive):
- bug, error, broken, fix, crash, fail, issue, problem, wrong, not working
**Feature indicators** (case-insensitive):
- feature, add, new, enhancement, implement, support, request, want, would be nice
**Task** (default):
- Anything not matching bug or feature patterns
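A minimal sketch of these heuristics as a shell helper (assumes the issue title and body are passed in as a single string):
```bash
# Classify an issue as bug, feature, or task via case-insensitive keyword matching
detect_type() {
  local text
  text=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  if printf '%s' "$text" | grep -qE 'bug|error|broken|fix|crash|fail|issue|problem|wrong|not working'; then
    echo "bug"
  elif printf '%s' "$text" | grep -qE 'feature|add|new|enhancement|implement|support|request|want|would be nice'; then
    echo "feature"
  else
    echo "task"
  fi
}
detect_type "Add dark mode"   # -> feature
```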
## Error Handling
- **tea not configured**: Report error and exit
- **bd not available**: Report error and exit
- **Issue already has bead**: Skip and report in summary
- **Issue is a PR**: Skip (tea shows PRs and issues separately)
## Notes
- Default priority is P2; adjust manually after import if needed
- Issue labels from Gitea are not automatically mapped to bead tags
- Run this periodically to catch new issues
- After import, use `bd ready` to see which beads can be worked on

View File

@@ -0,0 +1,281 @@
---
description: Orchestrate parallel bead processing with worktrees, PRs, and reviews
---
# Parallel Beads Workflow
This skill orchestrates parallel bead processing using subagents. Each bead gets its own worktree, implementation, PR, and review.
## Phase 1: Selection
1. **Get ready beads**: Run `bd ready` to list all beads with no blockers
2. **Filter by plan readiness**:
For each ready bead, check if it's ready for batch implementation:
- **Has plan** (`thoughts/beads-{id}/plan.md` exists): Include
- **type=bug** without plan: Include (simple bugs can implement directly)
- **type=feature/task** without plan: Exclude with warning
```bash
# Check for plan existence
ls thoughts/beads-{bead-id}/plan.md 2>/dev/null
```
3. **Report skipped beads**:
If any beads were skipped, inform the user:
```
Skipped beads (no plan):
- {bead-id}: {title} (type: feature) - Run /beads:plan {bead-id} first
- {bead-id}: {title} (type: task) - Run /beads:plan {bead-id} first
```
4. **Present selection**: Use `AskUserQuestion` with `multiSelect: true` to let the user choose which beads to work on
- Include bead ID and title for each option
- Only show beads that passed the plan check
- Allow selection of multiple beads
Example:
```
AskUserQuestion with:
- question: "Which beads do you want to work on in parallel?"
- multiSelect: true
- options from filtered bd ready output
```
## Phase 2: Parallel Implementation
For each selected bead, launch a subagent using the Task tool. All subagents should be launched in parallel (single message with multiple Task tool calls).
### Subagent Instructions Template
Each implementation subagent should receive these instructions:
```
Work on bead [BEAD_ID]: [BEAD_TITLE]
1. **Create worktree**:
- Branch name: `bead/[BEAD_ID]`
- Worktree path: `~/wt/[REPO_NAME]/[BEAD_ID]`
- Command: `git worktree add -b bead/[BEAD_ID] ~/wt/[REPO_NAME]/[BEAD_ID]`
2. **Review the bead requirements**:
- Run `bd show [BEAD_ID]` to understand the acceptance criteria
- Note any external issue references (GitHub issues, Linear tickets, etc.)
3. **Extract validation criteria**:
- Check for a plan: `thoughts/beads-[BEAD_ID]/plan.md`
- If plan exists:
- Read the plan and find the "Automated Verification" section
- Extract each verification command (lines starting with `- [ ]` followed by a command)
- Example: `- [ ] Tests pass: \`make test\`` → extract `make test`
- If no plan exists, use best-effort validation:
- Check if `Makefile` exists → try `make test` and `make lint`
- Check if `flake.nix` exists → try `nix flake check`
- Check if `package.json` exists → try `npm test`
- If none found, note "No validation criteria found"
4. **Implement the changes**:
- Work in the worktree directory
- Complete all acceptance criteria listed in the bead
After implementation, run validation:
- Execute each validation command from step 3
- Track results in this format:
```
VALIDATION_RESULTS:
- make test: PASS
- make lint: FAIL (exit code 1: src/foo.ts:23 - missing semicolon)
- nix flake check: SKIP (command not found)
```
- If any validation fails:
- Continue with PR creation (don't block)
- Document failures in bead notes: `bd update [BEAD_ID] --notes="Validation failures: [list]"`
5. **Commit and push**:
- Stage all changes: `git add -A`
- Create a descriptive commit message
- Push the branch: `git push -u origin bead/[BEAD_ID]`
6. **Create a PR**:
- Detect hosting provider from origin URL: `git remote get-url origin`
- If URL contains `github.com`, use `gh`; otherwise use `tea` (Gitea/Forgejo)
- PR title: "[BEAD_ID] [BEAD_TITLE]"
- PR body must include:
- Reference to bead ID: "Implements bead: [BEAD_ID]"
- Any external issue references from the bead (e.g., "Closes #123")
- Summary of changes
- For GitHub (`gh`):
```bash
gh pr create --title "[BEAD_ID] [BEAD_TITLE]" --body "$(cat <<'EOF'
## Summary
[Brief description of changes]
## Bead Reference
Implements bead: [BEAD_ID]
## External Issues
[Any linked issues from the bead]
## Changes
- [List of changes made]
## Validation
[Include validation results from step 4]
| Check | Status | Details |
|-------|--------|---------|
| make test | PASS | |
| make lint | FAIL | src/foo.ts:23 - missing semicolon |
| nix flake check | SKIP | command not found |
EOF
)"
```
- For Gitea (`tea`):
```bash
tea pr create --head bead/[BEAD_ID] --base main \
--title "[BEAD_ID] [BEAD_TITLE]" \
--description "## Summary
[Brief description of changes]
## Bead Reference
Implements bead: [BEAD_ID]
## External Issues
[Any linked issues from the bead]
## Changes
- [List of changes made]
## Validation
[Include validation results from step 4]
| Check | Status | Details |
|-------|--------|---------|
| make test | PASS | |
| make lint | FAIL | src/foo.ts:23 - missing semicolon |
| nix flake check | SKIP | command not found |"
```
7. **Update bead status**:
- Mark the bead as "in_review": `bd update [BEAD_ID] --status=in_review`
- Add the PR URL to the bead notes: `bd update [BEAD_ID] --notes="$(bd show [BEAD_ID] --json | jq -r '.[0].notes')
PR: [PR_URL]"`
8. **Report results**:
- Return:
- PR URL
- Bead ID
- Implementation status (success/failure/blocked)
- Validation summary: `X passed, Y failed, Z skipped`
- List of any validation failures with details
- If blocked or unable to complete, explain what's blocking progress
- If validation failed, include the specific failures so the main agent can summarize them for the user
```
### Launching Subagents
Use `subagent_type: "general-purpose"` for implementation subagents. Launch all selected beads' subagents in a single message for parallel execution:
```
<Task calls for each selected bead - all in one message>
```
Collect results from all subagents before proceeding.
## Phase 3: Parallel Review
After all implementation subagents complete, launch review subagents for each PR.
### Review Subagent Instructions Template
```
Review PR for bead [BEAD_ID]
1. **Detect hosting provider**: Run `git remote get-url origin` - if it contains `github.com` use `gh`, otherwise use `tea`
2. **Read the PR**:
- For GitHub: `gh pr view [PR_NUMBER] --json title,body,additions,deletions,files`
- For Gitea: `tea pr view [PR_NUMBER]`
- View the diff: `git diff main...bead/[BEAD_ID]`
3. **Review against acceptance criteria**:
- Run `bd show [BEAD_ID]` to get the acceptance criteria
- Verify each criterion is addressed
4. **Leave review comments**:
- For GitHub: `gh pr review [PR_NUMBER] --comment --body "[COMMENTS]"`
- For Gitea: `tea pr review [PR_NUMBER] --comment "[COMMENTS]"`
- Include:
- Acceptance criteria checklist (which are met, which might be missing)
- Code quality observations
- Suggestions for improvement
5. **Return summary**:
- Overall assessment (ready to merge / needs changes)
- Key findings
```
Launch all review subagents in parallel.
## Phase 4: Cleanup and Summary
After reviews complete:
1. **Clean up worktrees**:
```bash
git worktree remove ~/wt/[REPO_NAME]/[BEAD_ID] --force
```
Do this for each bead's worktree.
2. **Provide final summary**:
Present a table or list with:
- Bead ID
- PR URL
- Status (success / failed / blocked)
- Validation summary (X/Y passed)
- Review summary
- Any failures or blockers encountered
If any validation failures occurred, list them in a "Validation Failures" section so the user can address them.
Example output:
```
## Parallel Beads Summary
| Bead | PR | Bead Status | Validation | Review |
|------|-----|-------------|------------|--------|
| beads-abc | #123 | in_review | 3/3 passed | Approved |
| beads-xyz | #124 | in_review | 2/3 passed | Needs changes |
| beads-123 | - | open (failed) | - | Blocked by missing dependency |
### Validation Failures
- beads-xyz: `make lint` failed - src/foo.ts:23 missing semicolon
### Failures/Blockers
- beads-123: Could not complete because [reason]
### Next Steps
- Fix validation failures before merging
- Review PRs that need changes
- Address blockers for failed beads
- Run `/reconcile_beads` after PRs are merged to close beads
```
## Error Handling
- **Subagent failures**: If a subagent fails or times out, note it in the summary but continue with other beads
- **PR creation failures**: Report the error but continue with reviews of successful PRs
- **Worktree conflicts**: If a worktree already exists, ask the user if they want to remove it or skip that bead
## Resource Limits
- Consider limiting concurrent subagents to 3-5 to avoid overwhelming system resources
- If user selects more beads than the limit, process them in batches
## Notes
- This workflow integrates with the beads system (`bd` commands)
- Worktrees are created in `~/wt/[REPO_NAME]/` by convention
- Each bead gets its own isolated branch and worktree
- PRs automatically reference the bead ID for traceability

View File

@@ -0,0 +1,88 @@
---
description: Reconcile beads with merged PRs and close completed beads
---
# Reconcile Beads Workflow
This skill reconciles beads that are in `in_review` status with their corresponding PRs. If a PR has been merged, the bead is closed.
## Prerequisites
- Custom status `in_review` must be configured: `bd config set status.custom "in_review"`
- Beads in `in_review` status should have a PR URL in their notes
## Workflow
### Step 1: Find beads in review
```bash
bd list --status=in_review
```
### Step 2: For each bead, check PR status
1. **Get the PR URL from bead notes**:
```bash
bd show [BEAD_ID] --json | jq -r '.[0].notes'
```
Note: `bd show --json` returns an array, so use `.[0]` to access the first element.
Extract the PR URL (look for lines starting with "PR:" or containing pull request URLs).
Extract the PR number: `echo "$NOTES" | grep -oP '/pulls/\K\d+'`
2. **Detect hosting provider**:
- Run `git remote get-url origin`
- If URL contains `github.com`, use `gh`; otherwise use `tea` (Gitea/Forgejo)
3. **Check PR status**:
- For GitHub:
```bash
gh pr view [PR_NUMBER] --json state,merged
```
- For Gitea:
```bash
tea pr list --state=closed
```
Look for the PR number in the INDEX column with STATE "merged".
Note: `tea pr view [PR_NUMBER]` lists all PRs, not a specific one. Use `tea pr list --state=closed` and look for your PR number in the results.
### Step 3: Close merged beads
If the PR is merged:
```bash
bd close [BEAD_ID] --reason="PR merged: [PR_URL]"
```
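Putting steps 2 and 3 together for the GitHub path, a condensed sketch (assumes the bead notes contain a line like `PR: https://.../pulls/123`):
```bash
# Close the bead if its PR has been merged
NOTES=$(bd show "$BEAD_ID" --json | jq -r '.[0].notes')
PR_URL=$(printf '%s\n' "$NOTES" | grep -oP 'PR: \K\S+' | head -1)
PR_NUMBER=$(printf '%s\n' "$PR_URL" | grep -oP '/pulls?/\K\d+')
STATE=$(gh pr view "$PR_NUMBER" --json state --jq '.state')
if [ "$STATE" = "MERGED" ]; then
  bd close "$BEAD_ID" --reason="PR merged: $PR_URL"
fi
```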
### Step 4: Report summary
Present results:
```
## Beads Reconciliation Summary
### Closed (PR Merged)
| Bead | PR | Title |
|------|-----|-------|
| beads-abc | #123 | Feature X |
| beads-xyz | #456 | Bug fix Y |
### Still in Review
| Bead | PR | Status | Title |
|------|-----|--------|-------|
| beads-def | #789 | Open | Feature Z |
### Issues Found
- beads-ghi: No PR URL found in notes
- beads-jkl: PR #999 not found (may have been deleted)
```
## Error Handling
- **Missing PR URL**: Skip the bead and report it
- **PR not found**: Report the error but continue with other beads
- **API errors**: Report and continue
## Notes
- This skill complements `/parallel_beads` which sets beads to `in_review` status
- Run this skill periodically or after merging PRs to keep beads in sync
- Beads with closed (but not merged) PRs are not automatically closed - they may need rework

View File

@@ -23,13 +23,30 @@ let
if pkgs.stdenv.isDarwin
then pkgs.emacs-macport.pkgs.withPackages emacsPackages
else pkgs.emacs.pkgs.withPackages emacsPackages;
# Path to doom config directory (relative to this file)
doomConfigDir = ./doom;
in
{
options.home.roles.emacs = {
enable = mkEnableOption "Doom Emacs with vterm and tree-sitter support";
prebuiltDoom = mkOption {
type = types.bool;
default = false;
description = ''
Use nix-doom-emacs-unstraightened to pre-build all Doom packages at
nix build time. This eliminates the need to run `doom sync` after
first boot, making it ideal for live USB images or immutable systems.
When enabled, the doom configuration is read-only (stored in nix store).
'';
};
};
config = mkIf cfg.enable {
config = mkIf cfg.enable (mkMerge [
# Common configuration for both modes
{
home.packages = [
pkgs.emacs-all-the-icons-fonts
pkgs.fira-code
@@ -46,13 +63,16 @@ in
pkgs.python3
];
fonts.fontconfig.enable = true;
}
# Standard Doom Emacs mode (requires doom sync at runtime)
(mkIf (!cfg.prebuiltDoom) {
programs.emacs = {
enable = true;
package = defaultEmacsPackage;
};
fonts.fontconfig.enable = true;
# Mount emacs and tree-sitter grammars from nix store
home.file = {
"${config.xdg.configHome}/emacs".source = doomEmacs;
@@ -73,5 +93,20 @@ in
rm -rf "${config.xdg.configHome}/doom"
ln -sf "${config.home.homeDirectory}/nixos-configs/home/roles/emacs/doom" "${config.xdg.configHome}/doom"
'';
})
# Pre-built Doom Emacs mode (no doom sync needed - ideal for live USB)
(mkIf cfg.prebuiltDoom {
programs.doom-emacs = {
enable = true;
doomDir = doomConfigDir;
doomLocalDir = "${config.xdg.dataHome}/doom";
# Add extra packages that aren't part of Doom but needed for our config
extraPackages = epkgs: [
epkgs.vterm
epkgs.treesit-grammars.with-all-grammars
];
};
})
]);
}

View File

@@ -167,6 +167,20 @@
claude-code-ide-window-side 'right
claude-code-ide-window-width 90))
(use-package! beads
:commands (beads)
:init
(map! :leader
(:prefix ("o" . "open")
(:prefix ("B" . "beads")
:desc "List issues" "B" (cmd! (require 'beads) (beads-list))
:desc "Project issues" "p" (cmd! (require 'beads) (beads-project-list))
:desc "Activity feed" "a" (cmd! (require 'beads) (beads-activity))
:desc "Stale issues" "s" (cmd! (require 'beads) (beads-stale))
:desc "Orphaned issues" "o" (cmd! (require 'beads) (beads-orphans))
:desc "Find duplicates" "d" (cmd! (require 'beads) (beads-duplicates))
:desc "Lint issues" "l" (cmd! (require 'beads) (beads-lint))))))
(after! gptel
(require 'gptel-tool-library)
(setq gptel-tool-library-use-maybe-safe t

View File

@@ -51,11 +51,21 @@
;; (package! org-caldav)
;; Note: Packages with custom recipes must be pinned for nix-doom-emacs-unstraightened
;; to build deterministically. Update pins when upgrading packages.
(package! gptel :recipe (:nonrecursive t))
(package! claude-code-ide
:recipe (:host github :repo "manzaltu/claude-code-ide.el"))
:recipe (:host github :repo "manzaltu/claude-code-ide.el")
:pin "760240d7f03ff16f90ede9d4f4243cd94f3fed73")
(package! gptel-tool-library
:recipe (:host github :repo "aard-fi/gptel-tool-library"
:files ("*.el")))
:files ("*.el"))
:pin "baffc3b0d74a2b7cbda0d5cd6dd7726d6ccaca83")
(package! beads
:recipe (:type git :repo "https://codeberg.org/ctietze/beads.el.git"
:files ("lisp/*.el"))
:pin "f40a6461d3c0fa0969311bbb6a1e30d1bba86c88")

View File

@@ -0,0 +1,72 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.home.roles.starship;
in
{
options.home.roles.starship = {
enable = mkEnableOption "starship cross-shell prompt";
};
config = mkIf cfg.enable {
programs.starship = {
enable = true;
enableBashIntegration = true;
enableZshIntegration = true;
settings = {
add_newline = true;
character = {
success_symbol = "[>](bold green)";
error_symbol = "[x](bold red)";
vimcmd_symbol = "[<](bold green)";
};
directory = {
truncation_length = 4;
truncate_to_repo = true;
};
git_branch = {
symbol = "";
format = "[$symbol$branch(:$remote_branch)]($style) ";
};
git_status = {
format = "([$all_status$ahead_behind]($style) )";
};
nix_shell = {
symbol = "";
format = "[$symbol$state( \\($name\\))]($style) ";
};
cmd_duration = {
min_time = 2000;
format = "[$duration]($style) ";
};
# Disable modules that are noisy or rarely needed
package.disabled = true;
nodejs.disabled = true;
python.disabled = true;
ruby.disabled = true;
java.disabled = true;
golang.disabled = true;
rust.disabled = true;
php.disabled = true;
lua.disabled = true;
perl.disabled = true;
terraform.disabled = true;
kubernetes.disabled = true;
docker_context.disabled = true;
aws.disabled = true;
gcloud.disabled = true;
azure.disabled = true;
};
};
};
}

View File

@@ -26,6 +26,7 @@ with lib;
enable = true;
autologin = true;
wayland = true;
appLauncherServer.enable = true;
jellyfinScaleFactor = 1.0;
};
nfs-mounts.enable = true;

View File

@@ -170,6 +170,7 @@ This document outlines the plan to migrate the john-endesktop server from Arch L
```bash
blkid /dev/nvme0n1p5
# Note the UUID for updating hardware-configuration.nix
/dev/nvme0n1p5: LABEL="nixos" UUID="5f4ad025-bfab-4aed-a933-6638348059e5" UUID_SUB="4734d820-7b8a-4b7f-853a-026021c1d204" BLOCK_SIZE="4096" TYPE="btrfs" PARTLABEL="data" PARTUUID="9ea025df-cdb7-48fd-b5d4-37cd5d8588eb"
```
8. **Copy your NixOS configuration to the server**
@@ -388,11 +389,11 @@ After successful migration and 24-48 hours of stable operation:
Pre-migration:
- [x] nvme0n1p5 removal from media pool complete
- [ ] Recent backup verified (< 24 hours)
- [ ] Maintenance window scheduled
- [ ] NixOS ISO downloaded
- [ ] Bootable USB created
- [ ] NixOS config builds successfully
- [x] Recent backup verified (< 24 hours)
- [x] Maintenance window scheduled
- [x] NixOS ISO downloaded
- [x] Bootable USB created
- [x] NixOS config builds successfully
During migration:
- [ ] ZFS pools exported

View File

@@ -104,6 +104,28 @@ with lib;
# User configuration
roles.users.enable = true;
# Enable as remote builder (similar to zix790prors)
roles.remote-build.enableBuilder = true;
# k3s agent configuration
roles.k3s-node = {
enable = true;
role = "agent";
# serverAddr defaults to https://10.0.0.222:6443
# tokenFile defaults to /etc/k3s/token
extraFlags = [
# Node labels for workload scheduling
# fast-cpu: This node has a faster CPU than other cluster nodes
"--node-label=fast-cpu=true"
# fast-storage: This node is the NFS host with fast local storage access
"--node-label=fast-storage=true"
# k3s-upgrade=disabled: NixOS manages k3s upgrades via Nix, not system-upgrade-controller
"--node-label=k3s-upgrade=disabled"
];
};
roles.virtualisation.enable = true;
# Time zone
time.timeZone = "America/Los_Angeles"; # Adjust as needed

View File

@@ -18,12 +18,29 @@
# File systems - these will need to be updated after installation
# The nvme0n1p5 partition will be formatted as btrfs for NixOS root
fileSystems."/" = {
# Update this device path after installation
device = "/dev/disk/by-uuid/CHANGE-THIS-TO-YOUR-UUID";
device = "/dev/disk/by-uuid/5f4ad025-bfab-4aed-a933-6638348059e5";
fsType = "btrfs";
options = [ "subvol=@" "compress=zstd" "noatime" ];
};
fileSystems."/home" = {
device = "/dev/disk/by-uuid/5f4ad025-bfab-4aed-a933-6638348059e5";
fsType = "btrfs";
options = [ "subvol=@home" "compress=zstd" "noatime" ];
};
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/5f4ad025-bfab-4aed-a933-6638348059e5";
fsType = "btrfs";
options = [ "subvol=@nix" "compress=zstd" "noatime" ];
};
fileSystems."/var/log" = {
device = "/dev/disk/by-uuid/5f4ad025-bfab-4aed-a933-6638348059e5";
fsType = "btrfs";
options = [ "subvol=@log" "compress=zstd" "noatime" ];
};
fileSystems."/boot" = {
# This should match your current EFI partition
device = "/dev/disk/by-uuid/F5C6-D570";
@@ -35,10 +52,8 @@
# The pools should be imported automatically via boot.zfs.extraPools
# /media and /swarmvols will be mounted by ZFS
# Swap - using ZFS zvol
swapDevices = [
{ device = "/dev/zvol/media/swap"; }
];
# No swap needed - 23GB RAM is sufficient for this NFS/ZFS server
swapDevices = [ ];
# CPU microcode
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;

View File

@@ -21,11 +21,18 @@
};
nfs-mounts.enable = true;
printing.enable = true;
remote-build.builders = [{
remote-build.builders = [
{
hostName = "zix790prors";
maxJobs = 16;
speedFactor = 3;
}];
}
{
hostName = "john-endesktop";
maxJobs = 1;
speedFactor = 1;
}
];
spotifyd.enable = true;
users = {
enable = true;
@@ -41,14 +48,9 @@
boot.initrd.luks.devices."luks-b614167b-9045-4234-a441-ac6f60a96d81".device = "/dev/disk/by-uuid/b614167b-9045-4234-a441-ac6f60a96d81";
services.logind.settings.Login = {
HandleLidSwitch = "suspend-then-hibernate";
HandlePowerKey = "hibernate";
HandlePowerKeyLongPress = "poweroff";
};
systemd.sleep.extraConfig = ''
HibernateDelaySec=30m
SuspendState=mem
'';
networking.hostName = "nix-book"; # Define your hostname.
# networking.wireless.enable = true; # Enables wireless support via wpa_supplicant.

View File

@@ -17,6 +17,15 @@
enable = true;
wayland = true;
};
nvidia = {
enable = true;
package = "latest";
graphics.extraPackages = with pkgs; [
mesa
libvdpau-va-gl
libva-vdpau-driver
];
};
users.enable = true;
};
@@ -29,28 +38,13 @@
wsl.wslConf.network.hostname = "wixos";
wsl.wslConf.user.default = "johno";
services.xserver.videoDrivers = [ "nvidia" ];
hardware.graphics = {
enable = true;
extraPackages = with pkgs; [
mesa
libvdpau-va-gl
libva-vdpau-driver
];
};
# WSL-specific environment variables for graphics
environment.sessionVariables = {
LD_LIBRARY_PATH = [
"/usr/lib/wsl/lib"
"/run/opengl-driver/lib"
];
};
hardware.nvidia = {
modesetting.enable = true;
nvidiaSettings = true;
open = true;
package = config.boot.kernelPackages.nvidiaPackages.latest;
};
# This value determines the NixOS release from which the default
# settings for stateful data, like file locations and database versions

View File

@@ -25,8 +25,12 @@ with lib;
wayland = true;
x11 = true;
};
kodi.enable = true;
nfs-mounts.enable = true;
nvidia.enable = true;
nvidia = {
enable = true;
graphics.enable32Bit = true;
};
printing.enable = true;
remote-build.enableBuilder = true;
users.enable = true;
@@ -47,27 +51,11 @@ with lib;
# Fix dual boot clock sync - tell Linux to use local time for hardware clock
time.hardwareClockInLocalTime = true;
# NVIDIA Graphics configuration
services.xserver.videoDrivers = [ "nvidia" ];
hardware.graphics.enable = true;
hardware.graphics.enable32Bit = true;
# Set DP-0 as primary display with 164.90Hz refresh rate
services.xserver.displayManager.sessionCommands = ''
${pkgs.xorg.xrandr}/bin/xrandr --output DP-0 --mode 3440x1440 --rate 164.90 --primary
'';
hardware.nvidia = {
modesetting.enable = true;
nvidiaSettings = true;
package = pkgs.linuxPackages.nvidiaPackages.stable;
open = true;
# For gaming performance
powerManagement.enable = false;
powerManagement.finegrained = false;
};
services.ollama = {
enable = true;
acceleration = "cuda";

View File

@@ -1,6 +1,5 @@
{ pkgs, ... }:
{
vulkanHDRLayer = pkgs.callPackage ./vulkan-hdr-layer {};
tea-rbw = pkgs.callPackage ./tea-rbw {};
app-launcher-server = pkgs.callPackage ./app-launcher-server {};
claude-code = pkgs.callPackage ./claude-code {};

View File

@@ -1,34 +0,0 @@
{ lib, stdenv, fetchFromGitHub, meson, pkg-config, vulkan-loader, ninja, writeText, vulkan-headers, vulkan-utility-libraries, jq, libX11, libXrandr, libxcb, wayland, wayland-scanner }:
stdenv.mkDerivation rec {
pname = "vulkan-hdr-layer";
version = "63d2eec";
src = (fetchFromGitHub {
owner = "Zamundaaa";
repo = "VK_hdr_layer";
rev = "869199cd2746e7f69cf19955153080842b6dacfc";
fetchSubmodules = true;
hash = "sha256-xfVYI+Aajmnf3BTaY2Ysg5fyDO6SwDFGyU0L+F+E3is=";
}).overrideAttrs (_: {
GIT_CONFIG_COUNT = 1;
GIT_CONFIG_KEY_0 = "url.https://github.com/.insteadOf";
GIT_CONFIG_VALUE_0 = "git@github.com:";
});
nativeBuildInputs = [ vulkan-headers meson ninja pkg-config jq ];
buildInputs = [ vulkan-headers vulkan-loader vulkan-utility-libraries libX11 libXrandr libxcb wayland wayland-scanner ];
# Help vulkan-loader find the validation layers
setupHook = writeText "setup-hook" ''
addToSearchPath XDG_DATA_DIRS @out@/share
'';
meta = with lib; {
description = "Layers providing Vulkan HDR";
homepage = "https://github.com/Zamundaaa/VK_hdr_layer";
platforms = platforms.linux;
license = licenses.mit;
};
}

View File

@@ -24,14 +24,6 @@ in
pulse.enable = true;
};
services.pulseaudio = {
package = pkgs.pulseaudioFull;
extraConfig = ''
load-module module-combine-sink
load-module module-switch-on-connect
'';
};
services.squeezelite = {
#enable = true;
pulseAudio = true;

roles/common.nix Normal file
View File

@@ -0,0 +1,36 @@
# Common configuration shared between NixOS and Darwin
{ lib, pkgs, ... }:
{
config = {
time.timeZone = "America/Los_Angeles";
environment.systemPackages = with pkgs; [
git
glances
ghostty.terminfo # So tmux works when SSH'ing from ghostty
pciutils
tree
usbutils
vim
];
nix = {
package = pkgs.nix;
settings = {
experimental-features = [ "nix-command" "flakes" ];
max-jobs = "auto";
trusted-users = [ "johno" ];
substituters = [
];
};
gc = {
automatic = true;
options = "--delete-older-than 10d";
};
};
nixpkgs.config.allowUnfree = true;
};
}

View File

@@ -7,6 +7,10 @@ let
setEnvironmentPath = "${config.system.build.setEnvironment}";
in
{
imports = [
./common.nix
];
config = {
# Salt manages /etc/bashrc, /etc/zshrc, /etc/zshenv
# nix-darwin writes to .local variants for nix-specific configuration
@@ -43,8 +47,6 @@ in
fi
'';
time.timeZone = "America/Los_Angeles";
# System preferences
system.defaults = {
# Custom keyboard shortcuts
@@ -79,42 +81,5 @@ in
};
};
};
environment.systemPackages = with pkgs; [
git
glances
pciutils
tree
usbutils
vim
];
nix = {
package = pkgs.nix;
# distributedBuilds = true;
# buildMachines = [{
# hostName = "z790prors.oglehome";
# system = "x86_64-linux";
# protocol = "ssh-ng";
# sshUser = "johno";
# sshKey = "/root/.ssh/id_ed25519";
# maxJobs = 3;
# speedFactor = 2;
# }];
settings = {
experimental-features = [ "nix-command" "flakes" ];
max-jobs = "auto";
trusted-users = [ "johno" ];
substituters = [
];
};
gc = {
automatic = true;
options = "--delete-older-than 10d";
};
};
nixpkgs.config.allowUnfree = true;
};
}

View File

@@ -4,10 +4,12 @@ with lib;
{
imports = [
./common.nix
./audio
./bluetooth
./btrfs
./desktop
./k3s-node
./kodi
./nfs-mounts
./nvidia
@@ -31,7 +33,6 @@ with lib;
LC_TELEPHONE = "en_US.UTF-8";
LC_TIME = "en_US.UTF-8";
};
time.timeZone = "America/Los_Angeles";
services.xserver.xkb = {
layout = "us";
@@ -49,42 +50,7 @@ with lib;
# Enable the OpenSSH daemon.
services.openssh.enable = true;
environment.systemPackages = with pkgs; [
git
glances
pciutils
tree
usbutils
vim
];
nix = {
package = pkgs.nix;
# distributedBuilds = true;
# buildMachines = [{
# hostName = "z790prors.oglehome";
# system = "x86_64-linux";
# protocol = "ssh-ng";
# sshUser = "johno";
# sshKey = "/root/.ssh/id_ed25519";
# maxJobs = 3;
# speedFactor = 2;
# }];
settings = {
experimental-features = [ "nix-command" "flakes" ];
max-jobs = "auto";
trusted-users = [ "johno" ];
substituters = [
];
};
gc = {
automatic = true;
randomizedDelaySec = "14m";
options = "--delete-older-than 10d";
};
};
nixpkgs.config.allowUnfree = true;
# NixOS-specific gc option (not available on Darwin)
nix.gc.randomizedDelaySec = "14m";
};
}

View File

@@ -0,0 +1,81 @@
{ lib, config, pkgs, ... }:
with lib;
let
cfg = config.roles.k3s-node;
in
{
options.roles.k3s-node = {
enable = mkEnableOption "Enable k3s node";
role = mkOption {
type = types.enum [ "server" "agent" ];
default = "agent";
description = "k3s role: server (control plane) or agent (worker)";
};
serverAddr = mkOption {
type = types.str;
default = "https://10.0.0.222:6443";
description = "URL of k3s server to join (required for agents, used for HA servers)";
};
tokenFile = mkOption {
type = types.path;
default = "/etc/k3s/token";
description = "Path to file containing the cluster join token";
};
clusterInit = mkOption {
type = types.bool;
default = false;
description = "Initialize a new cluster (first server only)";
};
extraFlags = mkOption {
type = types.listOf types.str;
default = [];
description = "Additional flags to pass to k3s";
};
gracefulNodeShutdown = mkOption {
type = types.bool;
default = true;
description = "Enable graceful node shutdown";
};
openFirewall = mkOption {
type = types.bool;
default = true;
description = "Open firewall ports for k3s";
};
};
config = mkIf cfg.enable {
# k3s service configuration
services.k3s = {
enable = true;
role = cfg.role;
tokenFile = cfg.tokenFile;
extraFlags = cfg.extraFlags;
gracefulNodeShutdown.enable = cfg.gracefulNodeShutdown;
serverAddr = if (cfg.role == "agent" || !cfg.clusterInit) then cfg.serverAddr else "";
clusterInit = cfg.role == "server" && cfg.clusterInit;
};
# Firewall rules for k3s
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [
6443 # k3s API server
10250 # kubelet metrics
] ++ optionals (cfg.role == "server") [
2379 # etcd clients (HA)
2380 # etcd peers (HA)
];
allowedUDPPorts = [
8472 # flannel VXLAN
];
};
};
}

View File

@@ -22,7 +22,7 @@ in
appLauncherServer = {
enable = mkOption {
type = types.bool;
default = true;
default = false;
description = "Enable HTTP app launcher server for remote control";
};
port = mkOption {

View File

@@ -8,9 +8,89 @@ in
{
  options.roles.nvidia = {
    enable = mkEnableOption "Enable the nvidia role";
    # Driver configuration options
    open = mkOption {
      type = types.bool;
      default = true;
      description = "Use the open source nvidia kernel driver (for Turing and newer GPUs).";
    };
    modesetting = mkOption {
      type = types.bool;
      default = true;
      description = "Enable kernel modesetting for nvidia.";
    };
    nvidiaSettings = mkOption {
      type = types.bool;
      default = true;
      description = "Enable the nvidia-settings GUI.";
    };
    package = mkOption {
      type = types.enum [ "stable" "latest" "beta" "vulkan_beta" "production" ];
      default = "stable";
      description = "The nvidia driver package to use.";
    };
    powerManagement = {
      enable = mkOption {
        type = types.bool;
        default = false;
        description = "Enable nvidia power management (useful for laptops, not recommended for desktops).";
      };
      finegrained = mkOption {
        type = types.bool;
        default = false;
        description = "Enable fine-grained power management for Turing and newer GPUs.";
      };
    };
    graphics = {
      enable = mkOption {
        type = types.bool;
        default = true;
        description = "Enable hardware graphics support.";
      };
      enable32Bit = mkOption {
        type = types.bool;
        default = false;
        description = "Enable 32-bit graphics libraries (needed for some games).";
      };
      extraPackages = mkOption {
        type = types.listOf types.package;
        default = [];
        description = "Extra packages to add to hardware.graphics.extraPackages.";
      };
    };
  };
  config = mkIf cfg.enable {
    # Set xserver video driver
    services.xserver.videoDrivers = [ "nvidia" ];
    # Graphics configuration
    hardware.graphics = {
      enable = cfg.graphics.enable;
      enable32Bit = cfg.graphics.enable32Bit;
      extraPackages = cfg.graphics.extraPackages;
    };
    # NVIDIA driver configuration
    hardware.nvidia = {
      modesetting.enable = cfg.modesetting;
      nvidiaSettings = cfg.nvidiaSettings;
      open = cfg.open;
      package = config.boot.kernelPackages.nvidiaPackages.${cfg.package};
      powerManagement.enable = cfg.powerManagement.enable;
      powerManagement.finegrained = cfg.powerManagement.finegrained;
    };
    # Additional packages for nvidia support
    environment.systemPackages = with pkgs; [
      libva-utils
      nvidia-vaapi-driver
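
The hunk above is truncated by the diff view. For orientation, a hedged sketch of how a host could tune this role; only the option names are taken from the definitions above, the values and the host itself are illustrative:

# Hypothetical host snippet (not part of this diff): desktop with a Turing-or-newer GPU.
{
  roles.nvidia = {
    enable = true;
    package = "stable";              # one of: stable, latest, beta, vulkan_beta, production
    open = true;                     # open kernel module, intended for Turing and newer
    graphics.enable32Bit = true;     # 32-bit libraries, needed for some games
    powerManagement.enable = false;  # per the description, leave off on desktops
  };
}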

View File

@@ -1,3 +1,66 @@
# Remote Build Role
#
# This module configures Nix distributed builds, allowing machines to offload
# builds to more powerful remote machines.
#
# SETUP INSTRUCTIONS
# ==================
#
# 1. BUILDER MACHINE SETUP
#    On machines that will serve as builders (e.g., zix790prors, john-endesktop):
#
#    a) Enable the builder role in configuration.nix:
#         roles.remote-build.enableBuilder = true;
#
#    b) After nixos-rebuild, the nix-builder user is created automatically.
#       You then need to add client SSH public keys to the builder. Either:
#
#       Option A - Manual (recommended for initial setup):
#         sudo mkdir -p /var/lib/nix-builder/.ssh
#         sudo bash -c 'cat >> /var/lib/nix-builder/.ssh/authorized_keys' << 'EOF'
#         ssh-ed25519 AAAA... root@client-hostname
#         EOF
#         sudo chown -R nix-builder:nix-builder /var/lib/nix-builder/.ssh
#         sudo chmod 700 /var/lib/nix-builder/.ssh
#         sudo chmod 600 /var/lib/nix-builder/.ssh/authorized_keys
#
#       Option B - Via NixOS config (if you store keys in the repo):
#         users.users.nix-builder.openssh.authorizedKeys.keys = [
#           "ssh-ed25519 AAAA... root@client-hostname"
#         ];
#
# 2. CLIENT MACHINE SETUP
#    On machines that will use remote builders (e.g., nix-book):
#
#    a) Configure builders in configuration.nix:
#         roles.remote-build.builders = [
#           {
#             hostName = "zix790prors";
#             maxJobs = 16;    # Number of parallel build jobs
#             speedFactor = 3; # Higher = prefer this builder
#           }
#           {
#             hostName = "john-endesktop";
#             maxJobs = 1;     # Conservative for busy machines
#             speedFactor = 1;
#           }
#         ];
#
#    b) Generate an SSH key for root (if one does not exist) and copy it to the builders:
#         sudo ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""
#         sudo cat /root/.ssh/id_ed25519.pub   # Add this to the builder's authorized_keys
#
#    c) Accept each builder's host key (as root):
#         sudo ssh nix-builder@zix790prors echo "Connected!"
#         sudo ssh nix-builder@john-endesktop echo "Connected!"
#
# 3. VERIFY SETUP
#    Test that distributed builds work:
#      nix build --rebuild nixpkgs#hello --print-build-logs
#
#    Check builder connectivity:
#      nix store ping --store ssh-ng://nix-builder@zix790prors
#
{ lib, config, pkgs, ... }:
with lib;
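
The module body is cut off in this view. As a rough, assumed sketch (not taken from this diff), the client-side builders option above would typically translate into standard Nix distributed-build settings, shaped like the commented-out buildMachines block shown earlier in this diff:

# Assumed shape of the generated client configuration; attribute names follow the
# standard nix.buildMachines schema, and the values mirror the example builder above.
{
  nix.distributedBuilds = true;
  nix.buildMachines = [{
    hostName = "zix790prors";
    system = "x86_64-linux";
    protocol = "ssh-ng";
    sshUser = "nix-builder";
    sshKey = "/root/.ssh/id_ed25519";
    maxJobs = 16;
    speedFactor = 3;
  }];
}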

bootstrap.sh → scripts/bootstrap.sh Executable file → Normal file
View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
# bootstrap.sh
# Usage: sudo ./bootstrap.sh <hostname>
# Usage: nix run .#bootstrap -- <hostname>
# Or: sudo ./scripts/bootstrap.sh <hostname>
set -euo pipefail
NEW_HOSTNAME="${1:?missing hostname}"
@@ -8,4 +9,3 @@ FLAKE_URI="git+https://git.johnogle.info/johno/nixos-configs.git#${NEW_HOSTNAME}
export NIX_CONFIG="experimental-features = nix-command flakes"
nixos-rebuild switch --flake "$FLAKE_URI"

scripts/build-liveusb.sh Normal file
View File

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
# Build Live USB ISO from flake configuration
# Creates an uncompressed ISO suitable for Ventoy and other USB boot tools
# Usage: nix run .#build-liveusb
# Or: ./scripts/build-liveusb.sh
set -euo pipefail
REPO_ROOT="${REPO_ROOT:-$(git rev-parse --show-toplevel 2>/dev/null || pwd)}"
echo "Building Live USB ISO..."
nix build "${REPO_ROOT}#nixosConfigurations.live-usb.config.system.build.isoImage" --show-trace
if ls "${REPO_ROOT}/result/iso/"*.iso 1> /dev/null 2>&1; then
  iso_file=$(ls "${REPO_ROOT}/result/iso/"*.iso)
  echo "Build complete!"
  echo "ISO location: $iso_file"
  echo "Ready for Ventoy or dd to USB"
else
  echo "Build failed - no ISO file found"
  exit 1
fi