Compare commits


1 Commit

05ad04764e fix(mu4e): Configure msmtp to preserve email body content
The mu4e msmtp configuration was causing email bodies to be stripped,
especially for multipart messages from org-msg. This was due to missing
critical msmtp settings.

Changes:
- Add message-sendmail-f-is-evil to prevent -f flag issues
- Add --read-envelope-from to msmtp arguments
- Set both send-mail-function and message-send-mail-function

Fixes: nixos-configs-9l8
2026-01-10 10:39:29 -08:00
45 changed files with 330 additions and 2735 deletions

0
.beads/sync_base.jsonl Normal file
@@ -1,130 +0,0 @@
---
description: Import open Gitea issues as beads, skipping already-imported ones
---
# Import Gitea Issues as Beads
This skill imports open Gitea issues as beads, checking for duplicates to avoid re-importing already tracked issues.
## Prerequisites
- `tea` CLI must be installed and configured for the repository
- `bd` (beads) CLI must be installed
- Must be in a git repository with a Gitea/Forgejo remote
## Workflow
### Step 1: Get open Gitea issues
List all open issues using `tea`:
```bash
tea issues
```
This returns a table with columns: INDEX, TITLE, LABELS, MILESTONE
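That table can be parsed mechanically; a minimal sketch, assuming this layout (the sample rows below are hypothetical — real `tea` output may differ):

```bash
# Hypothetical `tea issues` output; the INDEX column is what we need downstream
output='INDEX TITLE                    LABELS MILESTONE
5     Add dark mode
3     Config broken on reboot'

# Drop the header row and keep the first column (the issue number)
numbers=$(printf '%s\n' "$output" | tail -n +2 | awk '{print $1}')
printf '%s\n' "$numbers"
```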
### Step 2: Get existing beads
List all current beads to check what's already imported:
```bash
bd list
```
Also check bead notes for issue URLs to identify imports:
```bash
bd list --json | jq -r '.[] | select(.notes != null) | .notes' | grep -oP 'issues/\K\d+'
```
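To see what that pipeline yields, here is the same `jq`/`grep` extraction run against a hypothetical two-bead sample (the JSON shape and URL are assumptions — check your own `bd list --json` output):

```bash
# Hypothetical `bd list --json` payload: one bead with an issue URL in its notes, one without
sample='[{"id":"nixos-configs-abc","notes":"Gitea issue: https://git.example.com/o/r/issues/5"},
         {"id":"nixos-configs-def","notes":null}]'

# select(.notes != null) skips beads without notes; \K keeps only the digits after "issues/"
imported=$(printf '%s' "$sample" | jq -r '.[] | select(.notes != null) | .notes' | grep -oP 'issues/\K\d+')
echo "$imported"
```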
### Step 3: Check for already-linked PRs
Check if any open PRs reference beads (skip these issues as they're being worked on):
```bash
tea pr list
```
Look for PRs with:
- Bead ID in title: `[nixos-configs-xxx]`
- Bead reference in body: `Implements bead:` or `Bead ID:`
### Step 4: For each untracked issue, create a bead
For each issue not already tracked:
1. **Get full issue details**:
```bash
tea issue [ISSUE_NUMBER]
```
2. **Determine bead type** based on issue content:
- "bug" - if issue mentions bug, error, broken, fix, crash
- "feature" - if issue mentions feature, add, new, enhancement
- "task" - default for other issues
3. **Create the bead**:
```bash
bd add "[ISSUE_TITLE]" \
--type=[TYPE] \
--priority=P2 \
--notes="Gitea issue: [ISSUE_URL]
Original issue description:
[ISSUE_BODY]"
```
Note: The `--notes` flag accepts multi-line content.
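As a sanity check on that multi-line payload, this sketch just assembles the `--notes` string for a hypothetical issue (the URL and body are made up; the actual `bd add` invocation is shown above):

```bash
issue_url="https://git.example.com/owner/repo/issues/7"   # hypothetical
issue_body="It would be nice to support a dark theme."    # hypothetical

# The payload embeds literal newlines between the URL line and the description
notes="Gitea issue: ${issue_url}
Original issue description:
${issue_body}"
printf '%s\n' "$notes"
```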
### Step 5: Report results
Present a summary:
```
## Gitea Issues Import Summary
### Imported as Beads
| Issue | Title | Bead ID | Type |
|-------|-------|---------|------|
| #5 | Add dark mode | nixos-configs-abc | feature |
| #3 | Config broken on reboot | nixos-configs-def | bug |
### Skipped (Already Tracked)
| Issue | Title | Reason |
|-------|-------|--------|
| #4 | Update flake | Existing bead: nixos-configs-xyz |
| #2 | Refactor roles | PR #7 references bead |
### Skipped (Other)
| Issue | Title | Reason |
|-------|-------|--------|
| #1 | Discussion: future plans | No actionable work |
```
## Type Detection Heuristics
Keywords to detect issue type:
**Bug indicators** (case-insensitive):
- bug, error, broken, fix, crash, fail, issue, problem, wrong, not working
**Feature indicators** (case-insensitive):
- feature, add, new, enhancement, implement, support, request, want, would be nice
**Task** (default):
- Anything not matching bug or feature patterns
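These heuristics are easy to mechanize; one possible sketch (the function name and exact regexes are my own, not part of the skill — note that bug keywords deliberately win over feature keywords):

```bash
# Classify issue text into a bead type using the keyword lists above
classify_issue() {
  local text
  text=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  if printf '%s' "$text" | grep -qE 'bug|error|broken|fix|crash|fail|issue|problem|wrong|not working'; then
    echo "bug"
  elif printf '%s' "$text" | grep -qE 'feature|add|new|enhancement|implement|support|request|want|would be nice'; then
    echo "feature"
  else
    echo "task"
  fi
}

classify_issue "Config broken on reboot"   # bug
classify_issue "Add dark mode"             # feature
classify_issue "Update flake inputs"       # task
```

Because the match is substring-based, titles like "prefix cleanup" would also trip the "fix" rule — these are heuristics, so adjust the type manually after import when the guess is wrong.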
## Error Handling
- **tea not configured**: Report error and exit
- **bd not available**: Report error and exit
- **Issue already has bead**: Skip and report in summary
- **Issue is a PR**: Skip (tea shows PRs and issues separately)
## Notes
- Default priority is P2; adjust manually after import if needed
- Issue labels from Gitea are not automatically mapped to bead tags
- Run this periodically to catch new issues
- After import, use `bd ready` to see which beads can be worked on

1
.gitignore vendored

@@ -1,3 +1,2 @@
 result
 thoughts
-.beads


@@ -6,10 +6,6 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 This is a NixOS configuration repository using flakes, managing multiple machines and home-manager configurations. The repository follows a modular architecture with reusable "roles" that can be composed for different machines.
-## Issue Tracking
-This repository uses `beads` for issue tracking and management. Run `bd quickstart` to get an overview of the system at the start of every session.
 ## Architecture
 ### Flake Structure

4
scripts/bootstrap.sh → bootstrap.sh Normal file → Executable file

@@ -1,7 +1,6 @@
 #!/usr/bin/env bash
 # bootstrap.sh
-# Usage: nix run .#bootstrap -- <hostname>
-# Or: sudo ./scripts/bootstrap.sh <hostname>
+# Usage: sudo ./bootstrap.sh <hostname>
 set -euo pipefail
 NEW_HOSTNAME="${1:?missing hostname}"
@@ -9,3 +8,4 @@ FLAKE_URI="git+https://git.johnogle.info/johno/nixos-configs.git#${NEW_HOSTNAME}
 export NIX_CONFIG="experimental-features = nix-command flakes"
 nixos-rebuild switch --flake "$FLAKE_URI"

19
build-liveusb.sh Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
# Build Live USB ISO from flake configuration
# Creates an uncompressed ISO suitable for Ventoy and other USB boot tools
set -e
echo "Building Live USB ISO..."
nix build .#nixosConfigurations.live-usb.config.system.build.isoImage --show-trace
# Note: [ -f glob ] breaks if the glob matches more than one file; compgen handles any count
if compgen -G "./result/iso/*.iso" > /dev/null; then
iso_file=$(ls ./result/iso/*.iso | head -n 1)
echo "✅ Build complete!"
echo "📁 ISO location: $iso_file"
echo "💾 Ready for Ventoy or dd to USB"
else
echo "❌ Build failed - no ISO file found"
exit 1
fi

76
flake.lock generated

@@ -21,45 +21,6 @@
       "type": "github"
     }
   },
-  "doomemacs": {
-    "flake": false,
-    "locked": {
-      "lastModified": 1767773143,
-      "narHash": "sha256-QL/t9v2kFNxBDyNJb/s411o3mxujan+QX5IZglTdpTk=",
-      "owner": "doomemacs",
-      "repo": "doomemacs",
-      "rev": "3e15fb36d7f94f0a218bda977be4d3f5da983a71",
-      "type": "github"
-    },
-    "original": {
-      "owner": "doomemacs",
-      "repo": "doomemacs",
-      "type": "github"
-    }
-  },
-  "emacs-overlay": {
-    "inputs": {
-      "nixpkgs": [
-        "nix-doom-emacs-unstraightened"
-      ],
-      "nixpkgs-stable": [
-        "nix-doom-emacs-unstraightened"
-      ]
-    },
-    "locked": {
-      "lastModified": 1768011937,
-      "narHash": "sha256-SnU2XTo34vwVaijs+4VwcXTNwMWO4nwzzs08N39UagA=",
-      "owner": "nix-community",
-      "repo": "emacs-overlay",
-      "rev": "79abf71d9897cf3b5189f7175cda1b1102abc65c",
-      "type": "github"
-    },
-    "original": {
-      "owner": "nix-community",
-      "repo": "emacs-overlay",
-      "type": "github"
-    }
-  },
   "flake-compat": {
     "flake": false,
     "locked": {
@@ -198,27 +159,6 @@
       "type": "github"
     }
   },
-  "nix-doom-emacs-unstraightened": {
-    "inputs": {
-      "doomemacs": "doomemacs",
-      "emacs-overlay": "emacs-overlay",
-      "nixpkgs": [],
-      "systems": "systems_2"
-    },
-    "locked": {
-      "lastModified": 1768034604,
-      "narHash": "sha256-62pIZMvGHhYJmMiiBsxHqZt/dFyENPcFHlJq5NJF3Sw=",
-      "owner": "marienz",
-      "repo": "nix-doom-emacs-unstraightened",
-      "rev": "9b3b8044fe4ccdcbb2d6f733d7dbe4d5feea18bc",
-      "type": "github"
-    },
-    "original": {
-      "owner": "marienz",
-      "repo": "nix-doom-emacs-unstraightened",
-      "type": "github"
-    }
-  },
   "nix-github-actions": {
     "inputs": {
       "nixpkgs": [
@@ -363,7 +303,6 @@
       "home-manager-unstable": "home-manager-unstable",
       "jovian": "jovian",
       "nix-darwin": "nix-darwin",
-      "nix-doom-emacs-unstraightened": "nix-doom-emacs-unstraightened",
       "nixos-wsl": "nixos-wsl",
       "nixpkgs": "nixpkgs_2",
       "nixpkgs-unstable": "nixpkgs-unstable",
@@ -385,21 +324,6 @@
       "repo": "default",
       "type": "github"
     }
-  },
-  "systems_2": {
-    "locked": {
-      "lastModified": 1681028828,
-      "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
-      "owner": "nix-systems",
-      "repo": "default",
-      "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
-      "type": "github"
-    },
-    "original": {
-      "owner": "nix-systems",
-      "repo": "default",
-      "type": "github"
-    }
   }
 },
 "root": "root",

138
flake.nix

@@ -47,84 +47,92 @@
       url = "github:steveyegge/beads";
       inputs.nixpkgs.follows = "nixpkgs-unstable";
     };
-    nix-doom-emacs-unstraightened = {
-      url = "github:marienz/nix-doom-emacs-unstraightened";
-      # Don't follow nixpkgs to avoid rebuild issues with emacs-overlay
-      inputs.nixpkgs.follows = "";
-    };
   };
   outputs = { self, nixpkgs, nixpkgs-unstable, nixos-wsl, ... } @ inputs: let
-    # Shared overlay function to reduce duplication across module sets
-    # Parameters:
-    #   unstableOverlays: Additional overlays to apply when importing nixpkgs-unstable
-    mkBaseOverlay = { unstableOverlays ? [] }: (final: prev: {
-      unstable = import nixpkgs-unstable {
-        system = prev.stdenv.hostPlatform.system;
-        config.allowUnfree = true;
-        overlays = unstableOverlays;
-      };
-      custom = prev.callPackage ./packages {};
-      # Compatibility: bitwarden renamed to bitwarden-desktop in unstable
-      bitwarden-desktop = prev.bitwarden-desktop or prev.bitwarden;
-    });
-    # Shared home-manager configuration factory
-    # Parameters:
-    #   sharedModules: Additional modules to include in home-manager.sharedModules
-    mkHomeManagerConfig = { sharedModules ? [] }: {
-      home-manager.useGlobalPkgs = true;
-      home-manager.useUserPackages = true;
-      home-manager.sharedModules = sharedModules ++ [
-        inputs.nix-doom-emacs-unstraightened.homeModule
-      ];
-      home-manager.extraSpecialArgs = {
-        globalInputs = inputs;
-      };
-    };
     nixosModules = [
       ./roles
-    ] ++ [
       inputs.home-manager.nixosModules.home-manager
       {
-        nixpkgs.overlays = [ (mkBaseOverlay {}) ];
+        nixpkgs.overlays = [
+          (final: prev: {
+            unstable = import nixpkgs-unstable {
+              system = prev.stdenv.hostPlatform.system;
+              config.allowUnfree = true;
+            };
+            custom = prev.callPackage ./packages {};
+            # Compatibility: bitwarden renamed to bitwarden-desktop in unstable
+            bitwarden-desktop = prev.bitwarden-desktop or prev.bitwarden;
+          })
+        ];
+        home-manager.useGlobalPkgs = true;
+        home-manager.useUserPackages = true;
+        home-manager.sharedModules = [
+          inputs.plasma-manager.homeModules.plasma-manager
+        ];
+        home-manager.extraSpecialArgs = {
+          globalInputs = inputs;
+        };
       }
-      (mkHomeManagerConfig {
-        sharedModules = [ inputs.plasma-manager.homeModules.plasma-manager ];
-      })
     ];
     # Modules for unstable-based systems (like nix-deck)
     nixosModulesUnstable = [
       ./roles
-    ] ++ [
       inputs.home-manager-unstable.nixosModules.home-manager
       inputs.jovian.nixosModules.jovian
       {
-        nixpkgs.overlays = [ (mkBaseOverlay {}) ];
+        nixpkgs.overlays = [
+          (final: prev: {
+            unstable = import nixpkgs-unstable {
+              system = prev.stdenv.hostPlatform.system;
+              config.allowUnfree = true;
+            };
+            custom = prev.callPackage ./packages {};
+            # Compatibility: bitwarden renamed to bitwarden-desktop in unstable
+            bitwarden-desktop = prev.bitwarden-desktop or prev.bitwarden;
+          })
+        ];
+        home-manager.useGlobalPkgs = true;
+        home-manager.useUserPackages = true;
+        home-manager.sharedModules = [
+          inputs.plasma-manager-unstable.homeModules.plasma-manager
+        ];
+        home-manager.extraSpecialArgs = {
+          globalInputs = inputs;
+        };
       }
-      (mkHomeManagerConfig {
-        sharedModules = [ inputs.plasma-manager-unstable.homeModules.plasma-manager ];
-      })
     ];
     darwinModules = [
       ./roles/darwin.nix
-    ] ++ [
       inputs.home-manager.darwinModules.home-manager
       {
         nixpkgs.overlays = [
-          (mkBaseOverlay {
-            # Override claude-code in unstable to use our custom GCS-based build
-            # (needed for corporate networks that block npm registry)
-            unstableOverlays = [
-              (ufinal: uprev: {
-                claude-code = uprev.callPackage ./packages/claude-code {};
-              })
-            ];
-          })
+          (final: prev: {
+            unstable = import nixpkgs-unstable {
+              system = prev.stdenv.hostPlatform.system;
+              config.allowUnfree = true;
+              overlays = [
+                # Override claude-code in unstable to use our custom GCS-based build
+                # (needed for corporate networks that block npm registry)
+                (ufinal: uprev: {
+                  claude-code = prev.custom.claude-code or (prev.callPackage ./packages {}).claude-code;
+                })
+              ];
+            };
+            custom = prev.callPackage ./packages {};
+            # Compatibility: bitwarden renamed to bitwarden-desktop in unstable
+            bitwarden-desktop = prev.bitwarden-desktop or prev.bitwarden;
+          })
         ];
+        home-manager.useGlobalPkgs = true;
+        home-manager.useUserPackages = true;
+        home-manager.extraSpecialArgs = {
+          globalInputs = inputs;
+        };
       }
-      (mkHomeManagerConfig { sharedModules = []; })
     ];
   in {
@@ -211,11 +219,7 @@
       system = "x86_64-linux";
       modules = nixosModules ++ [
         ./machines/john-endesktop/configuration.nix
-        inputs.home-manager.nixosModules.home-manager
-        {
-          home-manager.users.johno = import ./home/home-server.nix;
-          home-manager.extraSpecialArgs = { inherit system; };
-        }
+        # Minimal server - no home-manager needed
       ];
     };
@@ -256,16 +260,6 @@
         export PATH="${pkgs.lib.makeBinPath commonDeps}:$PATH"
         ${builtins.readFile ./scripts/upgrade.sh}
       '';
-      bootstrap = pkgs.writeShellScriptBin "bootstrap" ''
-        export PATH="${pkgs.lib.makeBinPath commonDeps}:$PATH"
-        ${builtins.readFile ./scripts/bootstrap.sh}
-      '';
-      build-liveusb = pkgs.writeShellScriptBin "build-liveusb" ''
-        export PATH="${pkgs.lib.makeBinPath commonDeps}:$PATH"
-        ${builtins.readFile ./scripts/build-liveusb.sh}
-      '';
     in {
       update-doomemacs = {
         type = "app";
@@ -283,14 +277,6 @@
         type = "app";
         program = "${upgrade}/bin/upgrade";
       };
-      bootstrap = {
-        type = "app";
-        program = "${bootstrap}/bin/bootstrap";
-      };
-      build-liveusb = {
-        type = "app";
-        program = "${build-liveusb}/bin/build-liveusb";
-      };
     }
   );
 };


@@ -23,7 +23,6 @@
     kubectl.enable = true;
     tmux.enable = true;
     plasma-manager.enable = true;
-    starship.enable = true;
   };
   targets.genericLinux.enable = true;


@@ -23,7 +23,6 @@
     plasma-manager.enable = true;
     emacs.enable = true;
     i3_sway.enable = true;
-    starship.enable = true;
     # Launcher wrappers for excluded/optional packages
     launchers = {


@@ -14,14 +14,8 @@
     desktop.enable = true;
     tmux.enable = true;
     plasma-manager.enable = true;
-    emacs = {
-      enable = true;
-      # Use pre-built Doom Emacs - all packages built at nix build time
-      # This means no doom sync is needed after booting the live USB
-      prebuiltDoom = true;
-    };
+    emacs.enable = true;
     i3_sway.enable = true;
-    starship.enable = true;
     # development.enable = false; # Not needed for live USB
     # communication.enable = false; # Not needed for live USB
     # office.enable = false; # Not needed for live USB


@@ -20,7 +20,6 @@
     plasma-manager.enable = true;
     emacs.enable = true;
     i3_sway.enable = true;
-    starship.enable = true;
     # office.enable = false; # Not needed for media center
     # sync.enable = false; # Shared machine, no personal file sync
   };


@@ -1,27 +0,0 @@
{ pkgs, globalInputs, system, ... }:
{
# Home Manager configuration for servers (minimal with development tools)
home.username = "johno";
home.homeDirectory = "/home/johno";
home.stateVersion = "24.05";
# Minimal roles for server with development capability
home.roles = {
base.enable = true;
development.enable = true;
emacs.enable = true;
kubectl.enable = true;
starship.enable = true;
tmux.enable = true;
};
targets.genericLinux.enable = true;
home.sessionVariables = {};
home.sessionPath = [];
imports = [
./roles
./roles/base-linux
];
}


@@ -3,7 +3,6 @@
   # Includes Linux-specific roles that require Linux-only home-manager modules
   imports = [
     ../plasma-manager
-    ../plasma-manager-kodi
     ../i3+sway
   ];
 }


@@ -15,9 +15,9 @@
     ./launchers
     ./media
     ./office
-    ./plasma-manager-kodi
     ./sync
     ./tmux
     ./emacs
-    ./starship
   ];
 }


@@ -44,6 +44,7 @@ in
     # Custom packages
     pkgs.custom.tea-rbw
+    pkgs.custom.perles
   ];
   # Install Claude Code humanlayer command and agent plugins
@@ -84,25 +85,11 @@ in
       fi
     done
-    # Copy local skills from this repo (with retry for race conditions with running Claude)
-    for file in ${./skills}/*.md; do
-      if [ -f "$file" ]; then
-        filename=$(basename "$file" .md)
-        dest="$HOME/.claude/commands/''$(unknown).md"
-        # Remove existing file first, then copy with retry on failure
-        rm -f "$dest" 2>/dev/null || true
-        if ! cp "$file" "$dest" 2>/dev/null; then
-          sleep 0.5
-          cp "$file" "$dest" || echo "Warning: Failed to copy $filename.md to commands"
-        fi
-      fi
-    done
     $DRY_RUN_CMD echo "Claude Code humanlayer commands and agents installed successfully${
       if cfg.allowArbitraryClaudeCodeModelSelection
       then " (model specifications preserved)"
       else " (model selection removed)"
-    } + local skills"
+    }"
   '';
   # Set up beads Claude Code integration (hooks for SessionStart/PreCompact)


@@ -1,247 +0,0 @@
---
description: Implement a plan from thoughts/ for a bead issue
---
# Beads Implement
You are tasked with implementing an approved plan for a bead issue. Plans are stored in `thoughts/beads-{id}/plan.md`.
## Initial Setup
When this command is invoked:
1. **Parse the input for bead ID**:
- If a bead ID is provided, use it
- If no bead ID, check for beads with plans:
```bash
bd list --status=in_progress
```
Then check which have plans in `thoughts/beads-{id}/plan.md`
2. **Load bead context**:
```bash
bd show {bead-id}
```
Note the bead **type** (bug, feature, task) from the output.
3. **Check for plan and handle by type**:
Check if plan exists:
```bash
ls thoughts/beads-{bead-id}/plan.md 2>/dev/null
```
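A minimal sketch of that existence check (the bead ID and path are hypothetical):

```bash
bead_id="nixos-configs-abc123"   # hypothetical
plan="thoughts/beads-${bead_id}/plan.md"

if [ -f "$plan" ]; then
  plan_status="plan exists"
else
  plan_status="no plan"   # fall through to the type-specific handling below
fi
echo "$plan_status"
```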
**If plan exists**: Proceed normally (skip to step 4)
**If no plan**:
- **type=bug**: Proceed without plan (simple bugs can implement directly)
- **type=feature or type=task**: Warn and ask:
```
No plan found for this {type}.
Plans help ensure complex work is well-designed and verifiable.
Location expected: thoughts/beads-{bead-id}/plan.md
Options:
1. Create a plan first (recommended) - Run /beads_plan {bead-id}
2. Proceed without a plan (for simple changes)
How would you like to proceed?
```
Wait for user response before continuing.
4. **Load plan and research context** (if plan exists):
- Read `thoughts/beads-{bead-id}/plan.md` FULLY
- Check for any existing checkmarks (- [x]) indicating partial progress
- Read any research at `thoughts/beads-{bead-id}/research.md`
5. **Mark bead in progress** (if not already):
```bash
bd update {bead-id} --status=in_progress
```
6. **Respond with**:
```
Implementing plan for bead {bead-id}: {bead-title}
Plan location: thoughts/beads-{bead-id}/plan.md
{If partial progress: "Resuming from Phase X - previous phases completed."}
I'll implement each phase and verify success criteria before proceeding.
```
## Implementation Process
### Step 1: Understand the Plan
1. **Read the plan completely**
2. **Check for existing progress** (checkmarked items)
3. **Read all files mentioned in the plan**
4. **Create a TodoWrite list** tracking each phase
### Step 2: Implement Each Phase
For each phase in the plan:
1. **Announce the phase**:
```
## Starting Phase {N}: {Phase Name}
This phase will: {overview from plan}
```
2. **Make the changes**:
- Follow the plan's specific instructions
- Use Edit tool for modifications
- Create new files only when specified
3. **Run automated verification**:
- Execute each command in "Automated Verification"
- Fix any issues before proceeding
4. **Update plan checkboxes**:
- Use Edit tool to check off completed items in the plan
- This enables resume if session is interrupted
5. **Update bead notes** with progress:
```bash
bd update {bead-id} --notes="Phase {N} complete. Automated verification passed."
```
### Step 3: Per-Plan Checkpoint
**CRITICAL**: After completing ALL phases and ALL automated verification:
```
## Implementation Complete - Ready for Manual Verification
All phases completed and automated verification passed:
- [x] Phase 1: {name} - DONE
- [x] Phase 2: {name} - DONE
- [x] ...
**Automated checks passed:**
- {List of automated checks that passed}
**Please perform manual verification:**
- {List manual verification items from plan}
Let me know when manual testing is complete so I can close the bead.
```
**STOP HERE and wait for user confirmation.**
Do NOT:
- Close the bead automatically
- Proceed to "next steps" without confirmation
- Start additional work
### Step 4: After Manual Verification
When user confirms manual verification passed:
1. **Update plan status**:
- Edit the plan's frontmatter: `status: complete`
2. **Close the bead**:
```bash
bd close {bead-id} --reason="Implementation complete. All verification passed."
```
3. **Final summary**:
```
Bead {bead-id} closed.
Summary:
- {What was implemented}
- {Key changes made}
Artifacts:
- Plan: thoughts/beads-{bead-id}/plan.md
- {Any other artifacts created}
```
## Handling Issues
### When something doesn't match the plan:
```
Issue in Phase {N}:
Expected: {what the plan says}
Found: {actual situation}
Why this matters: {explanation}
Options:
1. Adapt the implementation to work with current state
2. Update the plan to reflect reality
3. Stop and investigate further
How should I proceed?
```
### When tests fail:
1. **Analyze the failure**
2. **Attempt to fix** if the fix is clear and within scope
3. **If fix is unclear**, report:
```
Test failure in Phase {N}:
Failing test: {test name}
Error: {error message}
I've attempted: {what you tried}
This may require: {your assessment}
```
### When blocked:
```
Blocked in Phase {N}:
Blocker: {description}
Impact: {what can't proceed}
Suggested resolution: {your recommendation}
```
## Resuming Work
If the plan has existing checkmarks:
1. **Trust completed work** - don't re-verify unless something seems off
2. **Pick up from first unchecked item**
3. **Verify previous work only if** current phase depends on it and seems broken
## Important Guidelines
1. **Follow the plan's intent** while adapting to reality
2. **Implement each phase fully** before moving to next
3. **Update checkboxes in real-time** as you complete items
4. **One checkpoint per plan** - not per phase
5. **Never close bead** without manual verification confirmation
6. **Keep bead notes updated** with progress
## Session Close Protocol
If you need to end the session before completion:
1. **Update plan** with current progress (checkboxes)
2. **Update bead notes**:
```bash
bd update {bead-id} --notes="In progress: Phase {N} partially complete. Next: {what's next}"
```
3. **Inform user** of status and how to resume
## Example Invocation
```
User: /beads:implement nixos-configs-abc123
Assistant: Implementing plan for bead nixos-configs-abc123...
## Starting Phase 1: Database Schema
This phase will add the new user_preferences table...
```


@@ -1,214 +0,0 @@
---
description: Iterate on existing implementation plans for a bead issue
model: opus
---
# Beads Iterate
You are tasked with updating existing implementation plans based on feedback. Plans are stored in `thoughts/beads-{id}/plan.md`.
## Initial Setup
When this command is invoked:
1. **Parse the input**:
- Bead ID (required or ask for it)
- Requested changes/feedback (can be provided with command or after)
2. **Handle different scenarios**:
**No bead ID provided**:
```
Which bead's plan would you like to iterate on?
Recent beads with plans:
{list beads that have thoughts/beads-{id}/plan.md}
```
**Bead ID but no feedback**:
```
I've found the plan at thoughts/beads-{bead-id}/plan.md
What changes would you like to make? For example:
- "Add a phase for migration handling"
- "Update success criteria to include performance tests"
- "Adjust scope to exclude feature X"
- "Split Phase 2 into two separate phases"
```
**Both bead ID and feedback provided**:
- Proceed immediately to Step 1
## Iteration Process
### Step 1: Understand Current Plan
1. **Read the existing plan COMPLETELY**:
```bash
cat thoughts/beads-{bead-id}/plan.md
```
- Understand current structure, phases, scope
- Note success criteria and approach
2. **Read the bead for context**:
```bash
bd show {bead-id}
```
3. **Understand requested changes**:
- Parse what user wants to add/modify/remove
- Identify if changes require codebase research
### Step 2: Research If Needed
**Only if changes require new technical understanding:**
1. **Spawn parallel research tasks**:
- **codebase-locator**: Find relevant files
- **codebase-analyzer**: Understand implementation details
- **codebase-pattern-finder**: Find similar patterns
2. **Be specific about directories** in prompts
3. **Wait for ALL tasks** before proceeding
### Step 3: Present Understanding
Before making changes:
```
Based on your feedback, I understand you want to:
- {Change 1 with specific detail}
- {Change 2 with specific detail}
{If research was needed:}
My research found:
- {Relevant discovery}
- {Important constraint}
I plan to update the plan by:
1. {Specific modification}
2. {Another modification}
Does this align with your intent?
```
Get user confirmation before proceeding.
### Step 4: Update the Plan
1. **Make focused, precise edits**:
- Use Edit tool for surgical changes
- Maintain existing structure unless explicitly changing it
- Keep file:line references accurate
2. **Ensure consistency**:
- New phases follow existing pattern
- Update "What We're NOT Doing" if scope changes
- Maintain automated vs manual success criteria distinction
3. **Update plan metadata**:
- Update frontmatter `date` to current timestamp
- Add `iteration: {N}` to frontmatter
- Add `iteration_reason: "{brief description}"` to frontmatter
4. **Preserve completed work**:
- Don't uncheck items that were already completed
- If changing completed phases, discuss with user first
### Step 5: Save Iteration History (Optional)
For significant changes, save the previous version:
```bash
cp thoughts/beads-{bead-id}/plan.md thoughts/beads-{bead-id}/plan-v{N}.md
```
Then update the main plan.
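Picking the next free version number can be sketched like this (the loop and the bead ID are my own additions, not part of the skill):

```bash
bead_id="nixos-configs-abc123"   # hypothetical
dir="thoughts/beads-${bead_id}"

# Find the first unused plan-v{N}.md slot so earlier iterations are never overwritten
n=1
while [ -f "${dir}/plan-v${n}.md" ]; do
  n=$((n + 1))
done
echo "would save previous plan as ${dir}/plan-v${n}.md"
```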
### Step 6: Update Bead
```bash
bd update {bead-id} --notes="Plan iterated: {brief description of changes}"
```
### Step 7: Present Changes
```
I've updated the plan at `thoughts/beads-{bead-id}/plan.md`
Changes made:
- {Specific change 1}
- {Specific change 2}
The updated plan now:
- {Key improvement}
- {Another improvement}
Would you like any further adjustments?
```
## Important Guidelines
1. **Be Skeptical**:
- Don't blindly accept changes that seem problematic
- Question vague feedback - ask for clarification
- Point out conflicts with existing phases
2. **Be Surgical**:
- Make precise edits, not wholesale rewrites
- Preserve good content that doesn't need changing
- Only research what's necessary
3. **Be Thorough**:
- Read entire plan before making changes
- Ensure updated sections maintain quality
- Verify success criteria are still measurable
4. **Be Interactive**:
- Confirm understanding before making changes
- Allow course corrections
- Don't disappear into research without communicating
5. **No Open Questions**:
- If changes raise questions, ASK
- Don't update plan with unresolved questions
## Success Criteria Guidelines
When updating success criteria, maintain two categories:
**Automated Verification**:
- Commands: `make test`, `npm run lint`
- Prefer `make` commands when available
- File existence checks
**Manual Verification**:
- UI/UX functionality
- Performance under real conditions
- Edge cases hard to automate
## Handling Major Changes
If feedback requires significant restructuring:
1. **Discuss scope** before proceeding
2. **Consider if this should be a new plan** instead of iteration
3. **Preserve the original** in `plan-v{N}.md`
4. **Update bead description** if scope changed significantly
## Example Invocations
**With full context**:
```
User: /beads:iterate nixos-configs-abc123 - add error handling phase
Assistant: Based on your feedback, I understand you want to add a new phase for error handling...
```
**Interactive**:
```
User: /beads:iterate nixos-configs-abc123
Assistant: I've found the plan. What changes would you like to make?
User: Split Phase 2 into backend and frontend phases
Assistant: I'll split Phase 2 into two separate phases...
```


@@ -1,281 +0,0 @@
---
description: Create detailed implementation plans for a bead issue
model: opus
---
# Beads Plan
You are tasked with creating detailed implementation plans for a bead issue. This skill integrates with the beads issue tracker and stores plans in the `thoughts/` directory.
## Initial Setup
When this command is invoked:
1. **Parse the input for bead ID**:
- If a bead ID is provided, use it
- If no bead ID, run `bd ready` and ask which bead to plan for
2. **Load bead context**:
```bash
bd show {bead-id}
```
- Read the bead description for requirements
- Check for existing research: `thoughts/beads-{bead-id}/research.md`
- Note any dependencies or blockers
3. **Create artifact directory**:
```bash
mkdir -p thoughts/beads-{bead-id}
```
4. **Check for existing research**:
- If `thoughts/beads-{bead-id}/research.md` exists, read it fully
- This research provides crucial context for planning
5. **Respond with**:
```
Creating implementation plan for bead {bead-id}: {bead-title}
{If research exists: "Found existing research at thoughts/beads-{bead-id}/research.md - incorporating findings."}
Let me analyze the requirements and codebase to create a detailed plan.
```
## Planning Process
### Step 1: Context Gathering
1. **Read all mentioned files FULLY**:
- Bead description references
- Existing research document
- Any linked tickets or docs
- Use Read tool WITHOUT limit/offset
2. **Spawn initial research tasks**:
- **codebase-locator**: Find all files related to the task
- **codebase-analyzer**: Understand current implementation
- **codebase-pattern-finder**: Find similar features to model after
- **thoughts-locator**: Find any existing plans or decisions
3. **Read all files identified by research**:
- Read them FULLY into main context
- Cross-reference with requirements
### Step 2: Present Understanding
Before writing the plan, confirm understanding:
```
Based on the bead and my research, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question requiring human judgment]
- [Business logic clarification]
```
Only ask questions you genuinely cannot answer through code investigation.
### Step 3: Research & Discovery
After getting clarifications:
1. **If user corrects any misunderstanding**:
- Spawn new research tasks to verify
- Read specific files/directories mentioned
- Only proceed once verified
2. **Present design options**:
```
Based on my research:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
Which approach aligns best?
```
### Step 4: Plan Structure
Once aligned on approach:
```
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
Does this phasing make sense?
```
Get feedback on structure before writing details.
### Step 5: Write the Plan
Write to `thoughts/beads-{bead-id}/plan.md`:
```markdown
---
date: {ISO timestamp}
bead_id: {bead-id}
bead_title: "{bead title}"
author: claude
git_commit: {commit hash}
branch: {branch name}
repository: {repo name}
status: draft
---
# {Feature/Task Name} Implementation Plan
## Overview
{Brief description of what we're implementing and why}
## Current State Analysis
{What exists now, what's missing, key constraints}
### Key Discoveries:
- {Finding with file:line reference}
- {Pattern to follow}
## Desired End State
{Specification of desired end state and how to verify it}
## What We're NOT Doing
{Explicitly list out-of-scope items}
## Implementation Approach
{High-level strategy and reasoning}
## Phase 1: {Descriptive Name}
### Overview
{What this phase accomplishes}
### Changes Required:
#### 1. {Component/File Group}
**File**: `path/to/file.ext`
**Changes**: {Summary}
```{language}
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Tests pass: `make test`
- [ ] Linting passes: `make lint`
- [ ] Type checking passes: `make typecheck`
#### Manual Verification:
- [ ] Feature works as expected in UI
- [ ] Edge cases handled correctly
---
## Phase 2: {Descriptive Name}
{Similar structure...}
---
## Testing Strategy
### Unit Tests:
- {What to test}
- {Key edge cases}
### Integration Tests:
- {End-to-end scenarios}
### Manual Testing Steps:
1. {Specific step}
2. {Another step}
## References
- Bead: {bead-id}
- Research: `thoughts/beads-{bead-id}/research.md`
- Similar implementation: {file:line}
```
### Step 6: Update the bead
```bash
bd update {bead-id} --notes="Plan created: thoughts/beads-{bead-id}/plan.md"
```
### Step 7: Create implementation bead (if appropriate)
If the planning bead is separate from implementation:
```bash
bd create --title="Implement: {feature name}" --type=task --priority=1 \
--description="Implement the plan at thoughts/beads-{original-bead-id}/plan.md
See bead {original-bead-id} for planning context."
# Link as dependency
bd dep add {new-bead-id} {original-bead-id}
```
### Step 8: Present for Review
```
I've created the implementation plan at:
`thoughts/beads-{bead-id}/plan.md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
```
## Important Guidelines
1. **Be Skeptical**: Question vague requirements, identify potential issues early
2. **Be Interactive**: Don't write the full plan in one shot; get buy-in at each step
3. **Be Thorough**: Read all context files COMPLETELY, include specific file:line refs
4. **Be Practical**: Focus on incremental, testable changes
5. **No Open Questions**: If you have unresolved questions, STOP and ask
## Success Criteria Guidelines
Always separate into two categories:
**Automated Verification** (run by agents):
- Commands: `make test`, `npm run lint`, etc.
- File existence checks
- Type checking
**Manual Verification** (requires human):
- UI/UX functionality
- Performance under real conditions
- Edge cases hard to automate
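A minimal sketch of running a list of automated checks without stopping at the first failure; `true` and `false` below are placeholders standing in for real project commands like `make test`:

```bash
# Placeholder commands; substitute the project's real checks (e.g. `make test`)
overall=0
for cmd in "true" "false"; do
  if sh -c "$cmd" >/dev/null 2>&1; then
    echo "PASS: $cmd"
  else
    echo "FAIL: $cmd"
    overall=1
  fi
done
echo "overall exit: $overall"
```

Reporting every result before failing matches the validation-table style used elsewhere in these skills.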
## Example Invocation
```
User: /beads:plan nixos-configs-abc123
Assistant: Creating implementation plan for bead nixos-configs-abc123...
```

@@ -1,206 +0,0 @@
---
description: Research a bead topic comprehensively and store findings in thoughts/
model: opus
---
# Beads Research
You are tasked with conducting comprehensive research for a bead issue. This skill integrates with the beads issue tracker and stores findings in the `thoughts/` directory.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT perform root cause analysis unless the user explicitly asks for it
- DO NOT propose future enhancements unless the user explicitly asks for them
- DO NOT critique the implementation or identify problems
- ONLY describe what exists, where it exists, how it works, and how components interact
- You are creating a technical map/documentation of the existing system
## Initial Setup
When this command is invoked:
1. **Parse the input for bead ID**:
- If a bead ID is provided (e.g., `nixos-configs-abc123`), use it
- If no bead ID provided, run `bd ready --type=research` to find research beads, or ask which bead to research
2. **Load bead context**:
```bash
bd show {bead-id}
```
- Read the bead description to understand the research question
- Note any linked files or references in the bead
3. **Create artifact directory**:
```bash
mkdir -p thoughts/beads-{bead-id}
```
4. **Respond with**:
```
Starting research for bead {bead-id}: {bead-title}
Research question: {extracted from bead description}
I'll analyze this thoroughly and store findings in thoughts/beads-{bead-id}/research.md
```
## Research Process
### Step 1: Read any directly mentioned files
- If the bead or user mentions specific files, read them FULLY first
- Use the Read tool WITHOUT limit/offset parameters
- Read these files yourself in the main context before spawning sub-tasks
### Step 2: Analyze and decompose the research question
- Break down the query into composable research areas
- Identify specific components, patterns, or concepts to investigate
- Create a research plan using TodoWrite
- Consider which directories, files, or patterns are relevant
### Step 3: Spawn parallel sub-agent tasks
Use specialized agents for research:
**For codebase research:**
- **codebase-locator** - Find WHERE files and components live
- **codebase-analyzer** - Understand HOW specific code works
- **codebase-pattern-finder** - Find examples of existing patterns
**For thoughts directory:**
- **thoughts-locator** - Discover what documents exist about the topic
- **thoughts-analyzer** - Extract key insights from specific documents
**For web research (only if explicitly requested):**
- **web-search-researcher** - External documentation and resources
Key principles:
- Run multiple agents in parallel when searching for different things
- Each agent knows its job - tell it what you're looking for, not HOW to search
- Remind agents they are documenting, not evaluating
### Step 4: Synthesize findings
Wait for ALL sub-agents to complete, then:
- Compile all results (codebase and thoughts findings)
- Prioritize live codebase findings as primary source of truth
- Connect findings across different components
- Include specific file paths and line numbers
- Highlight patterns, connections, and architectural decisions
### Step 5: Gather metadata
```bash
# Git metadata
git rev-parse HEAD # Current commit
git branch --show-current # Current branch
basename "$(git rev-parse --show-toplevel)" # Repo name
date -Iseconds # Current timestamp
```
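These commands can be collected into variables for the frontmatter; a sketch that tolerates running outside a git repository (the `unknown` fallbacks are illustrative):

```bash
commit=$(git rev-parse HEAD 2>/dev/null || echo unknown)
branch=$(git branch --show-current 2>/dev/null)
branch=${branch:-unknown}   # detached HEAD or no repo yields an empty string
ts=$(date -Iseconds)
printf 'git_commit: %s\nbranch: %s\ndate: %s\n' "$commit" "$branch" "$ts"
```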
### Step 6: Write research document
Write to `thoughts/beads-{bead-id}/research.md`:
```markdown
---
date: {ISO timestamp with timezone}
bead_id: {bead-id}
bead_title: "{bead title}"
researcher: claude
git_commit: {commit hash}
branch: {branch name}
repository: {repo name}
status: complete
---
# Research: {bead title}
**Bead**: {bead-id}
**Date**: {timestamp}
**Git Commit**: {commit hash}
**Branch**: {branch name}
## Research Question
{Original question from bead description}
## Summary
{High-level documentation answering the research question}
## Detailed Findings
### {Component/Area 1}
- Description of what exists (file.ext:line)
- How it connects to other components
- Current implementation details
### {Component/Area 2}
...
## Code References
- `path/to/file.py:123` - Description
- `another/file.ts:45-67` - Description
## Architecture Documentation
{Current patterns, conventions found in codebase}
## Historical Context (from thoughts/)
{Relevant insights from thoughts/ with references}
## Open Questions
{Any areas needing further investigation}
```
### Step 7: Update the bead
```bash
# Add artifact link to bead notes
bd update {bead-id} --notes="Research complete: thoughts/beads-{bead-id}/research.md"
```
### Step 8: Autonomy decision
**For research beads (type=research):**
- If research is complete and comprehensive, close the bead:
```bash
bd close {bead-id} --reason="Research complete. See thoughts/beads-{bead-id}/research.md"
```
- Present summary to user
**For other bead types:**
- Do NOT close the bead
- Present findings and ask how to proceed
### Step 9: Handle follow-up questions
If the user has follow-up questions:
- Append to the same research document
- Add a new section: `## Follow-up Research [{timestamp}]`
- Update the bead notes with the new findings
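Appending rather than rewriting can be sketched as follows (the filename is illustrative):

```bash
doc="research-example.md"
printf '# Research: example topic\n' > "$doc"
# Append a timestamped follow-up section instead of rewriting the document
printf '\n## Follow-up Research [%s]\n\n{new findings}\n' "$(date -Iseconds)" >> "$doc"
followups=$(grep -c '^## Follow-up' "$doc")
echo "follow-up sections: $followups"
rm "$doc"
```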
## Important Guidelines
- Always use parallel Task agents to maximize efficiency
- Always run fresh codebase research - never rely solely on existing documents
- Focus on finding concrete file paths and line numbers
- Research documents should be self-contained
- Document cross-component connections
- Include temporal context (when research was conducted)
- Keep the main agent focused on synthesis, not deep file reading
- **CRITICAL**: You and all sub-agents are documentarians, not evaluators
- **REMEMBER**: Document what IS, not what SHOULD BE
## Example Invocation
```
User: /beads:research nixos-configs-abc123
Assistant: Starting research for bead nixos-configs-abc123: Investigate auth flow
...
```
Or without bead ID:
```
User: /beads:research
Assistant: Let me check for research beads...
[runs bd ready]
Which bead would you like me to research?
```

@@ -1,387 +0,0 @@
---
description: Comprehensive guide for the beads + humanlayer integrated workflow
---
# Beads Workflow Guide
This document describes the integrated workflow combining **beads** (issue tracking) with **humanlayer-style skills** (deep research, planning, implementation).
## Philosophy
### Two Systems, Complementary Purposes
| System | Purpose | Storage |
|--------|---------|---------|
| **Beads** | Track WHAT work exists | `.beads/` (git-synced) |
| **Thoughts** | Store HOW to do the work | `thoughts/` (local or symlinked) |
### Autonomy Model
| Bead Type | Agent Autonomy | Checkpoint |
|-----------|----------------|------------|
| `research` | **Full** - agent closes when satisfied | None |
| `feature`, `task`, `bug` | **Checkpointed** - pause for validation | Per-plan |
**Key insight**: Research produces artifacts. Implementation produces commits. Commits are the review boundary.
## Directory Structure
```
project/
├── .beads/ # Beads database (git-synced)
│ ├── beads.db
│ ├── config.yaml
│ └── issues.jsonl
├── thoughts/ # Artifacts (local or symlink)
│ ├── beads-{id}/ # Per-bead artifacts
│ │ ├── research.md
│ │ ├── plan.md
│ │ └── plan-v1.md # Iteration history
│ └── shared/ # Legacy/non-bead artifacts
│ ├── research/
│ └── plans/
└── home/roles/development/skills/ # Skill definitions
├── beads_research.md
├── beads_plan.md
├── beads_implement.md
└── beads_iterate.md
```
## When to Use What
### Use Beads When:
- Work spans multiple sessions
- Work has dependencies or blockers
- You need to track status across interruptions
- Multiple related tasks need coordination
- Context recovery after compaction matters
### Use TodoWrite When:
- Single-session execution tracking
- Breaking down work within a session
- Tracking progress on a single bead
### Use Both Together:
- Beads track the overall work items
- TodoWrite tracks progress within a session
- Example: Bead for "Implement auth", TodoWrite for each file being edited
## Workflow Patterns
### Pattern 1: Research-First Approach
```
1. Create research bead
bd create --title="Research auth patterns" --type=research --priority=1
2. Run research
/beads:research {bead-id}
→ Agent researches, writes to thoughts/beads-{id}/research.md
→ Agent closes bead when satisfied
3. Create implementation bead
bd create --title="Implement auth" --type=feature --priority=1
4. Plan the implementation
/beads:plan {bead-id}
→ Agent reads prior research, creates plan
→ Plan saved to thoughts/beads-{id}/plan.md
5. Implement
/beads:implement {bead-id}
→ Agent follows plan, pauses for manual verification
→ You validate, agent closes bead
```
### Pattern 2: Direct Implementation
For well-understood tasks without research:
```
1. Create bead
bd create --title="Fix login bug" --type=bug --priority=0
2. Plan and implement
/beads:plan {bead-id}
→ Quick planning based on bead description
/beads:implement {bead-id}
→ Follow plan, pause at checkpoint
```
### Pattern 3: Iterative Planning
When requirements evolve:
```
1. Initial plan
/beads:plan {bead-id}
2. Iterate based on feedback
/beads:iterate {bead-id} - add error handling phase
3. Iterate again if needed
/beads:iterate {bead-id} - split phase 2 into backend/frontend
4. Implement when plan is solid
/beads:implement {bead-id}
```
### Pattern 4: Parallel Work
Using parallel_beads skill for multiple independent tasks:
```
1. Check what's ready
bd ready
2. Select multiple beads
/parallel_beads
→ Select beads to work on
→ Each gets worktree, PR, review
3. Reconcile after PRs merge
/reconcile_beads
```
## Skills Reference
### /beads:research {bead-id}
- Conducts comprehensive codebase research
- Uses parallel sub-agents for efficiency
- Outputs to `thoughts/beads-{id}/research.md`
- **Autonomy**: Can close research beads automatically
### /beads:plan {bead-id}
- Creates detailed implementation plans
- Interactive process with checkpoints
- Outputs to `thoughts/beads-{id}/plan.md`
- Can create dependent implementation beads
### /beads:implement {bead-id}
- Follows plans from thoughts/
- Updates plan checkboxes for resumability
- **Checkpoint**: Pauses after plan completion for manual verification
- Only closes bead after human confirms
### /beads:iterate {bead-id}
- Updates existing plans based on feedback
- Preserves plan structure while making targeted changes
- Saves iteration history as `plan-v{N}.md`
### /parallel_beads
- Orchestrates parallel bead processing
- Creates worktrees, PRs, reviews for multiple beads
- Good for batching independent work
### /reconcile_beads
- Closes beads whose PRs have merged
- Run after merging PRs to keep beads in sync
## Session Protocols
### Starting a Session
```bash
# Check what's available
bd ready
# Pick work and start
bd update {bead-id} --status=in_progress
```
### Ending a Session
```bash
# Always run this checklist:
[ ] git status # Check changes
[ ] git add <files> # Stage code changes
[ ] bd sync # Sync beads
[ ] git commit -m "..." # Commit code
[ ] git push # Push to remote
```
### Resuming Work
```bash
# Find in-progress work
bd list --status=in_progress
# Check bead notes for context
bd show {bead-id}
# Check for partial plan progress
cat thoughts/beads-{id}/plan.md | grep "\[x\]"
```
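The checkbox grep generalizes to a quick progress summary; a sketch against an inline sample plan:

```bash
plan='- [x] Phase 1: setup
- [ ] Phase 2: implement
- [x] Phase 3: verify'
# Count checked items vs. all checklist items
done_count=$(printf '%s\n' "$plan" | grep -c '\[x\]')
total=$(printf '%s\n' "$plan" | grep -c '^- \[')
echo "$done_count of $total items complete"
```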
## Thoughts Directory Patterns
### For Work Repos (via symlink)
```
project/thoughts → ~/thoughts/repos/{repo-name}/
```
- Syncs via codelayer to work remote
- Shared across projects on same machine
### For Personal Repos (local)
```
project/thoughts/ # Regular directory, not symlink
```
- Stays local to project
- Committed with project or gitignored
### Determining Which Pattern
```bash
# Check if thoughts is a symlink
ls -la thoughts
# If symlink, it points to ~/thoughts/repos/{repo}/
# If directory, it's local to this project
```
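The same check as a small script; the `absent` branch covers a fresh clone where thoughts/ has not been created yet:

```bash
if [ -L thoughts ]; then
  kind="symlink -> $(readlink thoughts)"
elif [ -d thoughts ]; then
  kind="local"
else
  kind="absent"
fi
echo "thoughts/: $kind"
```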
## Best Practices
### 1. Bead Descriptions Matter
Write clear descriptions - they're the input for research and planning:
```bash
bd create --title="Implement user preferences" --type=feature \
--description="Add user preferences storage and UI.
Requirements:
- Store preferences in SQLite
- Expose via REST API
- Add settings page in UI
See related: thoughts/shared/research/preferences-patterns.md"
```
### 2. Link Artifacts in Beads
Always update bead notes with artifact locations:
```bash
bd update {id} --notes="Research: thoughts/beads-{id}/research.md
Plan: thoughts/beads-{id}/plan.md"
```
### 3. Use Dependencies
Structure work with dependencies:
```bash
# Research blocks planning
bd dep add {plan-bead} {research-bead}
# Planning blocks implementation
bd dep add {impl-bead} {plan-bead}
```
### 4. Trust the Checkpoint Model
- Research beads: Let agent close them
- Implementation beads: Always validate before closing
- If in doubt, err on the side of checkpoints
### 5. Keep Plans Updated
- Check off completed items as you go
- Update notes with progress
- This enables seamless resume across sessions
## Troubleshooting
### "What bead should I work on?"
```bash
bd ready # Shows unblocked work
```
### "Where did the research go?"
```bash
ls thoughts/beads-{id}/
bd show {id} # Check notes for artifact links
```
### "Plan doesn't match reality"
```bash
/beads:iterate {id} # Update plan based on findings
```
### "Session ended mid-implementation"
```bash
bd show {id} # Check notes for progress
cat thoughts/beads-{id}/plan.md | grep "\[x\]" # See completed items
/beads:implement {id} # Resume - will pick up from last checkpoint
```
### "Bead is blocked"
```bash
bd show {id} # See what's blocking
bd blocked # See all blocked beads
```
## Migration Notes
### From Pure Humanlayer to Beads+Humanlayer
Old pattern:
```
thoughts/shared/research/2025-01-01-topic.md
thoughts/shared/plans/2025-01-01-feature.md
```
New pattern:
```
thoughts/beads-{id}/research.md
thoughts/beads-{id}/plan.md
```
The `shared/` structure still works for non-bead artifacts, but prefer per-bead directories for tracked work.
### Existing Content
- Keep existing `thoughts/shared/` content
- New bead-tracked work uses `thoughts/beads-{id}/`
- Reference old research from bead descriptions when relevant
## Design Decisions
### Phase Tracking: Artifacts vs Statuses
**Current approach**: Skills infer workflow phase from artifact presence:
- Has `research.md` → research done
- Has `plan.md` → planning done
- No artifacts → needs research/planning
**Alternative considered**: Explicit phase statuses (`needs_research`, `needs_plan`, `implementing`, etc.)
**Why artifacts win**:
1. **Single source of truth** - Status can't drift from reality
2. **Less state to maintain** - No need to update status when creating artifacts
3. **Works across repos** - No custom status config needed
4. **Skills already check artifacts** - Natural fit with existing behavior
**When explicit statuses would help**:
- Pipeline visualization (e.g., `bd list --status=needs_plan`)
- Agent self-selection by phase
- Team coordination dashboards
**Recommendation**: Keep artifact-inference as primary mechanism. If pipeline visibility becomes important, consider adding statuses that skills auto-set when creating artifacts (advisory, not enforced).
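Artifact inference can be sketched as a helper function (the name `phase_for` is hypothetical):

```bash
phase_for() {
  dir="thoughts/beads-$1"
  if [ -f "$dir/plan.md" ]; then
    echo "planned"
  elif [ -f "$dir/research.md" ]; then
    echo "researched"
  else
    echo "new"   # no artifacts yet: needs research/planning
  fi
}
phase_for "example-123"
```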
### One Bead Per Feature (Default)
**Current approach**: File one bead per logical feature. Skills handle phases internally.
**Alternative considered**: Separate beads for research → planning → implementation, linked by dependencies.
**Why single bead wins for most work**:
1. **Lower friction** - Quick idea dump without filing 3 tickets
2. **Simpler tracking** - One status to check
3. **Natural grouping** - Artifacts stay together in `thoughts/beads-{id}/`
**When to split into multiple beads**:
- Research reveals the work should be multiple features
- Different phases need different assignees
- Explicit dependency tracking matters (e.g., "auth must ship before payments")
**The discovered-work pattern**: Start with one bead. If research reveals split work, file additional beads with dependencies. Skills guide this naturally.
### Plan Requirements by Type
**Bug fixes** (`type=bug`): Can proceed without plans - usually well-scoped from bug report.
**Features/tasks** (`type=feature`, `type=task`): Should have plans - helps ensure design is sound before implementation.
This is advisory, not enforced. Skills warn but allow override for simple changes.

@@ -1,244 +0,0 @@
---
description: Manage and respond to Gitea/Forgejo PR review comments
---
# Gitea PR Review Comments
This skill enables reading PR review comments and posting inline thread replies on Gitea/Forgejo instances.
## Prerequisites
- `tea` CLI configured with a Gitea/Forgejo instance
- Access token from tea config: `~/.config/tea/config.yml`
- Repository must be a Gitea/Forgejo remote (not GitHub)
## Configuration
Get the Gitea instance URL and token from tea config:
```bash
# Get the default login URL and token
yq -r '.logins[] | select(.name == "default") | .url' ~/.config/tea/config.yml
yq -r '.logins[] | select(.name == "default") | .token' ~/.config/tea/config.yml
```
Or if you have a specific login name:
```bash
yq -r '.logins[] | select(.name == "YOUR_LOGIN") | .url' ~/.config/tea/config.yml
yq -r '.logins[] | select(.name == "YOUR_LOGIN") | .token' ~/.config/tea/config.yml
```
## Commands
### 1. List PR Review Comments
Fetch all reviews and their comments for a PR:
```bash
# Set environment variables
GITEA_URL="https://git.johnogle.info"
TOKEN="<your-token>"
OWNER="<repo-owner>"
REPO="<repo-name>"
PR_NUMBER="<pr-number>"
# Get all reviews for the PR
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews" | jq
# Get comments for a specific review
REVIEW_ID="<review-id>"
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews/$REVIEW_ID/comments" | jq
```
### 2. View All Review Comments (Combined)
```bash
# Get all reviews and their comments in one view
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews" | \
jq -r '.[] | "Review \(.id) by \(.user.login): \(.state)\n Body: \(.body)"'
# For each review, show inline comments
for REVIEW_ID in $(curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews" | jq -r '.[].id'); do
echo "=== Review $REVIEW_ID comments ==="
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews/$REVIEW_ID/comments" | \
jq -r '.[] | "[\(.path):\(.line)] \(.body)"'
done
```
### 3. Reply to Review Comments (Web Endpoint Method)
The Gitea REST API does not support replying to review comment threads. The web UI uses a different endpoint:
```
POST /{owner}/{repo}/pulls/{pr_number}/files/reviews/comments
Content-Type: multipart/form-data
```
**Required form fields:**
- `reply`: Review ID to reply to
- `content`: The reply message
- `path`: File path
- `line`: Line number
- `side`: `proposed` or `original`
- `single_review`: `true`
- `origin`: `timeline`
- `_csrf`: CSRF token (required for web endpoint)
**Authentication Challenge:**
This endpoint requires session-based authentication, not API tokens. Options:
#### Option A: Use Browser Session (Recommended)
1. Log in to Gitea in your browser
2. Open browser developer tools and copy cookies
3. Use the session cookies with curl
```bash
# First, get CSRF token from the PR page
CSRF=$(curl -s -c cookies.txt -b cookies.txt \
"$GITEA_URL/$OWNER/$REPO/pulls/$PR_NUMBER/files" | \
grep -oP 'name="_csrf" value="\K[^"]+')
# Post the reply
curl -s -b cookies.txt \
-F "reply=$REVIEW_ID" \
-F "content=Your reply message here" \
-F "path=$FILE_PATH" \
-F "line=$LINE_NUMBER" \
-F "side=proposed" \
-F "single_review=true" \
-F "origin=timeline" \
-F "_csrf=$CSRF" \
"$GITEA_URL/$OWNER/$REPO/pulls/$PR_NUMBER/files/reviews/comments"
```
#### Option B: Create Top-Level Comment (Fallback)
If thread replies are not critical, use the API to create a top-level comment:
```bash
# Create a top-level comment mentioning the review context
curl -s -X POST \
-H "Authorization: token $TOKEN" \
-H "Content-Type: application/json" \
-d "{\"body\": \"Re: @reviewer's comment on $FILE_PATH:$LINE_NUMBER\n\nYour reply here\"}" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/issues/$PR_NUMBER/comments"
```
Or use tea CLI:
```bash
tea comment $PR_NUMBER "Re: @reviewer's comment on $FILE_PATH:$LINE_NUMBER
Your reply here"
```
### 4. Submit a New Review
Create a new review with inline comments:
```bash
curl -s -X POST \
-H "Authorization: token $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"body": "Overall review comments",
"event": "COMMENT",
"comments": [
{
"path": "path/to/file.py",
"body": "Comment on this line",
"new_position": 10
}
]
}' \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews"
```
Event types: `COMMENT`, `APPROVE`, `REQUEST_CHANGES`
## Workflow Example
### Reading and Responding to Reviews
1. **Set up environment**:
```bash
export GITEA_URL=$(yq -r '.logins[] | select(.name == "default") | .url' ~/.config/tea/config.yml)
export TOKEN=$(yq -r '.logins[] | select(.name == "default") | .token' ~/.config/tea/config.yml)
export OWNER="johno"
export REPO="nixos-configs"
export PR_NUMBER="5"
```
2. **List all pending review comments**:
```bash
# Get reviews
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews" | \
jq -r '.[] | select(.state == "REQUEST_CHANGES" or .state == "COMMENT") |
"Review \(.id) by \(.user.login) (\(.state)):\n\(.body)\n"'
```
3. **Get detailed comments for a review**:
```bash
REVIEW_ID="2"
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews/$REVIEW_ID/comments" | \
jq -r '.[] | "File: \(.path):\(.line)\nComment: \(.body)\nID: \(.id)\n---"'
```
4. **Respond using top-level comment** (most reliable):
```bash
tea comment $PR_NUMBER "Addressing review feedback:
- File \`path/to/file.py\` line 10: Fixed the issue by...
- File \`other/file.py\` line 25: Updated as suggested..."
```
## API Reference
### Endpoints
| Action | Method | Endpoint |
|--------|--------|----------|
| List reviews | GET | `/api/v1/repos/{owner}/{repo}/pulls/{index}/reviews` |
| Get review | GET | `/api/v1/repos/{owner}/{repo}/pulls/{index}/reviews/{id}` |
| Get review comments | GET | `/api/v1/repos/{owner}/{repo}/pulls/{index}/reviews/{id}/comments` |
| Create review | POST | `/api/v1/repos/{owner}/{repo}/pulls/{index}/reviews` |
| Submit review | POST | `/api/v1/repos/{owner}/{repo}/pulls/{index}/reviews/{id}` |
| Delete review | DELETE | `/api/v1/repos/{owner}/{repo}/pulls/{index}/reviews/{id}` |
| Create issue comment | POST | `/api/v1/repos/{owner}/{repo}/issues/{index}/comments` |
### Review States
- `PENDING` - Draft review not yet submitted
- `COMMENT` - General comment without approval/rejection
- `APPROVE` - Approving the changes
- `REQUEST_CHANGES` - Requesting changes before merge
## Limitations
1. **Thread replies**: The Gitea REST API does not support replying directly to review comment threads. This is a known limitation. Workarounds:
- Use top-level comments with context
- Use the web UI manually for thread replies
- Implement session-based authentication to use the web endpoint
2. **CSRF tokens**: The web endpoint for thread replies requires CSRF tokens, which expire and need to be fetched from the page.
3. **Session auth**: API tokens work for REST API but not for web endpoints that require session cookies.
## Tips
- Always quote file paths and line numbers when responding via top-level comments
- Use `tea pr view $PR_NUMBER --comments` to see all comments
- Use `tea open pulls/$PR_NUMBER` to open the PR in browser for manual thread replies
- Consider using `tea pr approve $PR_NUMBER` after addressing all comments
## See Also
- Gitea API Documentation: https://docs.gitea.com/api/1.20/
- `tea` CLI: https://gitea.com/gitea/tea

@@ -1,281 +0,0 @@
---
description: Orchestrate parallel bead processing with worktrees, PRs, and reviews
---
# Parallel Beads Workflow
This skill orchestrates parallel bead processing using subagents. Each bead gets its own worktree, implementation, PR, and review.
## Phase 1: Selection
1. **Get ready beads**: Run `bd ready` to list all beads with no blockers
2. **Filter by plan readiness**:
For each ready bead, check if it's ready for batch implementation:
- **Has plan** (`thoughts/beads-{id}/plan.md` exists): Include
- **type=bug** without plan: Include (simple bugs can implement directly)
- **type=feature/task** without plan: Exclude with warning
```bash
# Check for plan existence
ls thoughts/beads-{bead-id}/plan.md 2>/dev/null
```
3. **Report skipped beads**:
If any beads were skipped, inform the user:
```
Skipped beads (no plan):
   - {bead-id}: {title} (type: feature) - Run /beads:plan {bead-id} first
   - {bead-id}: {title} (type: task) - Run /beads:plan {bead-id} first
```
4. **Present selection**: Use `AskUserQuestion` with `multiSelect: true` to let the user choose which beads to work on
- Include bead ID and title for each option
- Only show beads that passed the plan check
- Allow selection of multiple beads
Example:
```
AskUserQuestion with:
- question: "Which beads do you want to work on in parallel?"
- multiSelect: true
- options from filtered bd ready output
```
## Phase 2: Parallel Implementation
For each selected bead, launch a subagent using the Task tool. All subagents should be launched in parallel (single message with multiple Task tool calls).
### Subagent Instructions Template
Each implementation subagent should receive these instructions:
```
Work on bead [BEAD_ID]: [BEAD_TITLE]
1. **Create worktree**:
- Branch name: `bead/[BEAD_ID]`
- Worktree path: `~/wt/[REPO_NAME]/[BEAD_ID]`
- Command: `git worktree add -b bead/[BEAD_ID] ~/wt/[REPO_NAME]/[BEAD_ID]`
2. **Review the bead requirements**:
- Run `bd show [BEAD_ID]` to understand the acceptance criteria
- Note any external issue references (GitHub issues, Linear tickets, etc.)
3. **Extract validation criteria**:
- Check for a plan: `thoughts/beads-[BEAD_ID]/plan.md`
- If plan exists:
- Read the plan and find the "Automated Verification" section
- Extract each verification command (lines starting with `- [ ]` followed by a command)
- Example: `- [ ] Tests pass: \`make test\`` → extract `make test`
- If no plan exists, use best-effort validation:
- Check if `Makefile` exists → try `make test` and `make lint`
- Check if `flake.nix` exists → try `nix flake check`
- Check if `package.json` exists → try `npm test`
- If none found, note "No validation criteria found"
4. **Implement the changes**:
- Work in the worktree directory
- Complete all acceptance criteria listed in the bead
After implementation, run validation:
- Execute each validation command from step 3
- Track results in this format:
```
VALIDATION_RESULTS:
- make test: PASS
- make lint: FAIL (exit code 1: src/foo.ts:23 - missing semicolon)
- nix flake check: SKIP (command not found)
```
- If any validation fails:
- Continue with PR creation (don't block)
- Document failures in bead notes: `bd update [BEAD_ID] --notes="Validation failures: [list]"`
5. **Commit and push**:
- Stage all changes: `git add -A`
- Create a descriptive commit message
- Push the branch: `git push -u origin bead/[BEAD_ID]`
6. **Create a PR**:
- Detect hosting provider from origin URL: `git remote get-url origin`
- If URL contains `github.com`, use `gh`; otherwise use `tea` (Gitea/Forgejo)
- PR title: "[BEAD_ID] [BEAD_TITLE]"
- PR body must include:
- Reference to bead ID: "Implements bead: [BEAD_ID]"
- Any external issue references from the bead (e.g., "Closes #123")
- Summary of changes
- For GitHub (`gh`):
```bash
gh pr create --title "[BEAD_ID] [BEAD_TITLE]" --body "$(cat <<'EOF'
## Summary
[Brief description of changes]
## Bead Reference
Implements bead: [BEAD_ID]
## External Issues
[Any linked issues from the bead]
## Changes
- [List of changes made]
## Validation
[Include validation results from step 4]
| Check | Status | Details |
|-------|--------|---------|
| make test | PASS | |
| make lint | FAIL | src/foo.ts:23 - missing semicolon |
| nix flake check | SKIP | command not found |
EOF
)"
```
- For Gitea (`tea`):
```bash
tea pr create --head bead/[BEAD_ID] --base main \
--title "[BEAD_ID] [BEAD_TITLE]" \
--description "## Summary
[Brief description of changes]
## Bead Reference
Implements bead: [BEAD_ID]
## External Issues
[Any linked issues from the bead]
## Changes
- [List of changes made]
## Validation
[Include validation results from step 4]
| Check | Status | Details |
|-------|--------|---------|
| make test | PASS | |
| make lint | FAIL | src/foo.ts:23 - missing semicolon |
| nix flake check | SKIP | command not found |"
```
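   The gh-vs-tea choice used above can be factored into a small helper (a sketch; the `pr_tool` name is an illustrative addition):
   ```bash
   # Hypothetical helper: choose the PR CLI from the origin remote URL.
   pr_tool() {
     case "$(git remote get-url origin)" in
       *github.com*) echo gh ;;
       *) echo tea ;;  # Gitea/Forgejo
     esac
   }
   ```
   Then the create command becomes, e.g., `$(pr_tool) pr create ...`.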
7. **Update bead status**:
- Mark the bead as "in_review": `bd update [BEAD_ID] --status=in_review`
   - Add the PR URL to the bead notes (note: `bd show --json` returns an array, so index with `.[0]`): `bd update [BEAD_ID] --notes="$(bd show [BEAD_ID] --json | jq -r '.[0].notes')
PR: [PR_URL]"`
8. **Report results**:
- Return:
- PR URL
- Bead ID
- Implementation status (success/failure/blocked)
- Validation summary: `X passed, Y failed, Z skipped`
- List of any validation failures with details
- If blocked or unable to complete, explain what's blocking progress
- If validation failed, include the specific failures so the main agent can summarize them for the user
```
### Launching Subagents
Use `subagent_type: "general-purpose"` for implementation subagents. Launch all selected beads' subagents in a single message for parallel execution:
```
<Task calls for each selected bead - all in one message>
```
Collect results from all subagents before proceeding.
## Phase 3: Parallel Review
After all implementation subagents complete, launch review subagents for each PR.
### Review Subagent Instructions Template
```
Review PR for bead [BEAD_ID]
1. **Detect hosting provider**: Run `git remote get-url origin` - if it contains `github.com` use `gh`, otherwise use `tea`
2. **Read the PR**:
- For GitHub: `gh pr view [PR_NUMBER] --json title,body,additions,deletions,files`
   - For Gitea: `tea pr list` and locate the PR by its INDEX (`tea pr view [PR_NUMBER]` lists all PRs rather than a single one)
- View the diff: `git diff main...bead/[BEAD_ID]`
3. **Review against acceptance criteria**:
- Run `bd show [BEAD_ID]` to get the acceptance criteria
- Verify each criterion is addressed
4. **Leave review comments**:
- For GitHub: `gh pr review [PR_NUMBER] --comment --body "[COMMENTS]"`
- For Gitea: `tea pr review [PR_NUMBER] --comment "[COMMENTS]"`
- Include:
- Acceptance criteria checklist (which are met, which might be missing)
- Code quality observations
- Suggestions for improvement
5. **Return summary**:
- Overall assessment (ready to merge / needs changes)
- Key findings
```
Launch all review subagents in parallel.
## Phase 4: Cleanup and Summary
After reviews complete:
1. **Clean up worktrees**:
```bash
git worktree remove ~/wt/[REPO_NAME]/[BEAD_ID] --force
```
Do this for each bead's worktree.
2. **Provide final summary**:
Present a table or list with:
- Bead ID
- PR URL
- Status (success / failed / blocked)
- Validation summary (X/Y passed)
- Review summary
- Any failures or blockers encountered
If any validation failures occurred, list them in a "Validation Failures" section so the user can address them.
Example output:
```
## Parallel Beads Summary
| Bead | PR | Bead Status | Validation | Review |
|------|-----|-------------|------------|--------|
| beads-abc | #123 | in_review | 3/3 passed | Approved |
| beads-xyz | #124 | in_review | 2/3 passed | Needs changes |
| beads-123 | - | open (failed) | - | Blocked by missing dependency |
### Validation Failures
- beads-xyz: `make lint` failed - src/foo.ts:23 missing semicolon
### Failures/Blockers
- beads-123: Could not complete because [reason]
### Next Steps
- Fix validation failures before merging
- Review PRs that need changes
- Address blockers for failed beads
- Run `/reconcile_beads` after PRs are merged to close beads
```
## Error Handling
- **Subagent failures**: If a subagent fails or times out, note it in the summary but continue with other beads
- **PR creation failures**: Report the error but continue with reviews of successful PRs
- **Worktree conflicts**: If a worktree already exists, ask the user if they want to remove it or skip that bead
## Resource Limits
- Consider limiting concurrent subagents to 3-5 to avoid overwhelming system resources
- If user selects more beads than the limit, process them in batches
## Notes
- This workflow integrates with the beads system (`bd` commands)
- Worktrees are created in `~/wt/[REPO_NAME]/` by convention
- Each bead gets its own isolated branch and worktree
- PRs automatically reference the bead ID for traceability

View File

@@ -1,88 +0,0 @@
---
description: Reconcile beads with merged PRs and close completed beads
---
# Reconcile Beads Workflow
This skill reconciles beads that are in `in_review` status with their corresponding PRs. If a PR has been merged, the bead is closed.
## Prerequisites
- Custom status `in_review` must be configured: `bd config set status.custom "in_review"`
- Beads in `in_review` status should have a PR URL in their notes
## Workflow
### Step 1: Find beads in review
```bash
bd list --status=in_review
```
### Step 2: For each bead, check PR status
1. **Get the PR URL from bead notes**:
```bash
bd show [BEAD_ID] --json | jq -r '.[0].notes'
```
Note: `bd show --json` returns an array, so use `.[0]` to access the first element.
Extract the PR URL (look for lines starting with "PR:" or containing pull request URLs).
Extract the PR number: `echo "$NOTES" | grep -oP '/pulls/\K\d+'`
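A sketch of the extraction above (assumes the notes contain a full pull-request URL; the function name is illustrative, and GitHub-style `/pull/N` URLs are also matched):

```bash
# Hypothetical sketch: read bead notes on stdin, print the number of the
# first pull-request URL found (Gitea "/pulls/N" or GitHub "/pull/N").
extract_pr_number() {
  grep -oE 'https?://[^[:space:]]+/pulls?/[0-9]+' | head -n1 | grep -oE '[0-9]+$'
}
```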
2. **Detect hosting provider**:
- Run `git remote get-url origin`
- If URL contains `github.com`, use `gh`; otherwise use `tea` (Gitea/Forgejo)
3. **Check PR status**:
- For GitHub:
```bash
gh pr view [PR_NUMBER] --json state,merged
```
- For Gitea:
```bash
tea pr list --state=closed
```
Look for the PR number in the INDEX column with STATE "merged".
Note: `tea pr view [PR_NUMBER]` lists all PRs, not a specific one. Use `tea pr list --state=closed` and look for your PR number in the results.
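The Gitea-side check can be sketched as a filter over `tea pr list` output (the exact column layout of `tea`'s table is an assumption; adjust the match if your version differs):

```bash
# Hypothetical sketch: succeed if the given PR index appears with state
# "merged" in `tea pr list --state=closed` output.
pr_is_merged() {
  tea pr list --state=closed | awk -v n="$1" '$1 == n && /merged/ { found = 1 } END { exit !found }'
}
```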
### Step 3: Close merged beads
If the PR is merged:
```bash
bd close [BEAD_ID] --reason="PR merged: [PR_URL]"
```
### Step 4: Report summary
Present results:
```
## Beads Reconciliation Summary
### Closed (PR Merged)
| Bead | PR | Title |
|------|-----|-------|
| beads-abc | #123 | Feature X |
| beads-xyz | #456 | Bug fix Y |
### Still in Review
| Bead | PR | Status | Title |
|------|-----|--------|-------|
| beads-def | #789 | Open | Feature Z |
### Issues Found
- beads-ghi: No PR URL found in notes
- beads-jkl: PR #999 not found (may have been deleted)
```
## Error Handling
- **Missing PR URL**: Skip the bead and report it
- **PR not found**: Report the error but continue with other beads
- **API errors**: Report and continue
## Notes
- This skill complements `/parallel_beads` which sets beads to `in_review` status
- Run this skill periodically or after merging PRs to keep beads in sync
- Beads with closed (but not merged) PRs are not automatically closed - they may need rework

View File

@@ -23,90 +23,55 @@ let
     if pkgs.stdenv.isDarwin
     then pkgs.emacs-macport.pkgs.withPackages emacsPackages
     else pkgs.emacs.pkgs.withPackages emacsPackages;
-
-  # Path to doom config directory (relative to this file)
-  doomConfigDir = ./doom;
 in
 {
   options.home.roles.emacs = {
     enable = mkEnableOption "Doom Emacs with vterm and tree-sitter support";
-
-    prebuiltDoom = mkOption {
-      type = types.bool;
-      default = false;
-      description = ''
-        Use nix-doom-emacs-unstraightened to pre-build all Doom packages at
-        nix build time. This eliminates the need to run `doom sync` after
-        first boot, making it ideal for live USB images or immutable systems.
-        When enabled, the doom configuration is read-only (stored in nix store).
-      '';
-    };
   };

-  config = mkIf cfg.enable (mkMerge [
-    # Common configuration for both modes
-    {
-      home.packages = [
-        pkgs.emacs-all-the-icons-fonts
-        pkgs.fira-code
-        pkgs.fontconfig
-        pkgs.graphviz
-        pkgs.isort
-        pkgs.nerd-fonts.fira-code
-        pkgs.nerd-fonts.droid-sans-mono
-        pkgs.nil # nix lsp language server
-        pkgs.nixfmt-rfc-style
-        (pkgs.ripgrep.override {withPCRE2 = true;})
-        pkgs.pipenv
-        pkgs.poetry
-        pkgs.python3
-      ];
-
-      fonts.fontconfig.enable = true;
-    }
-
-    # Standard Doom Emacs mode (requires doom sync at runtime)
-    (mkIf (!cfg.prebuiltDoom) {
-      programs.emacs = {
-        enable = true;
-        package = defaultEmacsPackage;
-      };
+  config = mkIf cfg.enable {
+    home.packages = [
+      pkgs.emacs-all-the-icons-fonts
+      pkgs.fira-code
+      pkgs.fontconfig
+      pkgs.graphviz
+      pkgs.isort
+      pkgs.nerd-fonts.fira-code
+      pkgs.nerd-fonts.droid-sans-mono
+      pkgs.nil # nix lsp language server
+      pkgs.nixfmt-rfc-style
+      (pkgs.ripgrep.override {withPCRE2 = true;})
+      pkgs.pipenv
+      pkgs.poetry
+      pkgs.python3
+    ];
+
+    programs.emacs = {
+      enable = true;
+      package = defaultEmacsPackage;
+    };
+
+    fonts.fontconfig.enable = true;

     # Mount emacs and tree-sitter grammars from nix store
     home.file = {
       "${config.xdg.configHome}/emacs".source = doomEmacs;
     };

     home.sessionPath = [
       "${config.xdg.configHome}/emacs/bin"
     ];

     home.sessionVariables = {
       DOOMDIR = "${config.xdg.configHome}/doom";
       DOOMLOCALDIR = "${config.xdg.dataHome}/doom";
     };

     # TODO: Use mkOutOfStoreSymlink instead?
     home.activation.doomConfig = lib.hm.dag.entryAfter ["writeBoundary"] ''
       # Always remove and recreate the symlink to ensure it points to the source directory
       rm -rf "${config.xdg.configHome}/doom"
       ln -sf "${config.home.homeDirectory}/nixos-configs/home/roles/emacs/doom" "${config.xdg.configHome}/doom"
     '';
-    })
-
-    # Pre-built Doom Emacs mode (no doom sync needed - ideal for live USB)
-    (mkIf cfg.prebuiltDoom {
-      programs.doom-emacs = {
-        enable = true;
-        doomDir = doomConfigDir;
-        doomLocalDir = "${config.xdg.dataHome}/doom";
-
-        # Add extra packages that aren't part of Doom but needed for our config
-        extraPackages = epkgs: [
-          epkgs.vterm
-          epkgs.treesit-grammars.with-all-grammars
-        ];
-      };
-    })
-  ]);
+  };
 }

View File

@@ -167,20 +167,6 @@
         claude-code-ide-window-side 'right
         claude-code-ide-window-width 90))

-(use-package! beads
-  :commands (beads)
-  :init
-  (map! :leader
-        (:prefix ("o" . "open")
-         (:prefix ("B" . "beads")
-          :desc "List issues" "B" (cmd! (require 'beads) (beads-list))
-          :desc "Project issues" "p" (cmd! (require 'beads) (beads-project-list))
-          :desc "Activity feed" "a" (cmd! (require 'beads) (beads-activity))
-          :desc "Stale issues" "s" (cmd! (require 'beads) (beads-stale))
-          :desc "Orphaned issues" "o" (cmd! (require 'beads) (beads-orphans))
-          :desc "Find duplicates" "d" (cmd! (require 'beads) (beads-duplicates))
-          :desc "Lint issues" "l" (cmd! (require 'beads) (beads-lint))))))
-
 (after! gptel
   (require 'gptel-tool-library)
   (setq gptel-tool-library-use-maybe-safe t
@@ -225,11 +211,16 @@
       mu4e-headers-time-format "%H:%M")

 ;; Sending mail via msmtp
-(setq message-send-mail-function 'message-send-mail-with-sendmail
-      sendmail-program (executable-find "msmtp")
-      message-sendmail-envelope-from 'header
-      mail-envelope-from 'header
-      mail-specify-envelope-from t))
+;; NOTE: message-sendmail-f-is-evil and --read-envelope-from are required
+;; to prevent msmtp from stripping the email body when processing headers.
+;; Without these, multipart messages (especially from org-msg) may arrive
+;; with empty bodies.
+(setq sendmail-program (executable-find "msmtp")
+      send-mail-function #'message-send-mail-with-sendmail
+      message-send-mail-function #'message-send-mail-with-sendmail
+      message-sendmail-f-is-evil t
+      message-sendmail-extra-arguments '("--read-envelope-from")
+      message-sendmail-envelope-from 'header))

 ;; Whenever you reconfigure a package, make sure to wrap your config in an
 ;; `after!' block, otherwise Doom's defaults may override your settings. E.g.

View File

@@ -51,21 +51,11 @@
 ;; (package! org-caldav)

-;; Note: Packages with custom recipes must be pinned for nix-doom-emacs-unstraightened
-;; to build deterministically. Update pins when upgrading packages.
 (package! gptel :recipe (:nonrecursive t))

 (package! claude-code-ide
-  :recipe (:host github :repo "manzaltu/claude-code-ide.el")
-  :pin "760240d7f03ff16f90ede9d4f4243cd94f3fed73")
+  :recipe (:host github :repo "manzaltu/claude-code-ide.el"))

 (package! gptel-tool-library
   :recipe (:host github :repo "aard-fi/gptel-tool-library"
-           :files ("*.el"))
-  :pin "baffc3b0d74a2b7cbda0d5cd6dd7726d6ccaca83")
+           :files ("*.el")))
-
-(package! beads
-  :recipe (:type git :repo "https://codeberg.org/ctietze/beads.el.git"
-           :files ("lisp/*.el"))
-  :pin "f40a6461d3c0fa0969311bbb6a1e30d1bba86c88")

View File

@@ -1,72 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.home.roles.starship;
in
{
options.home.roles.starship = {
enable = mkEnableOption "starship cross-shell prompt";
};
config = mkIf cfg.enable {
programs.starship = {
enable = true;
enableBashIntegration = true;
enableZshIntegration = true;
settings = {
add_newline = true;
character = {
success_symbol = "[>](bold green)";
error_symbol = "[x](bold red)";
vimcmd_symbol = "[<](bold green)";
};
directory = {
truncation_length = 4;
truncate_to_repo = true;
};
git_branch = {
symbol = "";
format = "[$symbol$branch(:$remote_branch)]($style) ";
};
git_status = {
format = "([$all_status$ahead_behind]($style) )";
};
nix_shell = {
symbol = "";
format = "[$symbol$state( \\($name\\))]($style) ";
};
cmd_duration = {
min_time = 2000;
format = "[$duration]($style) ";
};
# Disable modules that are noisy or rarely needed
package.disabled = true;
nodejs.disabled = true;
python.disabled = true;
ruby.disabled = true;
java.disabled = true;
golang.disabled = true;
rust.disabled = true;
php.disabled = true;
lua.disabled = true;
perl.disabled = true;
terraform.disabled = true;
kubernetes.disabled = true;
docker_context.disabled = true;
aws.disabled = true;
gcloud.disabled = true;
azure.disabled = true;
};
};
};
}

View File

@@ -26,7 +26,6 @@ with lib;
     enable = true;
     autologin = true;
     wayland = true;
-    appLauncherServer = true;
     jellyfinScaleFactor = 1.0;
   };
   nfs-mounts.enable = true;

View File

@@ -170,7 +170,6 @@ This document outlines the plan to migrate the john-endesktop server from Arch L
 ```bash
 blkid /dev/nvme0n1p5
 # Note the UUID for updating hardware-configuration.nix
-/dev/nvme0n1p5: LABEL="nixos" UUID="5f4ad025-bfab-4aed-a933-6638348059e5" UUID_SUB="4734d820-7b8a-4b7f-853a-026021c1d204" BLOCK_SIZE="4096" TYPE="btrfs" PARTLABEL="data" PARTUUID="9ea025df-cdb7-48fd-b5d4-37cd5d8588eb"
 ```

 8. **Copy your NixOS configuration to the server**
@@ -389,11 +388,11 @@ After successful migration and 24-48 hours of stable operation:
 Pre-migration:
 - [x] nvme0n1p5 removal from media pool complete
-- [x] Recent backup verified (< 24 hours)
-- [x] Maintenance window scheduled
-- [x] NixOS ISO downloaded
-- [x] Bootable USB created
-- [x] NixOS config builds successfully
+- [ ] Recent backup verified (< 24 hours)
+- [ ] Maintenance window scheduled
+- [ ] NixOS ISO downloaded
+- [ ] Bootable USB created
+- [ ] NixOS config builds successfully

 During migration:
 - [ ] ZFS pools exported

View File

@@ -104,23 +104,6 @@ with lib;
   # User configuration
   roles.users.enable = true;

-  # k3s agent configuration
-  roles.k3s-node = {
-    enable = true;
-    role = "agent";
-    # serverAddr defaults to https://10.0.0.222:6443
-    # tokenFile defaults to /etc/k3s/token
-    extraFlags = [
-      # Node labels for workload scheduling
-      # fast-cpu: This node has a faster CPU than other cluster nodes
-      "--node-label=fast-cpu=true"
-      # fast-storage: This node is the NFS host with fast local storage access
-      "--node-label=fast-storage=true"
-      # k3s-upgrade=disabled: NixOS manages k3s upgrades via Nix, not system-upgrade-controller
-      "--node-label=k3s-upgrade=disabled"
-    ];
-  };

   # Time zone
   time.timeZone = "America/Los_Angeles"; # Adjust as needed

View File

@@ -41,9 +41,14 @@
   boot.initrd.luks.devices."luks-b614167b-9045-4234-a441-ac6f60a96d81".device = "/dev/disk/by-uuid/b614167b-9045-4234-a441-ac6f60a96d81";

   services.logind.settings.Login = {
+    HandleLidSwitch = "suspend-then-hibernate";
     HandlePowerKey = "hibernate";
     HandlePowerKeyLongPress = "poweroff";
   };

+  systemd.sleep.extraConfig = ''
+    HibernateDelaySec=30m
+    SuspendState=mem
+  '';
+
   networking.hostName = "nix-book"; # Define your hostname.
   # networking.wireless.enable = true; # Enables wireless support via wpa_supplicant.

View File

@@ -17,15 +17,6 @@
     enable = true;
     wayland = true;
   };
-  nvidia = {
-    enable = true;
-    package = "latest";
-    graphics.extraPackages = with pkgs; [
-      mesa
-      libvdpau-va-gl
-      libva-vdpau-driver
-    ];
-  };
   users.enable = true;
 };
@@ -38,13 +29,28 @@
 wsl.wslConf.network.hostname = "wixos";
 wsl.wslConf.user.default = "johno";

-# WSL-specific environment variables for graphics
+services.xserver.videoDrivers = [ "nvidia" ];
+
+hardware.graphics = {
+  enable = true;
+  extraPackages = with pkgs; [
+    mesa
+    libvdpau-va-gl
+    libva-vdpau-driver
+  ];
+};
+
 environment.sessionVariables = {
   LD_LIBRARY_PATH = [
     "/usr/lib/wsl/lib"
     "/run/opengl-driver/lib"
   ];
 };

+hardware.nvidia = {
+  modesetting.enable = true;
+  nvidiaSettings = true;
+  open = true;
+  package = config.boot.kernelPackages.nvidiaPackages.latest;
+};
+
 # This value determines the NixOS release from which the default
 # settings for stateful data, like file locations and database versions

View File

@@ -25,12 +25,8 @@ with lib;
     wayland = true;
     x11 = true;
   };
-  kodi.enable = true;
   nfs-mounts.enable = true;
-  nvidia = {
-    enable = true;
-    graphics.enable32Bit = true;
-  };
+  nvidia.enable = true;
   printing.enable = true;
   remote-build.enableBuilder = true;
   users.enable = true;
@@ -51,11 +47,27 @@ with lib;
   # Fix dual boot clock sync - tell Linux to use local time for hardware clock
   time.hardwareClockInLocalTime = true;

+  # NVIDIA Graphics configuration
+  services.xserver.videoDrivers = [ "nvidia" ];
+  hardware.graphics.enable = true;
+  hardware.graphics.enable32Bit = true;
+
   # Set DP-0 as primary display with 164.90Hz refresh rate
   services.xserver.displayManager.sessionCommands = ''
     ${pkgs.xorg.xrandr}/bin/xrandr --output DP-0 --mode 3440x1440 --rate 164.90 --primary
   '';

+  hardware.nvidia = {
+    modesetting.enable = true;
+    nvidiaSettings = true;
+    package = pkgs.linuxPackages.nvidiaPackages.stable;
+    open = true;
+    # For gaming performance
+    powerManagement.enable = false;
+    powerManagement.finegrained = false;
+  };
+
   services.ollama = {
     enable = true;
     acceleration = "cuda";

View File

@@ -1,6 +1,8 @@
 { pkgs, ... }:
 {
+  vulkanHDRLayer = pkgs.callPackage ./vulkan-hdr-layer {};
   tea-rbw = pkgs.callPackage ./tea-rbw {};
   app-launcher-server = pkgs.callPackage ./app-launcher-server {};
   claude-code = pkgs.callPackage ./claude-code {};
+  perles = pkgs.callPackage ./perles {};
 }

View File

@@ -0,0 +1,26 @@
{ lib, buildGoModule, fetchFromGitHub }:
buildGoModule rec {
pname = "perles";
version = "unstable-2025-01-09";
src = fetchFromGitHub {
owner = "zjrosen";
repo = "perles";
rev = "main";
hash = "sha256-JgRayb4+mJ1r0AtdnQfqAw2+QRte+licsfZOaRgYqcs=";
};
vendorHash = "sha256-R7UWTdBuPteneRqxrWK51nqLtZwDsqQoMAcohN4fyak=";
# Tests require a real git repository context
doCheck = false;
meta = with lib; {
description = "A TUI for the Beads issue tracking system with BQL query language";
homepage = "https://github.com/zjrosen/perles";
license = licenses.mit;
maintainers = [ ];
mainProgram = "perles";
};
}

View File

@@ -0,0 +1,34 @@
{ lib, stdenv, fetchFromGitHub, meson, pkg-config, vulkan-loader, ninja, writeText, vulkan-headers, vulkan-utility-libraries, jq, libX11, libXrandr, libxcb, wayland, wayland-scanner }:
stdenv.mkDerivation rec {
pname = "vulkan-hdr-layer";
version = "63d2eec";
src = (fetchFromGitHub {
owner = "Zamundaaa";
repo = "VK_hdr_layer";
rev = "869199cd2746e7f69cf19955153080842b6dacfc";
fetchSubmodules = true;
hash = "sha256-xfVYI+Aajmnf3BTaY2Ysg5fyDO6SwDFGyU0L+F+E3is=";
}).overrideAttrs (_: {
GIT_CONFIG_COUNT = 1;
GIT_CONFIG_KEY_0 = "url.https://github.com/.insteadOf";
GIT_CONFIG_VALUE_0 = "git@github.com:";
});
nativeBuildInputs = [ vulkan-headers meson ninja pkg-config jq ];
buildInputs = [ vulkan-headers vulkan-loader vulkan-utility-libraries libX11 libXrandr libxcb wayland wayland-scanner ];
# Help vulkan-loader find the validation layers
setupHook = writeText "setup-hook" ''
addToSearchPath XDG_DATA_DIRS @out@/share
'';
meta = with lib; {
description = "Layers providing Vulkan HDR";
homepage = "https://github.com/Zamundaaa/VK_hdr_layer";
platforms = platforms.linux;
license = licenses.mit;
};
}

View File

@@ -24,6 +24,14 @@ in
     pulse.enable = true;
   };

+  services.pulseaudio = {
+    package = pkgs.pulseaudioFull;
+    extraConfig = ''
+      load-module module-combine-sink
+      load-module module-switch-on-connect
+    '';
+  };
+
   services.squeezelite = {
     #enable = true;
     pulseAudio = true;

View File

@@ -1,35 +0,0 @@
# Common configuration shared between NixOS and Darwin
{ lib, pkgs, ... }:
{
config = {
time.timeZone = "America/Los_Angeles";
environment.systemPackages = with pkgs; [
git
glances
pciutils
tree
usbutils
vim
];
nix = {
package = pkgs.nix;
settings = {
experimental-features = [ "nix-command" "flakes" ];
max-jobs = "auto";
trusted-users = [ "johno" ];
substituters = [
];
};
gc = {
automatic = true;
options = "--delete-older-than 10d";
};
};
nixpkgs.config.allowUnfree = true;
};
}

View File

@@ -7,10 +7,6 @@ let
   setEnvironmentPath = "${config.system.build.setEnvironment}";
 in
 {
-  imports = [
-    ./common.nix
-  ];
-
   config = {
     # Salt manages /etc/bashrc, /etc/zshrc, /etc/zshenv
     # nix-darwin writes to .local variants for nix-specific configuration
@@ -47,6 +43,8 @@ in
       fi
     '';

+    time.timeZone = "America/Los_Angeles";
+
     # System preferences
     system.defaults = {
       # Custom keyboard shortcuts
@@ -81,5 +79,42 @@ in
       };
     };
   };
+
+    environment.systemPackages = with pkgs; [
+      git
+      glances
+      pciutils
+      tree
+      usbutils
+      vim
+    ];
+
+    nix = {
+      package = pkgs.nix;
+      # distributedBuilds = true;
+      # buildMachines = [{
+      #   hostName = "z790prors.oglehome";
+      #   system = "x86_64-linux";
+      #   protocol = "ssh-ng";
+      #   sshUser = "johno";
+      #   sshKey = "/root/.ssh/id_ed25519";
+      #   maxJobs = 3;
+      #   speedFactor = 2;
+      # }];
+      settings = {
+        experimental-features = [ "nix-command" "flakes" ];
+        max-jobs = "auto";
+        trusted-users = [ "johno" ];
+        substituters = [
+        ];
+      };
+      gc = {
+        automatic = true;
+        options = "--delete-older-than 10d";
+      };
+    };
+
+    nixpkgs.config.allowUnfree = true;
   };
 }

View File

@@ -4,12 +4,10 @@ with lib;
 {
   imports = [
-    ./common.nix
     ./audio
     ./bluetooth
     ./btrfs
     ./desktop
-    ./k3s-node
     ./kodi
     ./nfs-mounts
     ./nvidia
@@ -33,6 +31,7 @@ with lib;
     LC_TELEPHONE = "en_US.UTF-8";
     LC_TIME = "en_US.UTF-8";
   };
+  time.timeZone = "America/Los_Angeles";

   services.xserver.xkb = {
     layout = "us";
@@ -50,7 +49,42 @@ with lib;
   # Enable the OpenSSH daemon.
   services.openssh.enable = true;

-  # NixOS-specific gc option (not available on Darwin)
-  nix.gc.randomizedDelaySec = "14m";
+  environment.systemPackages = with pkgs; [
+    git
+    glances
+    pciutils
+    tree
+    usbutils
+    vim
+  ];
+
+  nix = {
+    package = pkgs.nix;
+    # distributedBuilds = true;
+    # buildMachines = [{
+    #   hostName = "z790prors.oglehome";
+    #   system = "x86_64-linux";
+    #   protocol = "ssh-ng";
+    #   sshUser = "johno";
+    #   sshKey = "/root/.ssh/id_ed25519";
+    #   maxJobs = 3;
+    #   speedFactor = 2;
+    # }];
+    settings = {
+      experimental-features = [ "nix-command" "flakes" ];
+      max-jobs = "auto";
+      trusted-users = [ "johno" ];
+      substituters = [
+      ];
+    };
+    gc = {
+      automatic = true;
+      randomizedDelaySec = "14m";
+      options = "--delete-older-than 10d";
+    };
+  };
+
+  nixpkgs.config.allowUnfree = true;
 };
 }

View File

@@ -1,81 +0,0 @@
{ lib, config, pkgs, ... }:
with lib;
let
cfg = config.roles.k3s-node;
in
{
options.roles.k3s-node = {
enable = mkEnableOption "Enable k3s node";
role = mkOption {
type = types.enum [ "server" "agent" ];
default = "agent";
description = "k3s role: server (control plane) or agent (worker)";
};
serverAddr = mkOption {
type = types.str;
default = "https://10.0.0.222:6443";
description = "URL of k3s server to join (required for agents, used for HA servers)";
};
tokenFile = mkOption {
type = types.path;
default = "/etc/k3s/token";
description = "Path to file containing the cluster join token";
};
clusterInit = mkOption {
type = types.bool;
default = false;
description = "Initialize a new cluster (first server only)";
};
extraFlags = mkOption {
type = types.listOf types.str;
default = [];
description = "Additional flags to pass to k3s";
};
gracefulNodeShutdown = mkOption {
type = types.bool;
default = true;
description = "Enable graceful node shutdown";
};
openFirewall = mkOption {
type = types.bool;
default = true;
description = "Open firewall ports for k3s";
};
};
config = mkIf cfg.enable {
# k3s service configuration
services.k3s = {
enable = true;
role = cfg.role;
tokenFile = cfg.tokenFile;
extraFlags = cfg.extraFlags;
gracefulNodeShutdown.enable = cfg.gracefulNodeShutdown;
serverAddr = if (cfg.role == "agent" || !cfg.clusterInit) then cfg.serverAddr else "";
clusterInit = cfg.role == "server" && cfg.clusterInit;
};
# Firewall rules for k3s
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [
6443 # k3s API server
10250 # kubelet metrics
] ++ optionals (cfg.role == "server") [
2379 # etcd clients (HA)
2380 # etcd peers (HA)
];
allowedUDPPorts = [
8472 # flannel VXLAN
];
};
};
}

View File

@@ -22,7 +22,7 @@ in
   appLauncherServer = {
     enable = mkOption {
       type = types.bool;
-      default = false;
+      default = true;
       description = "Enable HTTP app launcher server for remote control";
     };
     port = mkOption {
View File

@@ -8,89 +8,9 @@ in
 {
   options.roles.nvidia = {
     enable = mkEnableOption "Enable the nvidia role";
-
-    # Driver configuration options
-    open = mkOption {
-      type = types.bool;
-      default = true;
-      description = "Use the open source nvidia kernel driver (for Turing and newer GPUs).";
-    };
-
-    modesetting = mkOption {
-      type = types.bool;
-      default = true;
-      description = "Enable kernel modesetting for nvidia.";
-    };
-
-    nvidiaSettings = mkOption {
-      type = types.bool;
-      default = true;
-      description = "Enable the nvidia-settings GUI.";
-    };
-
-    package = mkOption {
-      type = types.enum [ "stable" "latest" "beta" "vulkan_beta" "production" ];
-      default = "stable";
-      description = "The nvidia driver package to use.";
-    };
-
-    powerManagement = {
-      enable = mkOption {
-        type = types.bool;
-        default = false;
-        description = "Enable nvidia power management (useful for laptops, not recommended for desktops).";
-      };
-      finegrained = mkOption {
-        type = types.bool;
-        default = false;
-        description = "Enable fine-grained power management for Turing and newer GPUs.";
-      };
-    };
-
-    graphics = {
-      enable = mkOption {
-        type = types.bool;
-        default = true;
-        description = "Enable hardware graphics support.";
-      };
-      enable32Bit = mkOption {
-        type = types.bool;
-        default = false;
-        description = "Enable 32-bit graphics libraries (needed for some games).";
-      };
-      extraPackages = mkOption {
-        type = types.listOf types.package;
-        default = [];
-        description = "Extra packages to add to hardware.graphics.extraPackages.";
-      };
-    };
   };

   config = mkIf cfg.enable {
-    # Set xserver video driver
-    services.xserver.videoDrivers = [ "nvidia" ];
-
-    # Graphics configuration
-    hardware.graphics = {
-      enable = cfg.graphics.enable;
-      enable32Bit = cfg.graphics.enable32Bit;
-      extraPackages = cfg.graphics.extraPackages;
-    };
-
-    # NVIDIA driver configuration
-    hardware.nvidia = {
-      modesetting.enable = cfg.modesetting;
-      nvidiaSettings = cfg.nvidiaSettings;
-      open = cfg.open;
-      package = config.boot.kernelPackages.nvidiaPackages.${cfg.package};
-      powerManagement.enable = cfg.powerManagement.enable;
-      powerManagement.finegrained = cfg.powerManagement.finegrained;
-    };
-
-    # Additional packages for nvidia support
     environment.systemPackages = with pkgs; [
       libva-utils
       nvidia-vaapi-driver

View File

@@ -1,22 +0,0 @@
#!/usr/bin/env bash
# Build Live USB ISO from flake configuration
# Creates an uncompressed ISO suitable for Ventoy and other USB boot tools
# Usage: nix run .#build-liveusb
# Or: ./scripts/build-liveusb.sh
set -euo pipefail
REPO_ROOT="${REPO_ROOT:-$(git rev-parse --show-toplevel 2>/dev/null || pwd)}"
echo "Building Live USB ISO..."
nix build "${REPO_ROOT}#nixosConfigurations.live-usb.config.system.build.isoImage" --show-trace
if ls "${REPO_ROOT}/result/iso/"*.iso 1> /dev/null 2>&1; then
iso_file=$(ls "${REPO_ROOT}/result/iso/"*.iso)
echo "Build complete!"
echo "ISO location: $iso_file"
echo "Ready for Ventoy or dd to USB"
else
echo "Build failed - no ISO file found"
exit 1
fi