John Ogle 10efafd92e feat(local-inference): replace ollama with llama-swap + llama.cpp on zix790prors
- Add local-inference NixOS role using llama-swap (from nixpkgs-unstable)
  with llama.cpp (CUDA-enabled, from nixpkgs-unstable)
- Serves Qwen3-30B-A3B via HuggingFace auto-download with --cpu-moe
- Add nixosSpecialArgs for nixpkgs-unstable module access
- Configure opencode with llama-local provider pointing to zix790prors:8080
- Update gptel from Ollama backend to OpenAI-compatible llama-swap backend
- Remove ollama service from zix790prors
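The role described above could be sketched roughly as the NixOS module below. This is an illustrative guess, not the repo's actual code: the `unstable` specialArg, the model name, the service wiring, and the HuggingFace repo placeholder are all assumptions. llama-swap itself reads a YAML config that maps model names to llama-server commands and proxies OpenAI-compatible requests on its listen port, starting and stopping backends on demand.

```nix
# Hypothetical sketch of the local-inference role; option and argument
# names are assumptions for illustration, not the repo's actual config.
{ config, pkgs, unstable, ... }:
let
  # llama-swap config: each model name maps to a llama-server command.
  # llama-swap substitutes ${PORT} with the port it picks for the backend.
  swapConfig = pkgs.writeText "llama-swap.yaml" ''
    models:
      "qwen-moe":
        cmd: |
          ${unstable.llama-cpp}/bin/llama-server
            --port ''${PORT}
            -hf REPLACE_WITH_HF_REPO  # HuggingFace auto-download (repo not given in the commit)
            --cpu-moe                 # keep MoE expert weights in system RAM, rest on GPU
  '';
in
{
  systemd.services.llama-swap = {
    description = "llama-swap OpenAI-compatible model proxy";
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      # Clients (opencode, gptel) point at this host on port 8080.
      ExecStart = "${unstable.llama-swap}/bin/llama-swap --listen :8080 --config ${swapConfig}";
      DynamicUser = true;
    };
  };
}
```

Since llama-swap speaks the OpenAI-compatible API, both opencode and gptel can target it as a generic OpenAI endpoint at `http://zix790prors:8080`, which is why the Ollama-specific backend and service can be dropped.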
2026-04-16 15:20:37 -07:00
2026-04-08 11:47:11 -07:00
24 MiB
Languages
Nix 80.4%
Emacs Lisp 10.1%
Shell 6.2%
Python 3.3%