[zix790prors] More local LLM updates

Use qwen3:30b explicitly. The default "qwen3" tag from Ollama was apparently
pulling a fairly outdated model (qwen3:8b); qwen3:4b and qwen3:30b are the newest.
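
A minimal sketch of pulling the explicit tags instead of relying on the bare "qwen3" alias; this assumes a local ollama install and uses only standard `ollama` CLI subcommands (`pull`, `list`):

```shell
# Pull the specific tags so the alias resolution can't hand back an older model.
ollama pull qwen3:30b
ollama pull qwen3:4b

# Verify which tags are now present locally.
ollama list
```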

Also sets up some gptel defaults that have been useful.
2025-09-11 08:39:36 -07:00
parent 6bf0a37533
commit 56b1111f54
2 changed files with 14 additions and 8 deletions

@@ -83,12 +83,16 @@
 (setq! gptel-api-key (my/get-rbw-password "openai-api-key-chatgpt-el")
        gptel-default-mode 'org-mode
        gptel-use-tools t
-       gptel-confirm-tool-calls 'always)
+       gptel-confirm-tool-calls 'always
+       gptel-include-reasoning 'ignore
+       gptel-model "qwen3:30b")
-(gptel-make-ollama "Ollama-Local"
-  :host "localhost:11434"
-  :stream t
-  :models '(deepseek-r1 deepseek-r1-fullctx qwen3 qwen3-coder llama3.1 qwen2.5-coder mistral-nemo gpt-oss))
+;; Set default backend to be Ollama-Local
+(setq! gptel-backend
+       (gptel-make-ollama "Ollama-Local"
+         :host "localhost:11434"
+         :stream t
+         :models '(deepseek-r1 deepseek-r1-fullctx qwen3:30b qwen3:4b llama3.1 qwen2.5-coder mistral-nemo gpt-oss)))
 ;; Define custom tools
 (gptel-make-tool