# Configuration
Profiles, system prompts, file sandboxing, and chat parameter tuning — all in one place.
## Opening Settings
Click the Settings gear icon (⚙) in the top-right toolbar to open the Settings dialog. It contains four tabs:
## Profiles
Profiles are saved configurations you can switch between instantly. Each profile has its own:
| Setting | Description |
|---|---|
| `name` | Display name shown in the profile switcher |
| `model` | Default Ollama model to preselect for this profile |
| `systemPrompt` | System message sent to the model at the start of every conversation |
| `allowedDirs` | Folders the AI can read/write (see File Access below) |
| `chatParams` | Default chat parameter preset for new conversations |
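Put together, a saved profile corresponds to a record like the following (an illustrative sketch; the actual storage format and values are assumptions):

```json
{
  "name": "Writing Assistant",
  "model": "llama3.1:8b",
  "systemPrompt": "You are a concise technical writing assistant.",
  "allowedDirs": ["~/Documents/LexiChat"],
  "chatParams": { "temperature": 0.7, "top_p": 0.9 }
}
```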
## System Prompt
The system prompt is a hidden instruction sent to the model before your first message in every new conversation. Use it to set the AI's role, tone, response language, or any persistent instructions.
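For example, a profile tuned for short, consistent answers might use a system prompt like (illustrative only):

```text
You are a concise technical writing assistant. Answer in British
English, keep responses under three paragraphs, and ask a clarifying
question when a request is ambiguous.
```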
## File Access (Allowed Directories)
LexiChat restricts file tools to a whitelist of directories. The AI cannot read or write files outside these paths — this is enforced in the Rust backend, not just the prompt.
1. Open Settings → select your profile
2. Under Allowed Directories, click Add folder
3. A system folder picker opens — navigate to the desired folder and confirm
4. The path appears in the list; click × to remove it later
Avoid adding broad locations such as your home directory (~) to the allowed list. Instead, add specific working folders (e.g. ~/Documents/LexiChat) to minimise the blast radius of accidental writes.
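Conceptually, a backend check like this amounts to path canonicalisation plus a prefix test. A minimal Rust sketch (illustrative only; the function name and details are assumptions, not LexiChat's actual code):

```rust
use std::path::{Path, PathBuf};

/// Return true only if `requested`, after resolving symlinks and `..`
/// components, lies inside one of the whitelisted directories.
/// Canonicalisation fails for paths that don't exist, which we treat
/// as "not allowed".
fn is_path_allowed(requested: &Path, allowed_dirs: &[PathBuf]) -> bool {
    let resolved = match requested.canonicalize() {
        Ok(p) => p,
        Err(_) => return false,
    };
    allowed_dirs.iter().any(|dir| {
        dir.canonicalize()
            .map(|d| resolved.starts_with(&d))
            .unwrap_or(false)
    })
}
```

The important property is that the check runs on the *resolved* path, so symlinks or `../` tricks cannot escape the whitelist.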
## Chat Parameters (Per-Conversation)
The sliders icon next to the attach button opens a quick-access panel for adjusting the current conversation's behaviour. Changes apply immediately to the next message you send and reset when you start a new chat.
## Advanced Parameters
Click "Advanced settings…" at the bottom of the chat params panel to access raw model parameters. Most map directly to the Ollama API's `options` object; `system_prompt_override` and `keep_alive` are applied outside it.
| Parameter | Range | What it does |
|---|---|---|
| `temperature` | 0–2 | Controls randomness. Lower = more deterministic, higher = more creative. |
| `top_p` | 0–1 | Nucleus sampling. Limits token selection to the top-P probability mass. |
| `top_k` | 1–100 | Limits selection to the top K most likely tokens at each step. |
| `repeat_penalty` | 0.5–2 | Penalises recently used tokens to reduce repetition. |
| `seed` | integer | Fix a seed for reproducible outputs. Leave blank for random. |
| `num_ctx` | tokens | Override the context window size in tokens. |
| `num_predict` | tokens | Maximum number of tokens to generate in the response. |
| `system_prompt_override` | text | Replace the profile system prompt just for this conversation. |
| `keep_alive` | e.g. "5m" | How long Ollama keeps the model loaded after the last request. |
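When a message is sent, these settings end up in an Ollama `/api/chat` request shaped roughly like this (a sketch with example values; note that `keep_alive` is a top-level field in the Ollama API, and the system prompt travels as a `system` message rather than inside `options`):

```json
{
  "model": "llama3.1:8b",
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "options": {
    "temperature": 0.8,
    "top_p": 0.9,
    "top_k": 40,
    "repeat_penalty": 1.1,
    "seed": 42,
    "num_ctx": 4096,
    "num_predict": 512
  },
  "keep_alive": "5m"
}
```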
## Default Overrides
Go to Settings → Defaults to set the chat parameter defaults that apply when you start a new conversation (or when a profile doesn't specify its own). These are the same controls as the per-conversation panel, but they're saved persistently and used as the starting point for every new chat.