Features Overview

A complete tour of what LexiChat can do for you — out of the box, no cloud required.

Chat & Streaming Responses

Core

Every response streams token-by-token directly from the local Ollama server — there's no buffering and no waiting for a full response before you start reading. The chat interface renders Markdown (headings, bold, code blocks, lists) as it arrives.

  • Full Markdown rendering with syntax-highlighted code blocks
  • Copy individual messages or entire conversations
  • Full conversation history maintained across the session
  • New Chat button to clear context and start fresh
  • Stop generation mid-stream
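Ollama's streaming chat API delivers the response as newline-delimited JSON chunks, each carrying a fragment of the assistant message, which is what lets a client render tokens as they arrive. A minimal sketch of that parsing loop, assuming the field names of Ollama's public `/api/chat` format (the sample lines below are simulated, not a live request, and this is not LexiChat's actual code):

```python
import json

def stream_chat_content(lines):
    """Yield content fragments from Ollama's newline-delimited JSON stream."""
    for raw in lines:
        if not raw.strip():
            continue
        chunk = json.loads(raw)
        # Each chunk carries a partial assistant message until "done" is true.
        yield chunk.get("message", {}).get("content", "")
        if chunk.get("done"):
            break

# Simulated stream; a real one comes from POST /api/chat with "stream": true.
sample = [
    '{"message": {"content": "Hel"}, "done": false}',
    '{"message": {"content": "lo!"}, "done": true}',
]
print("".join(stream_chat_content(sample)))  # prints "Hello!"
```

Because each fragment is yielded as soon as its line is parsed, the UI can append text (and re-render Markdown) incrementally instead of waiting for the full reply.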

File Tools

Built-in tools

LexiChat can read, write, and list files on your machine. You control which folders it has access to in Settings — by default it can only access directories you explicitly allow.

read_file
Read any file in your allowed directories. Great for asking the AI to summarise, translate, or analyse a document.
write_file
Have the AI draft and save a document, create a script, or save output to a file. You choose the destination folder.
list_directory
Browse folder contents so the AI can find the right file to read or reference in its answer.
Sandboxed access: File tools are strictly limited to the directories you configured. The AI cannot read or write outside those paths — any attempt is blocked at the Rust layer.

Vision / Image Analysis

Requires a vision-capable model

Attach images directly to your message using the paperclip icon. With a vision model like llava or gemma3, the AI can describe, analyse, extract text from, or answer questions about the image.

  • Supports JPEG, PNG, WebP, GIF
  • Multiple images per message
  • Images are sent locally to Ollama — never uploaded to any cloud
  • Preview thumbnails shown before sending
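Under Ollama's chat API, attached images travel as base64 strings in a message's `images` field, which is why they never have to leave the machine. A hedged sketch of assembling such a message (the helper name and the fake image bytes are illustrative, not LexiChat's internals):

```python
import base64
import json

def vision_message(prompt: str, image_bytes: bytes) -> dict:
    """Build one /api/chat message carrying an inline image for a vision model."""
    return {
        "role": "user",
        "content": prompt,
        # Ollama expects raw base64 (no "data:" URI prefix) in the images list.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }

msg = vision_message("What is in this picture?", b"\x89PNG...")  # fake bytes
print(json.dumps(msg)[:60])
```

Multiple attachments per message simply mean multiple entries in that `images` list.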

Profiles & Personas

Settings → Profiles

Profiles let you instantly switch the AI's persona, default model, system prompt, and parameter defaults. Create a profile for each context you work in:

  • 💻 Developer — Technical system prompt, code-optimised model, long context window, Precise style
  • ✍️ Writer — Creative writing persona, higher temperature, longer responses, Creative style
  • 📊 Analyst — Data-focused prompt, file access enabled, Precise style, medium responses
  • 🌍 Translator — Multilingual system prompt, balanced style, translation-focused model

Each profile stores its own system prompt, allowed file directories, default model, and chat parameter defaults. Switch profiles instantly from the top bar.
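A profile is essentially a named bundle of those settings. A minimal sketch of what such a record might hold — all field names, model names, and values here are hypothetical, not LexiChat's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    # Illustrative fields mirroring what the text says a profile stores.
    name: str
    model: str
    system_prompt: str
    allowed_dirs: list = field(default_factory=list)
    params: dict = field(default_factory=dict)

PROFILES = {
    "Developer": Profile("Developer", "qwen2.5-coder", "You are a senior engineer.",
                         ["/home/user/projects"], {"temperature": 0.2}),
    "Writer": Profile("Writer", "llama3.1", "You are a creative co-author.",
                      params={"temperature": 1.1}),
}

def switch_profile(name: str) -> Profile:
    """Look up a profile; the UI would apply its model and params to the session."""
    return PROFILES[name]

print(switch_profile("Developer").model)  # prints "qwen2.5-coder"
```

Switching profiles is then just swapping which bundle the active session reads its defaults from.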

Model & Parameter Control

Chat params button · Settings → Defaults

Fine-tune how the AI responds with a two-tier parameter system — simple presets for everyday use, raw numbers for power users.

Tier 1 — Quick presets

  • Response Style — Precise (factual, low creativity) · Balanced (default) · Creative (higher variance)
  • Response Length — Short (concise answers) · Medium (balanced) · Long (comprehensive) · Auto (model decides)
  • Memory — Standard (model default) · Extended (8K context)

Tier 2 — Advanced (click "Advanced settings…")

  • Temperature: 0–2
  • Top-P: 0–1
  • Top-K: 1–100
  • Repeat Penalty: 0.5–2
  • Seed: integer
  • Context size (num_ctx): tokens
  • Max tokens (num_predict): tokens
  • System prompt override: text
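One plausible way the two tiers compose is for each preset to expand into Ollama's raw option names, with any Tier-2 value overriding the preset. A sketch with illustrative numbers — these are not LexiChat's actual defaults, though the option names (`num_ctx`, `num_predict`, `top_p`, `repeat_penalty`) are Ollama's:

```python
# Hypothetical preset tables; the numeric values are illustrative.
STYLE = {
    "Precise":  {"temperature": 0.2, "top_p": 0.9},
    "Balanced": {"temperature": 0.7, "top_p": 0.9},
    "Creative": {"temperature": 1.2, "top_p": 0.95},
}
LENGTH = {
    "Short":  {"num_predict": 256},
    "Medium": {"num_predict": 1024},
    "Long":   {"num_predict": 4096},
    "Auto":   {},  # let the model decide
}
MEMORY = {"Standard": {}, "Extended": {"num_ctx": 8192}}

def build_options(style="Balanced", length="Auto", memory="Standard", **advanced):
    """Merge Tier-1 presets, letting Tier-2 advanced values win on conflict."""
    opts = {**STYLE[style], **LENGTH[length], **MEMORY[memory]}
    opts.update(advanced)  # e.g. seed=42, repeat_penalty=1.1
    return opts

print(build_options("Precise", "Short", "Extended", seed=42))
```

Layering dicts this way keeps the presets simple for everyday use while still letting a power user pin any individual knob.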

Privacy by Design

Core principle

LexiChat is built around local-first principles. Your conversations, files, and data stay on your machine.

  • Models run locally via Ollama — no API keys required, no cloud inference
  • Conversations are not stored or logged anywhere beyond your RAM/session
  • Web search uses DuckDuckGo with no tracking or user profiling
  • File access is sandboxed to explicit directories you choose — no silent reads