Why I Chose Tauri for Building a Local LLM App on macOS / Windows
I needed to build a desktop app that runs entirely locally, powered by a local LLM. The target platforms were macOS and Windows. When I consulted Claude Code on framework selection, it recommended Tauri.
I had never written Rust before, but to cut to the chase — as long as you let AI agents write the code, having no Rust experience is no problem at all.
Why Tauri
What AI Recommended
The main reasons it was recommended were as follows:
- Small binary size — Significantly lighter than Electron. Since a local LLM already consumes substantial resources, the app itself should be as lightweight as possible
- Uses OS-native WebView — No bundled Chromium means lower memory consumption
- Rust backend — System-level tasks like LLM process management and filesystem operations can be written safely
- Security model — Even for a local app, having fine-grained permission control is a plus
Comparison with Other Cross-Platform Tools
For a detailed comparison with other frameworks such as Electron, Flutter Desktop, and React Native, see Cross-Platform Development Tools Comparison. Since memory is a precious resource when running a local LLM, Tauri’s lightweight footprint was the deciding factor.
Tech Stack
Here is the tech stack I adopted:
| Layer | Technology |
|---|---|
| Frontend | React 19 / TypeScript / Vite |
| Desktop Shell | Tauri 2 |
| Backend | Rust (orchestration, storage, policy enforcement) |
| Local LLM | llama.cpp + Qwen2.5-7B-Instruct (GGUF Q4_K_M) |
| Database | SQLite (rusqlite) |
The React + Vite + TypeScript frontend stack was also chosen by the AI.
Architecture
Overall Structure
```mermaid
graph TB
    A[Frontend - WebView<br/>React / Vite / TypeScript] -- "invoke (IPC)" --> B[Tauri Commands<br/>Rust]
    B -- "spawn process" --> C[llama.cpp<br/>Local LLM]
    B -- "read/write" --> D[(SQLite)]
    B -- "emit events" --> A
```
The frontend calls Rust functions through Tauri commands (#[tauri::command]). The Rust side orchestrates external processes and pushes progress back to the frontend in real time as events.
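To make the shape of that boundary concrete, here is a minimal sketch of a command handler. It is plain Rust so it stands alone without the tauri crate; in the real app the function carries the #[tauri::command] attribute and is registered with tauri::generate_handler!, and the stand-in logic inside create_job is purely illustrative:

```rust
// Illustrative sketch of a Tauri command handler's shape. In the actual app
// this function would carry #[tauri::command] and be registered via
// tauri::generate_handler!; the attribute is omitted here so the snippet
// compiles on its own. The body is hypothetical stand-in logic.
fn create_job(input_path: String) -> Result<u64, String> {
    // Commands typically return Result; Tauri serializes Ok/Err back to the
    // WebView, so the frontend sees either a value or an error message.
    if input_path.is_empty() {
        return Err("input_path must not be empty".into());
    }
    // A real command would insert a job row into SQLite and return its id;
    // deriving a stand-in id from the path length keeps the sketch
    // self-contained.
    Ok(input_path.len() as u64)
}
```

The important point is the signature: plain serializable inputs in, a Result out, with all side effects kept on the Rust side of the IPC boundary.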
Tauri Command Design
17 commands are registered on the Rust side. Grouped by responsibility:
| Category | Example Commands |
|---|---|
| Setup | run_first_time_setup, install_models, run_self_test |
| Job Management | create_job, run_job, cancel_job, load_job, export_job |
| Settings & Glossary | get_settings, update_settings, upsert_glossary_term |
| Diagnostics | get_app_summary, get_runtime_contract, get_setup_status |
The frontend calls these via invoke() from @tauri-apps/api, and receives progress for long-running tasks through listen(). Since it is event-driven rather than polling, the UI updates in real time.
Sidecar Design
External binaries (such as llama.cpp) are resolved through a 3-tier fallback:
- Environment variable override — e.g. LLAMA_CPP_BIN
- Bundled binaries — under binaries/macos/ or binaries/windows/
- System PATH — used as the final fallback
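This fallback order can be sketched with only the standard library. The function name resolve_binary and its parameters are illustrative; the real resolution would also account for Tauri's platform-specific resource directory:

```rust
use std::env;
use std::path::PathBuf;

// Resolve an external binary via the 3-tier fallback described above.
// `env_key`, `bundled_dir`, and the function name are illustrative, not the
// app's actual configuration.
fn resolve_binary(env_key: &str, bundled_dir: &str, name: &str) -> PathBuf {
    // 1. Explicit override, e.g. LLAMA_CPP_BIN=/opt/llama/llama-server
    if let Ok(path) = env::var(env_key) {
        return PathBuf::from(path);
    }
    // 2. Binary shipped with the app (placed there via bundle.resources)
    let bundled = PathBuf::from(bundled_dir).join(name);
    if bundled.exists() {
        return bundled;
    }
    // 3. Bare name: the OS resolves it against PATH at spawn time
    PathBuf::from(name)
}
```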
This means system-installed binaries are used during development, while bundled binaries are used in distribution. Tauri’s bundle.resources setting allows bundling resources into the app.
```json
{
  "bundle": {
    "resources": [
      "../resources",
      "../binaries"
    ]
  }
}
```
Local LLM Integration
I run Qwen2.5-7B-Instruct (GGUF Q4_K_M format, approximately 4.7 GB) locally using llama.cpp.
A notable feature is the use of JSON schema-constrained generation. The --json-schema option in llama.cpp enforces the output structure, and the Rust side performs validation. Retry logic is applied for malformed outputs.
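The retry path can be sketched generically. Here generate and validate stand in for the llama.cpp call and the schema check, and generate_with_retry is an illustrative helper rather than the app's actual API:

```rust
// Call a generator up to `max_attempts` times, retrying whenever the output
// fails validation. `generate` stands in for the llama.cpp request and
// `validate` for the JSON-schema check; both names are illustrative.
fn generate_with_retry<F, V>(mut generate: F, validate: V, max_attempts: u32) -> Option<String>
where
    F: FnMut() -> String,
    V: Fn(&str) -> bool,
{
    for _ in 0..max_attempts {
        let output = generate();
        if validate(&output) {
            return Some(output);
        }
        // Malformed output: fall through and generate again.
    }
    None
}
```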
Long texts are chunked into 6,000-character segments, each summarized independently, then consolidated in a final merge pass. This design avoids token limits while producing structured output.
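The chunking step can be sketched like this. chunk_text is an illustrative helper, assuming a plain character-count split; the real implementation may additionally respect sentence boundaries, which this sketch does not:

```rust
// Split text into chunks of at most `max_chars` characters. Working on chars
// rather than bytes keeps multibyte text (e.g. Japanese) from being split
// mid-character. Illustrative helper, not the app's actual code.
fn chunk_text(text: &str, max_chars: usize) -> Vec<String> {
    let chars: Vec<char> = text.chars().collect();
    chars
        .chunks(max_chars)
        .map(|chunk| chunk.iter().collect())
        .collect()
}
```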
Why Not Knowing Rust Is Not a Problem
Honestly, Rust was the biggest concern when choosing Tauri. However, once development started, this concern was largely resolved.
The Era of AI Agents Writing Code
Currently, Codex writes most of the code. The workflow looks like this:
- Describe requirements in natural language — e.g. “Send a request to the local llama.cpp and generate with JSON schema constraints”
- Codex generates Rust code — including Tauri command definitions, error handling (thiserror), and type definitions
- Build and verify — if there are compile errors, ask Codex to fix them
- Fine-tune with Claude Code — confirm code intent, discuss refactoring
Since the Rust compiler is strict, AI-generated code that compiles successfully comes with a certain level of quality assurance. This is a reassurance that dynamically-typed languages cannot offer.
Skills Required of Me
You do not need to write Rust, but I have found the following skills to be necessary:
- Giving precise instructions to AI — the ability to describe what you want to build concretely
- Reviewing generated code — understanding structure and intent, even without being able to read every line
- Understanding Tauri’s architecture — knowing the boundary between the frontend (WebView) and the backend (Rust)
- Debugging triage — determining whether a problem is on the frontend or backend side
In other words, “the ability to design and make decisions” matters more than “the ability to write code.”
Conclusion
I adopted Tauri for building a cross-platform app powered by a local LLM. I consulted an AI agent for the selection, and AI agents write the code. Even without being able to write Rust yourself, development can proceed as long as you can handle design and review.
In the actual app, the Rust backend orchestrates external processes like llama.cpp, manages jobs with SQLite, and communicates with the frontend through events. These system-level tasks are exactly where Rust’s safety shines, and choosing Tauri was the right call.
In an era where AI writes the code, the criterion for framework selection has shifted from “can I write it” to “is it the optimal technology.” Tauri is a solid choice for building lightweight, secure desktop apps, and it pairs well with local LLMs. AI bridges the language barrier.
That’s all from the field — you can now build Tauri apps even if you can’t write Rust.