Local-first, open-source AI assistant for your data. Unify tasks, notes, docs, photos, and bookmarks. Private, self-hosted, and extensible via APIs.
> [!IMPORTANT]
> **Pre-release / Development Status**
>
> Eclaire is currently in pre-release and under active development. Expect frequent updates, breaking changes, and evolving APIs/configuration. If you deploy it, please back up your data regularly and review release notes carefully before upgrading.
> [!WARNING]
> **Security Warning**
>
> Do NOT expose Eclaire directly to the public internet. This project is designed to be self-hosted with privacy and security in mind, but it is not hardened for direct exposure. We strongly recommend placing it behind additional security layers such as:
>
> - Tailscale or other private networks/VPNs
> - Cloudflare Tunnels
> - A reverse proxy with authentication
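As one concrete illustration of the reverse-proxy option, here is a minimal Caddy sketch with basic authentication. The hostname is a placeholder, and the upstream port assumes Eclaire's web UI on `localhost:3000`:

```
# Caddyfile sketch (hypothetical hostname; Eclaire web UI assumed on localhost:3000)
eclaire.example.com {
	basic_auth {
		# generate the hash with: caddy hash-password
		admin <bcrypt-hash>
	}
	reverse_proxy localhost:3000
}
```

An nginx or Traefik setup with an auth layer works just as well; the point is that something authenticates requests before they reach Eclaire.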
Eclaire is a local-first, open-source AI that organizes, answers, and automates across tasks, notes, documents, photos, bookmarks and more.
There are a lot of existing frameworks and libraries enabling various AI capabilities, but few deliver a complete product that lets users get things done. Eclaire assembles proven building blocks into a cohesive, privacy-preserving solution you can run yourself.
With AI gaining rapid adoption, there is a growing need for alternatives to closed ecosystems and hosted models, especially for personal, private, or otherwise sensitive data.
`setup.sh` flow, plus a streamlined `compose.yaml`. See the CHANGELOG for full details.
```sh
mkdir eclaire && cd eclaire
curl -fsSL https://raw.githubusercontent.com/eclaire-labs/eclaire/main/setup.sh | sh
```
The script will:
After setup completes:
```sh
# 1. Start your LLM servers (in separate terminals)
# Models download automatically on first run if not already cached
llama-server -hf unsloth/Qwen3-14B-GGUF:Q4_K_XL --ctx-size 16384 --port 11500
llama-server -hf unsloth/gemma-3-4b-it-qat-GGUF:Q4_K_XL --ctx-size 16384 --port 11501

# 2. Start Eclaire
docker compose up -d
```
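Model loading can take a moment on first run. A small polling helper (a sketch, not part of Eclaire's tooling) can confirm that the llama.cpp servers are answering their `/health` endpoint before you continue:

```sh
# Hypothetical readiness helper: poll a URL until it responds, or give up.
wait_for_url() {
  url=$1
  tries=${2:-30}
  i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# Example (uncomment once the servers above are starting):
# wait_for_url http://localhost:11500/health && wait_for_url http://localhost:11501/health
```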
Complete the setup wizard — open http://localhost:3000 in your browser, or run ./eclaire onboard for CLI setup. The wizard guides you through admin account creation, AI provider configuration, and model selection.
Configuration lives in two places:

- `.env` - secrets, database settings, ports

To stop Eclaire:

```sh
docker compose down
```
Eclaire uses AI models for two purposes:
Apple Silicon: Mac users can leverage MLX for optimized local inference. See the configuration guide for details.
Use the CLI to manage models:
```sh
./eclaire model list
```
See AI Model Configuration for detailed setup and model recommendations.
Eclaire follows a modular architecture with clear separation between the frontend, backend API, background workers, and data layers.