Talk to your Mac, query your docs, no cloud required. On-device voice AI + RAG
# Add to your Claude Code skills

```shell
git clone https://github.com/RunanywhereAI/RCLI
```

RCLI is an on-device voice AI for macOS: a complete STT + LLM + TTS pipeline running natively on Apple Silicon — 38 macOS actions via voice, local RAG over your documents, sub-200ms end-to-end latency. No cloud, no API keys.
Powered by MetalRT, a proprietary GPU inference engine built by RunAnywhere, Inc. specifically for Apple Silicon.
<table>
  <tr>
    <td width="50%" align="center">
      <strong>Voice Conversation</strong><br>
      <em>Talk naturally — RCLI listens, understands, and responds on-device.</em><br><br>
      <a href="https://youtu.be/qeardCENcV0">
        <img src="assets/demos/demo1-voice-conversation.gif" alt="Voice Conversation Demo" width="100%">
      </a>
      <br><sub>Click for full video with audio</sub>
    </td>
    <td width="50%" align="center">
      <strong>App Control</strong><br>
      <em>Control Spotify, adjust volume — 38 macOS actions by voice.</em><br><br>
      <a href="https://youtu.be/eTYwkgNoaKg">
        <img src="assets/demos/demo2-spotify-volume.gif" alt="App Control Demo" width="100%">
      </a>
      <br><sub>Click for full video with audio</sub>
    </td>
  </tr>
  <tr>
    <td width="50%" align="center">
      <strong>Models</strong><br>
      <em>Browse models, hot-swap LLMs — all from the TUI.</em><br><br>
      <a href="https://youtu.be/HD1aS37zIGE">
        <img src="assets/demos/demo3-benchmarks.gif" alt="Models & Benchmarks Demo" width="100%">
      </a>
      <br><sub>Click for full video with audio</sub>
    </td>
    <td width="50%" align="center">
      <strong>Document Intelligence (RAG)</strong><br>
      <em>Ingest docs, ask questions by voice — ~4ms hybrid retrieval.</em><br><br>
      <a href="https://youtu.be/8FEfbwS7cQ8">
        <img src="assets/demos/demo4-rag-documents.gif" alt="RAG Demo" width="100%">
      </a>
      <br><sub>Click for full video with audio</sub>
    </td>
  </tr>
</table>

Real-time screen recordings on Apple Silicon — no cloud, no edits, no tricks.
> [!IMPORTANT]
> Requires macOS 13+ on Apple Silicon. The MetalRT engine requires M3 or later; M1/M2 Macs fall back to llama.cpp automatically.
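The M3 cutoff above can be sketched as a small shell function. This is only an illustration of the documented rule, not RCLI's actual detection code; on macOS you would feed it the real chip string from `sysctl` (shown as a comment):

```shell
# Sketch of the documented engine-selection rule (assumption: RCLI's real
# detection logic differs; this only mirrors the rule stated above).
pick_engine() {
  case "$1" in
    *M1*|*M2*) echo "llama.cpp" ;;   # M1/M2: automatic fallback engine
    *M[3-9]*)  echo "MetalRT"  ;;    # M3 or later: MetalRT
    *)         echo "llama.cpp" ;;   # unknown chip: safe fallback
  esac
}

# On macOS, the real chip string comes from:
#   pick_engine "$(sysctl -n machdep.cpu.brand_string)"
pick_engine "Apple M2 Max"   # prints: llama.cpp
pick_engine "Apple M3 Pro"   # prints: MetalRT
```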
One command:

```shell
curl -fsSL https://raw.githubusercontent.com/RunanywhereAI/RCLI/main/install.sh | bash
```
Or via Homebrew:
```shell
brew tap RunanywhereAI/rcli https://github.com/RunanywhereAI/RCLI.git
brew install rcli
rcli setup   # required — downloads AI models (~1GB, one-time)
```
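Before running setup, you can confirm the binary landed on your `PATH`. This is a generic shell sketch using only standard tooling; `rcli` is the binary name from the commands above, and nothing here assumes any RCLI-specific flags:

```shell
# Generic post-install sanity check: is a command available on PATH?
check_install() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $(command -v "$1")"
  else
    echo "missing: $1 is not on PATH (open a new shell or re-run the installer)" >&2
    return 1
  fi
}

check_install rcli || true   # prints its location, or a hint if install failed
```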
...