Jarvis from the Marvel universe
# Add to your Claude Code skills

```shell
git clone https://github.com/codewithbro95/J.A.R.V.I.S
```

Sneak peek at the UI (still in dev) here.
The J.A.R.V.I.S Large Language Model, 100% offline
We fine-tuned Llama2-7b using a custom Jarvis dataset along with various other open datasets from the internet containing Jarvis's dialogue with Stark.

Jarvis is built with privacy in mind: everything runs locally. The fine-tuned model is better at responding like Jarvis and producing replies in the most Jarvis-like tone possible.
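For illustration, a fine-tuning example could be shaped as a prompt/response pair. This is a hedged sketch: the column names, example dialogue, and file name below are our assumptions, not the project's actual schema (the real preparation notebooks live in `/dataset`, which uses Parquet):

```python
import pandas as pd

# Hypothetical dialogue pairs in the style of the Jarvis/Stark transcripts.
pairs = [
    {"prompt": "Jarvis, are you up?", "response": "For you, sir, always."},
    {"prompt": "Run a diagnostic.", "response": "Right away, sir. All systems nominal."},
]

# Collect the pairs into a table for model preparation.
df = pd.DataFrame(pairs)

# The /dataset directory stores training data as Parquet, so a sketch
# might persist examples like this (requires a Parquet engine such as pyarrow):
# df.to_parquet("jarvis_pairs.parquet", index=False)
```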
Try: "What time is it?", "What is today's date?"

Try: "Hey Jarvis, help me search for the best African dish out there"

Try: "Hey Jarvis, play Girls Like You by Maroon 5 on YouTube"

With the vision model (`ollama run llava`), try: "What is this?", "What are you looking at?", "Tell me what you see", "Describe this", or "Describe what you see".
| Directory | Description | Technology |
|-----------|-------------|------------|
| /koki | UI Codebase - JARVIS HUD interface & desktop app (WIP) | React + TypeScript + Electron |
| /modules | Backend functionality - NLP, voice processing, tools | Python |
| /dataset | Training data and model preparation notebooks | Python + Parquet |
| /ollama | Custom model integration and configurations | Go + Ollama |
| Root | Main scripts, configuration, and entry points | Python |
1. You will need Ollama to download and install the model for local use:

```shell
ollama run fotiecodes/jarvis
```

This installs the Jarvis model locally.

2. From here you can already chat with Jarvis from the command line by running the same command, `ollama run fotiecodes/jarvis`, or `ollama run fotiecodes/jarvis:latest` to run the latest stable release.

3. After installing the model and starting the Ollama server, confirm it is working properly, then clone this repository and run the `main.py` file.

4. Check the `.env.example` file and add the necessary environment variables before running.
That's it, you can start talking to Jarvis✨
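Beyond the command line, you can also talk to the locally running model over Ollama's HTTP API, which serves on `localhost:11434` by default. A minimal sketch, assuming the default port and the model name used above (the helper names here are ours, not part of this repo):

```python
import json
import urllib.request

# Ollama's default local generate endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "fotiecodes/jarvis") -> dict:
    """Build a non-streaming generate request for the local Ollama server."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_jarvis(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
# print(ask_jarvis("Hey Jarvis, what time is it?"))
```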
Please go to CONTRIBUTOR.md for more info.
To-Do:

- Llama-2-7b-chat-jarvis (WIP)
- Offline TTS: we switched from Kokoro to LuxTTS with a custom module, JarvisLuxTTS, an amazing low-latency TTS (still working on this)
Note: after cloning the repo, run the following command to install all the necessary libraries. It installs from the root directory and every subdirectory:

```shell
find . -name "requirements.txt" | while read req; do echo "Installing from $req..."; pip install -r "$req"; done
```
Note: the voice name is already set in the `.env` file (the actual Jarvis voice will be coming soon); this is the closest voice we have to Jarvis's at the moment.
Well, simple. I have always wanted to have my very own Jarvis, and I am not talking about a Siri clone or a Google Home assistant clone. I am talking about my very own Jarvis: one that talks like Jarvis, responds like Jarvis, and feels like Jarvis in the most accurate way possible.
Special thanks to:
This project is licensed under the MIT License - see the LICENSE file for details. 📄
If you have any questions, suggestions, or need assistance, please open an issue:)
Disclaimer: Jarvis can make mistakes. Consider checking important information.