Spark is a self-evolving intelligent companion: an intelligence layer for AI agents.

# Add to your Claude Code skills
git clone https://github.com/vibeforge1111/vibeship-spark-intelligence
Spark acts automatically in these situations (learnings are written under .learnings/):

| Situation | Action |
|-----------|--------|
| Session start | Spark auto-loads relevant cognitive context |
| Tool fails | Spark captures error pattern, suggests recovery |
| Pattern validated 3+ times | Consider promotion to CLAUDE.md |
| Recurring workflow identified | Extract as skill |
| Mind available | Sync for cross-project learning |
Spark learns in these categories:
| Category | What It Learns |
|----------|----------------|
| self_awareness | Overconfidence, blind spots, struggle areas |
| user_understanding | Preferences, expertise, communication style |
| reasoning | WHY things work, not just that they work |
| context | When patterns apply vs don't apply |
| wisdom | General principles across contexts |
| meta_learning | How to learn, when to ask vs act |
| communication | Explanation styles that work |
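To make the categories concrete, here is an illustrative record for a single insight. The field names below are assumptions made for this sketch, not the actual schema stored in ~/.spark/cognitive_insights.json:

# Hypothetical shape of one cognitive insight (illustration only, not the real schema)
example_insight = {
    "category": "reasoning",                     # one of the categories above
    "insight": "Read a file before editing it",  # the distilled lesson
    "why": "Prevents content-mismatch errors",   # the WHY, not just the rule
    "context": "File editing workflow",          # when the pattern applies
    "validations": 3,                            # times the pattern held up
    "reliability": 0.8,                          # confidence used for promotion
}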
# Check system status
python cli.py status
# Sync to Mind
python cli.py sync
# Write learnings to markdown
python cli.py write
# Promote ready insights
python cli.py promote
# Sync bootstrap context to platform files
python cli.py sync-context
# Preview/apply decay-based pruning
python cli.py decay
# View recent learnings
python cli.py learnings --limit 20
Learns constantly. Adapts to your flow. Runs 100% on your machine as a local AI companion that turns past work into future-ready behavior. It is designed to be more than a simple learning loop.
You do work -> Spark captures memory -> Spark distills and transforms it -> Spark delivers advisory context -> You act with better context -> Outcomes re-enter the loop
Spark Intelligence is a self-evolving AI companion designed to grow smarter through use. The goal is to keep context, patterns, and practical lessons in a form that your agent can actually use at the right moment.
Prerequisites:
Add to .claude/settings.json:
{
  "hooks": {
    "PostToolUse": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "python /path/to/Spark/hooks/observe.py"
      }]
    }],
    "PostToolUseFailure": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "python /path/to/Spark/hooks/observe.py"
      }]
    }]
  }
}
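Claude Code invokes the hook command with a JSON payload on stdin. As a rough sketch of what a capture hook in the spirit of observe.py might do (the payload field names and the failure check are assumptions for illustration; the queue path comes from the file table below, and the real logic lives in hooks/observe.py):

# Minimal sketch of a capture hook: read the hook payload from stdin and
# append a compact event to a local JSONL queue.
import json, sys, time, pathlib

payload = json.load(sys.stdin)                 # hook payload from Claude Code
event = {
    "ts": time.time(),
    "tool": payload.get("tool_name"),          # assumed field name
    "input": payload.get("tool_input"),        # assumed field name
    "ok": "error" not in payload,              # assumed failure signal
}
queue = pathlib.Path.home() / ".spark" / "queue" / "events.jsonl"
queue.parent.mkdir(parents=True, exist_ok=True)
with queue.open("a") as f:
    f.write(json.dumps(event) + "\n")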
Spark automatically syncs learnings to workspace files:
- AGENTS.md — Workflow patterns
- TOOLS.md — Tool insights
- SOUL.md — User preferences

When Mind is running (python -m mind.lite_tier), Spark syncs learnings for:
| File | Contents |
|------|----------|
| .learnings/LEARNINGS.md | All cognitive insights |
| .learnings/ERRORS.md | Error patterns and recoveries |
| ~/.spark/cognitive_insights.json | Raw insight data |
| ~/.spark/queue/events.jsonl | Event queue |
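You can also peek at these files directly. A small sketch, assuming ~/.spark/cognitive_insights.json is a single JSON document and the queue holds one JSON object per line (the exact schemas may differ):

# Quick look at Spark's local state files (paths from the table above).
import json, pathlib

home = pathlib.Path.home() / ".spark"

insights_path = home / "cognitive_insights.json"
if insights_path.exists():
    insights = json.loads(insights_path.read_text())
    print("insight entries:", len(insights))

queue_path = home / "queue" / "events.jsonl"
if queue_path.exists():
    lines = queue_path.read_text().splitlines()
    print("queued events:", len(lines))
    for line in lines[-3:]:        # show the last few captured events
        print(json.loads(line))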
Insights are auto-promoted when:
Promotion targets:
- CLAUDE.md — Wisdom, reasoning, context rules
- AGENTS.md — Meta-learning, self-awareness
- TOOLS.md — Tool-specific context rules
- SOUL.md — User understanding, communication

You can also use the Python API directly:

from lib import (
    get_cognitive_learner,
    sync_all_to_mind,
    write_all_learnings,
    check_and_promote,
)
# Get learner instance
cognitive = get_cognitive_learner()
# Learn something
cognitive.learn_why(
    what_worked="Read before Edit",
    why_it_worked="Prevents content mismatch errors",
    context="File editing workflow"
)
# Sync to Mind
sync_all_to_mind()
# Write to markdown
write_all_learnings()
# Promote proven insights
check_and_promote()
Install options: winget, pip, or curl + bash.

Windows one-command bootstrap (clone + venv + install + start + health):
irm https://raw.githubusercontent.com/vibeforge1111/vibeship-spark-intelligence/main/install.ps1 | iex
Optional re-check (from repo root):
.\.venv\Scripts\python -m spark.cli up
.\.venv\Scripts\python -m spark.cli health
If you already cloned the repo, run the local bootstrap:
.\install.ps1
If you are running from cmd.exe or another shell:
powershell -NoProfile -ExecutionPolicy Bypass -Command "irm https://raw.githubusercontent.com/vibeforge1111/vibeship-spark-intelligence/main/install.ps1 | iex"
Mac/Linux one-command bootstrap (clone + venv + install + start):
curl -fsSL https://raw.githubusercontent.com/vibeforge1111/vibeship-spark-intelligence/main/install.sh | bash
Then verify runtime readiness (second command, from repo root):
./.venv/bin/python -m spark.cli up
./.venv/bin/python -m spark.cli health
Mac/Linux manual install:
git clone https://github.com/vibeforge1111/vibeship-spark-intelligence
cd vibeship-spark-intelligence
python3 -m venv .venv && source .venv/bin/activate
python -m pip install -e .[services]
If your system uses PEP 668 / external package management, this avoids the
externally-managed-environment error:
python3 -m venv .venv
source .venv/bin/activate
python -m pip install -e .[services]
Or install the package directly (non-editable):
python -m pip install vibeship-spark-intelligence[services]
python -m spark.cli up
# Check health
python -m spark.cli health
# View what Spark has learned
python -m spark.cli learnings
Windows: run start_spark.bat from the repo root.
Lightweight mode (core only, no Pulse/watchdog): spark up --lite
Spark works with any coding agent that supports hooks or event capture.
| Agent | Integration | Guide |
|-------|------------|-------|
| Claude Code | Hooks (PreToolUse, PostToolUse, UserPromptSubmit) | docs/claude_code.md |
| Codex | Session JSONL hook bridge (shadow/observe rollout) | docs/CODEX_HOOK_BRIDGE_ROLLOUT.md |
| Cursor / VS Code | tasks.json + emit_event | docs/cursor.md |
| OpenClaw | Session JSONL tailer | docs/openclaw/ |
The Pulse dashboard now lives in the separate vibeship-spark-pulse app (spark_pulse.py is a redirector), alongside local Meta-Ralph views. The CLI covers spark status, spark learnings, spark promote, spark up/down, and more.

Your Agent (Claude Code / Cursor / OpenClaw)
-> hooks capture events
-> queue -> bridge worker -> pipeline
-> quality gate (Meta-Ralph) -> cognitive learner
-> distillation -> transformation -> advisory packaging
-> pre-tool advisory surfaced + context files refreshed
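A very rough sketch of the queue-to-learner leg of that flow, assuming the JSONL queue listed earlier and treating the quality gate and learner as opaque placeholders (Meta-Ralph's real scoring and the pipeline's batching are considerably more involved):

# Toy bridge worker: drain queued events, gate them, feed the learner.
# Everything below is illustrative; the real pipeline lives in the Spark package.
import json, pathlib, time

QUEUE = pathlib.Path.home() / ".spark" / "queue" / "events.jsonl"

def quality_gate(event):
    # Placeholder for Meta-Ralph: keep only events worth learning from.
    return event.get("ok") is False or event.get("tool") == "Edit"

def learn(event):
    # Placeholder for the cognitive learner / distillation stages.
    print("learning from:", event.get("tool"))

def drain_once():
    if not QUEUE.exists():
        return
    lines = QUEUE.read_text().splitlines()
    QUEUE.write_text("")               # naive "consume everything" step
    for line in lines:
        event = json.loads(line)
        if quality_gate(event):
            learn(event)

while True:                            # runs forever, like a background worker
    drain_once()
    time.sleep(5)                      # poll interval chosen for the sketch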
Spark ships with an Obsidian integration that turns the entire intelligence pipeline into a human-readable vault you can browse, search, and query — every insight, every decision, every quality verdict, visible in one place.
# From the spark-intelligence repo:
python scripts/generate_observatory.py --force --verbose
This reads your ~/.spark/ state files and generates ~465+ markdown pages in under 1 second.
Default vault location: ~/Documents/Obsidian Vault/Spark-Intelligence-Observatory
To change it, edit observatory.vault_dir in ~/.spark/tuneables.json or config/tuneables.json.
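For example, to point the vault somewhere else you can set that key yourself. A small sketch, assuming tuneables.json is plain JSON and only the observatory.vault_dir key matters here; the target path is just an example:

# Set observatory.vault_dir in ~/.spark/tuneables.json (example path).
import json, pathlib

path = pathlib.Path.home() / ".spark" / "tuneables.json"
config = json.loads(path.read_text()) if path.exists() else {}
config.setdefault("observatory", {})["vault_dir"] = str(
    pathlib.Path.home() / "Notes" / "Spark-Observatory"   # any folder you like
)
path.write_text(json.dumps(config, indent=2))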
Open the Spark-Intelligence-Observatory vault in Obsidian and start at _observatory/flow.md, the main pipeline dashboard.

The vault comes pre-configured with:
If Dataview didn't auto-install from the pre-configured vault:
Dashboard.md queries will now render live tables.

Flow dashboard (_observatory/flow.md): A live Mermaid diagram of the full 12-stage pipeline with embedded metrics — queue depth, processing rate, insight counts, advisory follow rate, and more. Plus a system health table with status badges.
Stage pages (_observatory/stages/): Each pipeline stage gets its own page with health metrics, recent activity, and upstream/downstream links:
| Stage | What it shows |
|-------|---------------|
| Event Capture | Hook heartbeat, session tracking |
| Queue | Pending events, file size, overflow |
| Pipeline | Processing rate, batch size, empty cycles |
| Memory Capture | Importance scores, category distribution |
| Meta-Ralph | Quality verdicts, pass rate, score distribution |
| Cognitive Learner | Insight count, reliability leaders, categories |
| EIDOS | Episodes, steps, distillations, predict-evaluate loop |
| Advisory | Follow rate, source effectiveness, recent advice |
| Promotion | Targets, recent activity, result distribution |
| Chips | Domain modules, per-chip activity and size |
| Predictions | Outcomes, surprise tracking, link rate |
| Tuneables | Current config, all sections listed |
Explore pages (_observatory/explore/): Click into any data store and browse individual items:
| Dataset | Pages | What you see |
|---------|-------|--------------|
| Cognitive Insights | ~150 detail pages | Reliability, validations, evidence, counter-examples |
| EIDOS Distillations | ~90 detail pages | Statement, confidence, domains, triggers |
| EIDOS Episodes | ~100 detail pages | Goal, outcome, every step with prediction vs evaluation |
| Meta-Ralph Verdicts | ~100 detail pages | Score breakdown (6 dimensions), input text, issues |
| Advisory Decisions | Index table | Every emit/suppress/block decision with reasons |
| Implicit Feedback | Index table | Followed/ignored signals, per-tool follow rates |
| Retrieval Routing | Index table | Route distribution, why advice was/wasn't surfaced |
| Tuneable Evolution | Index table | Parameter changes over time with impact analysis |
| Promotions | Index table | What got promoted to CLAUDE.md and why |
| Advisory Effectiveness | Index table | Source effectiveness, overall follow rate |
Pipeline canvas (_observatory/flow.canvas): A spatial layout of the pipeline — drag, zoom, and click through.