Provide AI agents with full Tor network access and dark web data through a zero-config OpenClaw skill or standalone tool.
# Add to your Claude Code skills
git clone https://github.com/JacobJandon/OnionClaw

by JacobJandon · MIT License · github.com/JacobJandon/OnionClaw
OnionClaw routes all requests through the Tor network. It searches 12 verified-live dark web search engines simultaneously, fetches .onion hidden service pages, rotates Tor circuits, and produces structured OSINT reports using the Robin investigation pipeline.
Install Python dependencies:
pip3 install requests[socks] beautifulsoup4 python-dotenv stem
Run the interactive first-run wizard (sets up .env and torrc in one step):
python3 {baseDir}/setup.py
Or set up manually:
cp {baseDir}/.env.example {baseDir}/.env
# Edit {baseDir}/.env: add LLM_PROVIDER + API key (optional; search and fetch work without one)
Start Tor (required before any command):
# Linux: sudo apt install tor && sudo systemctl start tor
# macOS: brew install tor && brew services start tor
# Custom: tor -f /tmp/tor_data/torrc & (setup.py creates this)
Enable circuit rotation (ControlPort), required for renew.py:
Add to /etc/tor/torrc:
ControlPort 9051
CookieAuthentication 1
Then restart Tor. setup.py does this automatically.
Always run this first before any search or fetch.
OpenClaw skill + standalone tool: full Tor / dark web access for AI agents
OnionClaw gives AI agents full access to the Tor network and .onion hidden services. It runs as an OpenClaw skill (drop-in, zero config beyond a .env file) and also works standalone from any terminal.
Based on the SICRY engine: 18 dark web search engines, Robin OSINT pipeline, four LLM analysis modes.
# As an OpenClaw skill:
cp -r OnionClaw ~/.openclaw/skills/onionclaw
# → agent now has 7 dark web commands available in every session
# Standalone:
python3 check_tor.py # verify Tor
python3 search.py --query "ransomware healthcare"
python3 pipeline.py --query "acme.com data leak" --mode corporate
Autonomous agents paired with the Tor network will be one of the most dangerous automation stacks on the internet within the next five years. OnionClaw is living proof that the rabbit hole goes deeper than most people think.
This tool is built for legitimate OSINT, threat intelligence, and security research. But the same primitives (anonymous routing, bulk scraping, AI-driven synthesis, zero-attribution browsing, automated identity rotation) are precisely what make this combination genuinely dangerous in the wrong hands.
python3 {baseDir}/check_tor.py
Output: exit IP address and tor_active: true/false. If tor_active is false, tell the user to start Tor and stop.
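The check itself is small enough to sketch. The snippet below is a hedged illustration, not the actual check_tor.py source: it queries the Tor Project's public check API through the local SOCKS proxy; the proxy address and the output keys are assumptions based on the description above.

```python
# Sketch of a Tor connectivity check, assuming the default SOCKS proxy on
# 127.0.0.1:9050 (not the actual check_tor.py source).
import requests

PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: resolve DNS through Tor too
    "https": "socks5h://127.0.0.1:9050",
}

def check_tor(timeout: float = 10.0) -> dict:
    """Ask the Tor Project's check API whether traffic exits through Tor."""
    try:
        r = requests.get("https://check.torproject.org/api/ip",
                         proxies=PROXIES, timeout=timeout)
        data = r.json()
        return {"tor_active": bool(data.get("IsTor")), "exit_ip": data.get("IP")}
    except requests.RequestException:
        # Proxy down, Tor not running, or SOCKS support not installed
        return {"tor_active": False, "exit_ip": None}
```

Using `socks5h` rather than `socks5` matters: it pushes DNS resolution through Tor as well, avoiding DNS leaks.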
Get a fresh exit node and new identity. Use between investigation sessions or when you need a new IP.
python3 {baseDir}/renew.py
Output: success: true/false. If false, the user needs to ensure ControlPort 9051 is enabled and TOR_DATA_DIR is set in .env.
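renew.py is built on stem, but the control-port exchange it performs can be sketched with nothing but the stdlib. Everything here (port, cookie handling, return shape) is an assumption for illustration, not the script's actual source:

```python
# Stdlib-only sketch of a NEWNYM circuit rotation (renew.py itself uses stem).
import socket

def renew_circuit(host="127.0.0.1", port=9051, cookie_path=None) -> bool:
    """Send SIGNAL NEWNYM over the Tor ControlPort; True means '250 OK'."""
    try:
        with socket.create_connection((host, port), timeout=5) as s:
            def cmd(line: str) -> str:
                s.sendall(line.encode() + b"\r\n")
                return s.recv(4096).decode()
            # With CookieAuthentication 1, AUTHENTICATE takes the hex cookie
            auth = "AUTHENTICATE"
            if cookie_path:
                auth += " " + open(cookie_path, "rb").read().hex()
            if not cmd(auth).startswith("250"):
                return False
            return cmd("SIGNAL NEWNYM").startswith("250")
    except OSError:
        return False
```

The cookie lives in the Tor DataDirectory (hence the TOR_DATA_DIR setting); without ControlPort 9051 enabled, the connection is refused and the function returns False, matching the failure mode described above.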
Search all 12 verified-live dark web engines simultaneously. Returns deduplicated {title, url, engine} results.
Basic search:
python3 {baseDir}/search.py --query "SEARCH_TERM"
With result limit:
python3 {baseDir}/search.py --query "SEARCH_TERM" --max 30
Specific engines only:
python3 {baseDir}/search.py --query "SEARCH_TERM" --engines Ahmia Tor66 Ahmia-clearnet
Available engines: Ahmia, OnionLand, Amnesia, Torland, Excavator, Onionway, Tor66, OSS, Torgol, TheDeepSearches, DuckDuckGo-Tor, Ahmia-clearnet
Tip: Use short keyword queries (≤5 words). Dark web indexes respond better to focused keywords than natural-language questions.
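Deduplication across engines is simple to picture. A sketch of one plausible approach (the real search.py may normalize differently) that treats URLs differing only by a trailing slash as the same hit:

```python
# Keep the first {title, url, engine} result per normalized URL.
# This is a sketch, not necessarily how search.py normalizes.
from urllib.parse import urlsplit

def dedupe(results):
    seen, unique = set(), []
    for r in results:
        u = urlsplit(r["url"])
        key = (u.netloc.lower(), u.path.rstrip("/"))
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

hits = [
    {"title": "Leak board", "url": "http://abc.onion/post/1",  "engine": "Ahmia"},
    {"title": "Leak board", "url": "http://abc.onion/post/1/", "engine": "Tor66"},
]
# Both entries collapse to one result, credited to the first engine seen
```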
Fetch the full content of any .onion URL or clearnet URL through Tor.
python3 {baseDir}/fetch.py --url "http://SOME.onion/path"
Output: title, text (first 3000 chars), link list, HTTP status. If status is 0 or error is set, the hidden service is unreachable.
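The shape of that output can be illustrated offline. fetch.py lists beautifulsoup4 as a dependency; the stdlib sketch below only demonstrates the same title / text / links extraction on a static HTML string:

```python
# Stdlib illustration of the title / text / links extraction fetch.py reports
# (the tool itself depends on beautifulsoup4).
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title, self.links, self._chunks = "", [], []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data.strip()
        elif data.strip():
            self._chunks.append(data.strip())

    @property
    def text(self):
        return " ".join(self._chunks)[:3000]  # mirror the 3000-char cap

page = PageExtractor()
page.feed('<html><title>Hidden service</title>'
          '<body>Hello <a href="/next">more</a></body></html>')
```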
Ping all 12 engines via Tor and get latency + status for each.
python3 {baseDir}/check_engines.py
Output: per-engine up/down status, latency in ms. Use this before a large search run to pass only alive engines to --engines.
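Probing every engine in parallel is the natural design for this. A stubbed sketch of the pattern; the real probe would GET each engine's search page through the Tor SOCKS proxy and record the round-trip, which the stub below fakes:

```python
# Concurrent engine health-check pattern (stubbed: no network access here).
import time
from concurrent.futures import ThreadPoolExecutor

def probe(engine: str) -> dict:
    start = time.monotonic()
    up = True  # real code: GET the engine's page via the Tor proxy, catch errors
    return {"engine": engine, "up": up,
            "latency_ms": round((time.monotonic() - start) * 1000)}

engines = ["Ahmia", "Tor66", "OnionLand"]
with ThreadPoolExecutor(max_workers=len(engines)) as pool:
    statuses = list(pool.map(probe, engines))
alive = [s["engine"] for s in statuses if s["up"]]
# 'alive' can then be passed straight to search.py --engines
```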
Analyse scraped dark web content with an LLM. Produces a structured sectioned report.
From a string:
python3 {baseDir}/ask.py --query "INVESTIGATION_QUERY" --mode MODE --content "RAW_TEXT"
From a file:
python3 {baseDir}/ask.py --query "INVESTIGATION_QUERY" --mode MODE --file /path/to/content.txt
From stdin (pipe):
echo "CONTENT" | python3 {baseDir}/ask.py --query "QUERY" --mode MODE
Analysis modes:
| Mode | Use for |
|---|---|
| threat_intel | General OSINT (default): artifacts, insights, next steps |
| ransomware | Malware/C2/MITRE TTPs, victim orgs, indicators |
| personal_identity | PII/breach exposure, severity, protective actions |
| corporate | Leaked credentials/code/docs, IR recommendations |
With custom focus:
python3 {baseDir}/ask.py --query "QUERY" --mode threat_intel --custom "Focus on cryptocurrency wallet addresses"
Runs the complete Robin pipeline: refine query → check live engines → search → filter best results → batch scrape → OSINT analysis.
python3 {baseDir}/pipeline.py --query "INVESTIGATION_QUERY" --mode MODE
With more results:
python3 {baseDir}/pipeline.py --query "INVESTIGATION_QUERY" --mode ransomware --max 50 --scrape 10
Without an LLM key (raw results only):
python3 {baseDir}/pipeline.py --query "INVESTIGATION_QUERY" --no-llm
Options:
- --query – investigation topic (natural language OK; it gets refined automatically)
- --mode – threat_intel (default), ransomware, personal_identity, corporate
- --max – max raw results from search (default 30)
- --scrape – how many pages to batch-fetch (default 8)
- --custom – custom LLM instructions appended to the mode prompt
- --out FILE – write final report to a file
- --no-llm – skip refine/filter/ask steps; dump raw scraped content (no API key needed)

Manual step-by-step workflow:

1. python3 {baseDir}/check_tor.py – verify connected
2. python3 {baseDir}/search.py --query "X" – search all engines
3. python3 {baseDir}/fetch.py --url "URL" on 2-3 top results
4. python3 {baseDir}/ask.py --mode threat_intel --query "X" --content "..." on combined text

Corporate leak investigation:

python3 {baseDir}/check_tor.py
python3 {baseDir}/pipeline.py --query "company.com data leak credentials" --mode corporate

Ransomware group investigation:

python3 {baseDir}/check_tor.py
python3 {baseDir}/pipeline.py --query "GROUP_NAME ransomware" --mode ransomware

Fetch a single hidden service:

python3 {baseDir}/check_tor.py
python3 {baseDir}/fetch.py --url "URL"

Notes:

- If a fetch returns status: 0, the site is temporarily down.
- Before a large run, use check_engines.py first and filter by alive engines.
- LLM steps (ask) require an API key in {baseDir}/.env. Search and fetch work without any key.

sync_sicry.py fetches the latest (or a tagged) sicry.py from the upstream SICRY™ GitHub repo and overwrites the bundled copy inside OnionClaw. Run it after a new SICRY™ release is published.
OnionClaw will automatically notify you whenever a newer release is available: a one-line message is printed at startup of pipeline.py if an update exists. You can also check on demand:
# Quick CLI update check (prints latest version, release URL, upgrade command):
python3 {baseDir}/pipeline.py --check-update
# Programmatic check from any Python environment:
import sicry
r = sicry.check_update()
if not r["up_to_date"]:
    print(f"Update: {r['current']} → {r['latest']} {r['url']}")
Upgrade after a new release:
git -C {baseDir} pull # update the whole repo
python3 {baseDir}/sync_sicry.py # re-sync sicry.py from SICRYβ’
The check is clearnet-only (not through Tor), uses a 4-second timeout, and is always silent on network errors.
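Those three properties pin the function down fairly tightly. A sketch under stated assumptions: the release endpoint URL and tag format are guesses for illustration, not the actual sicry.check_update() source; only the clearnet call, the 4-second timeout, and the silent error path come from the description above.

```python
# Sketch of a clearnet update check: 4 s timeout, silent on network errors.
# The GitHub endpoint and tag format are assumptions for illustration.
import json
from urllib.request import urlopen

def check_update(current: str = "1.0.0") -> dict:
    url = "https://api.github.com/repos/JacobJandon/Sicry/releases/latest"
    try:
        with urlopen(url, timeout=4) as resp:
            latest = json.load(resp).get("tag_name", "").lstrip("v")
    except OSError:
        return {"up_to_date": True}  # never raise on network failure
    return {"up_to_date": latest in ("", current),
            "current": current, "latest": latest, "url": url}
```

Treating any network failure as "up to date" is what makes the startup check safe to run unconditionally: a flaky connection never blocks or spams the pipeline.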
# Pull latest main branch:
python3 {baseDir}/sync_sicry.py
# Pull a specific release tag:
python3 {baseDir}/sync_sicry.py --tag v1.2.0
# Preview without writing (dry run):
python3 {baseDir}/sync_sicry.py --dry-run
Development workflow (when editing sicry.py locally):
1. Edit OnionClaw/sicry.py.
2. Copy the change back to the upstream repo: cp OnionClaw/sicry.py Sicry/sicry.py
3. Tag a release; users then run sync_sicry.py --tag vX.Y.Z to update their OnionClaw copy.

Flags:

- --tag REF – git ref or tag to fetch (default: main)
- --dry-run – show what would happen without writing anything

This is not a warning tucked in fine print. It is the whole point of writing it down openly.
| Use case | What it looks like |
|---|---|
| Dark-web crawling | Automated, headless spidering of .onion services at scale (forums, paste sites, markets, leak boards) with full identity rotation between every request. No human ever touches a keyboard. |
| Threat intelligence | Continuous monitoring of ransomware group blogs, initial access broker ads, CVE exploit drops, and actor chatter long before it surfaces on clearnet feeds. |
| Marketplace monitoring | Price tracking, stock alerts, vendor reputation scraping, and availability checks across darknet markets. The logic a researcher uses to track fentanyl price trends is the same logic a supplier uses to undercut competitors. |
| Credential surveillance | Watching paste boards, breach dumps, and forum leaks for specific email domains, API keys, SSH keys, or internal hostnames the moment they appear, at a scale no human analyst can match. |
| Deanonymisation research | Cross-correlating .onion service metadata with clearnet traces, timing attacks, and correlation of writing style and PGP keys. Used both by law enforcement hunting criminals and by threat actors hunting journalists and dissidents. |
| Criminal automation | Autonomous agents placing orders, posting ads, messaging vendors, managing mule accounts, draining wallets: an entire criminal operation running without a human ever in the loop. |
| Disinformation infrastructure | Coordinated persona networks on hidden boards, fabricated document drops timed to bleed into legitimate OSINT pipelines, synthetic intelligence that reads real but originates from nowhere. |
| Zero-day brokerage | Automated monitoring of exploit vendor channels, private CVE auction boards, and vulnerability markets; buy-side and sell-side intelligence gathered faster than any human analyst. |
The 2026 internet is already at the edge of this. Within five years, AI agents that can:
…represent a qualitative shift from human criminals using tools to autonomous criminal infrastructure operating at machine speed with no human in the loop. The bottleneck has always been human attention. Remove it and the scaling properties of dark web operations change completely.
OnionClaw demonstrates all four of those primitives working together today. The full pipeline.py run (query refinement → multi-engine search → result filtering → batch scrape → LLM synthesis → identity rotation) is a complete autonomous dark web intelligence loop. Remove the OSINT framing and it is equally a complete autonomous dark web operation loop. The code is the same either way.
Security tools that pretend the dual-use problem does not exist are more dangerous than ones that name it directly. If you are building on top of OnionClaw:
"acme.com credential leak" for a pentest or "rival vendor SSH keys" for espionage..onion services may be illegal in your country regardless of intent or findings.OnionClaw is published for defensive research, red-team engagements, and threat intelligence work. The code does not know the difference between those uses and their inverse. You do. Build accordingly.
Seven commands expose the complete Tor OSINT toolkit:
| Command | What it does |
|---|---|
| check_tor.py | Verify Tor is active, show current exit IP |
| renew.py | Rotate Tor circuit: new exit node, new identity |
| check_engines.py | Ping all 18 dark web search engines, show latency |
| search.py | Search up to 18 engines simultaneously, deduplicated results |
| fetch.py | Fetch any .onion or clearnet URL through Tor |
| ask.py | LLM OSINT analysis of scraped content (4 modes) |
| pipeline.py | Full Robin pipeline: refine → search → filter → scrape → analyse |
Requirements:

- Tor with a SOCKS proxy (default 127.0.0.1:9050)
- Python packages: requests[socks] beautifulsoup4 python-dotenv stem
- LLM API key (needed only for the ask.py and pipeline.py analysis step)

Linux (Debian/Ubuntu):
apt install tor && tor &
macOS:
brew install tor && tor &
With control port (needed for renew.py):
cat > /tmp/onionclaw_tor.conf << 'EOF'
SocksPort 9050
ControlPort 9051
CookieAuthentication 1
DataDirectory /tmp/tor_data
EOF
tor -f /tmp/onionclaw_tor.conf &
Then set TOR_DATA_DIR=/tmp/tor_data in .env.
pip install requests[socks] beautifulsoup4 python-dotenv stem
# Option A β clone directly
git clone https://github.com/JacobJandon/OnionClaw ~/.openclaw/skills/onionclaw
# Option B β copy local folder
cp -r OnionClaw ~/.openclaw/skills/onionclaw
Create .env in the skill folder:

cp ~/.openclaw/skills/onionclaw/.env.example ~/.openclaw/skills/onionclaw/.env
nano ~/.openclaw/skills/onionclaw/.env # add LLM key if desired
The skill loads as onionclaw in the agent context whenever the user asks about dark web topics.

Verify OpenClaw can see the skill:
openclaw skills list
# → onionclaw
OpenClaw trigger phrases:

"Search the Tor dark web..."
After install, start a new session; existing sessions will not pick up the new skill.
No OpenClaw required. Every script runs directly from a terminal:
git clone https://github.com/JacobJandon/OnionClaw
cd OnionClaw
pip install requests[socks] beautifulsoup4 python-dotenv stem
cp .env.example .env
# Edit .env: add LLM key if desired (optional for most commands)
Copy .env.example to .env and fill in what you need:
# ── Tor ──────────────────────────────────────────────────────────────
TOR_SOCKS_HOST=127.0.0.1
TOR_SOCKS_PORT=9050
TOR_CONTROL_HOST=127.0.0.1
TOR_CONTROL_PORT=9051
# TOR_CONTROL_PASSWORD=your_password # only if HashedControlPassword in torrc
# TOR_DATA_DIR=/tmp/tor_data # DataDirectory path for cookie auth
# ── LLM (needed only for ask.py and pipeline.py analysis step) ─────────
LLM_PROVIDER=openai # openai | anthropic | gemini | ollama | llamacpp
OPENA