by Pthahnix
De-Anthropocentric Research Engine — AI-powered academic research automation with deep literature survey, gap analysis, idea generation, experiment design & execution. Combines iterative deep research, adversarial debate, evolutionary generation, and distributed GPU execution.
```shell
# Add to your Claude Code skills
git clone https://github.com/Pthahnix/De-Anthropocentric-Research-Engine
```

Human-centered AI-assisted research can no longer sustain the next great leaps of our civilization. What we need is not just more tools, but an AI researcher that thinks and acts independently — a new entity to take over the human role in science. This is DARE.
🚧 Personal side project. Actively under development.
DARE is not a tool that helps you do research. It is the researcher. You set the direction — DARE searches, reads, discovers gaps, generates ideas, designs experiments, and executes them on GPUs. Autonomously. Iteratively. Without asking for permission.
The bottleneck in modern research is not data or compute — it's the human in the loop. Every existing "AI research assistant" still requires a human to decide what to search, what to read, which gaps matter, and which ideas are worth pursuing. DARE removes this bottleneck entirely. The human provides only the initial direction; everything after that is autonomous.
This isn't about replacing researchers — it's about creating a parallel research capacity that operates on timescales and breadths impossible for any individual.
DARE's architecture follows a military command hierarchy — not because research is war, but because the decomposition pattern is remarkably effective:
```
General  (Meta-Strategy) → "Take that hill"        → WHAT to research
Colonel  (Strategy)      → "Flank from the east"   → WHEN and WHY
Captain  (Tactic)        → "Squad A cover, B move" → HOW to combine
Sergeant (SOP)           → "Fire, reload, advance" → HOW to execute
```
Each layer has a single concern and calls only the layer directly below it. A Strategy never touches MCP tools directly; a Tactic never decides research direction. This strict layering means every component is independently testable, replaceable, and composable.
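The call discipline can be sketched in a few lines. This is a minimal illustration of the layering rule, not DARE's actual classes — every name here is hypothetical; the point is only that each layer holds a reference to the layer directly below it and nothing else:

```python
# Hypothetical sketch of strict layering: a Strategy can only reach the
# tool layer through a Tactic, and a Tactic only through an SOP.

class Tool:                       # atomic MCP operation
    def run(self, query: str) -> str:
        return f"tool-result({query})"

class Sop:                        # HOW to execute one step
    def __init__(self, tool: Tool):
        self._tool = tool
    def execute(self, step: str) -> str:
        return self._tool.run(step)

class Tactic:                     # HOW to combine steps
    def __init__(self, sop: Sop):
        self._sop = sop
    def combine(self, steps: list[str]) -> list[str]:
        return [self._sop.execute(s) for s in steps]

class Strategy:                   # WHEN and WHY — never touches Tool directly
    def __init__(self, tactic: Tactic):
        self._tactic = tactic
    def plan(self, goal: str) -> list[str]:
        return self._tactic.combine([f"{goal}:survey", f"{goal}:gaps"])

strategy = Strategy(Tactic(Sop(Tool())))
print(strategy.plan("lit"))
```

Because each layer depends only on the interface of the layer below, any layer can be swapped out or unit-tested with a stub in its place.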
Traditional MCP tools are dumb functions — they take input, return output, no reasoning involved. DARE's dare-agents tools are fundamentally different. Each of the 49 tools is a single-responsibility LLM micro-agent with its own system prompt, personality, and reasoning chain.
When DARE runs "root-cause-drilling", it's not calling a template — it's spawning an AI agent whose entire existence is devoted to drilling from surface symptoms to root causes. When "debate-critic" runs, it genuinely tries to destroy the idea it's reviewing. This is what makes DARE's outputs qualitatively different from prompt-chaining systems.
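Concretely, a micro-agent tool could look something like the sketch below. The `call_llm` function is a stand-in for whatever model client pi-ai wraps, and the prompt text is invented for illustration — the structural point is that the tool's system prompt *is* its single responsibility:

```python
# Hedged sketch of a single-responsibility LLM micro-agent tool.
# `call_llm` is a placeholder, not a real pi-ai API.

def call_llm(system: str, user: str) -> str:
    # Placeholder — in a real system this would invoke a model.
    return f"[{system.splitlines()[0]}] analysis of: {user}"

def root_cause_drilling(symptom: str) -> str:
    """A micro-agent whose sole job is drilling symptoms to root causes."""
    system = (
        "You are a root-cause analyst.\n"
        "Given a surface symptom, repeatedly ask 'why?' until you reach\n"
        "a mechanism that, if fixed, removes the symptom. Output only\n"
        "the causal chain."
    )
    return call_llm(system, symptom)

print(root_cause_drilling("training loss plateaus after epoch 3"))
```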
Built on pi-ai — a lightweight framework for building LLM-powered tools as MCP servers.
Every significant output in DARE goes through adversarial debate before being accepted. The Proposer-Critic-Judge architecture isn't decoration — it's the core quality mechanism:
Ideas that survive this gauntlet are genuinely robust. Ideas that don't are discarded or refined. No hand-waving, no "sounds good to me."
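The control flow of such a debate can be reduced to a small loop. In this sketch the three roles are stubbed as plain functions (in DARE each would be an LLM micro-agent), and the refinement logic is deliberately trivial — it only shows the propose → attack → judge → refine cycle:

```python
# Minimal Proposer–Critic–Judge loop (roles stubbed, shapes hypothetical).

def proposer(topic, feedback=None):
    idea = f"idea about {topic}"
    return idea + " (revised)" if feedback else idea

def critic(idea):
    # Tries to destroy the idea; returns an objection or None.
    return None if "(revised)" in idea else "no baseline comparison"

def judge(idea, objection):
    # Accept only ideas the critic failed to wound.
    return objection is None

def debate(topic, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        idea = proposer(topic, feedback)
        objection = critic(idea)
        if judge(idea, objection):
            return idea           # survived the gauntlet
        feedback = objection      # refine and retry
    return None                   # discarded after max_rounds

print(debate("sparse attention"))  # → "idea about sparse attention (revised)"
```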
Most AI systems optimize for a single quality metric — they'll give you 10 variations of the same good idea. DARE uses MAP-Elites, a quality-diversity algorithm that maintains a population of ideas spanning multiple dimensions of variation. The result: you get the best idea in each niche, not 10 copies of the same insight.
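MAP-Elites itself is simple at its core: bucket candidates by a behavioral descriptor and keep only the best candidate per bucket. The descriptor and fitness functions below are toy stand-ins for whatever dimensions DARE actually uses:

```python
# MAP-Elites in miniature: best solution per niche, not one global best.
import random

random.seed(0)

def descriptor(x):
    # Which behavioral niche a candidate falls into (toy bucketing).
    return round(x) % 5

def fitness(x):
    # Toy quality metric, peaked at x = 2.5.
    return -(x - 2.5) ** 2

archive = {}  # niche -> (fitness, solution)
for _ in range(200):
    x = random.uniform(0, 5)
    niche, f = descriptor(x), fitness(x)
    if niche not in archive or f > archive[niche][0]:
        archive[niche] = (f, x)   # new elite for this niche

# One best candidate per niche — diversity is preserved by construction.
for niche in sorted(archive):
    print(niche, round(archive[niche][1], 2))
```

A single-objective optimizer would converge every candidate toward x = 2.5; the archive instead retains the best x in each of the five niches.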
AI agents naturally take the path of least resistance — searching a handful of papers and declaring victory. DARE embeds hard enforcement mechanisms directly into every skill to prevent this:
For example, a `<HARD-GATE>` blocks each strategy from exiting its loop until 80% of the search budget is met. The AI cannot stop early no matter how "satisfied" it feels.

```
You ask a question
        ↓
┌───────────────────────────────────────────────────────────────┐
│ Phase 0: Brainstorming (structured requirement clarification) │
│ Phase 1: Intake (research brief)                              │
│ Phase 2: Research Loop (Stages 1-3, up to 7 rounds)           │
│   ├── Literature Survey (S:20 / M:40 / L:60+ papers)          │
│   ├── Gap Analysis (S:10 / M:15 / L:25+ papers)               │
│   ├── Insight (7-step pipeline)                               │
│   ├── Ideation (cross-domain discovery → 31 methods × 5)      │
│   └── Review → Selective Redo → Review (score ≥ 8/10)         │
│ Phase 3: Experiment Design                                    │
│ Phase 4: GPU Execution (remote pod, fully autonomous)         │
└───────────────────────────────────────────────────────────────┘
        ↓
Results returned via git
```
Each stage runs SEARCH → READ → REFLECT → EVALUATE cycles with autonomous gap discovery and dynamic stopping. No human in the loop.
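The cycle plus the hard gate can be sketched as a single loop. All names and the 80% threshold placement below are illustrative — the REFLECT and EVALUATE steps are stubbed, and the point is only that the gate overrides early "satisfaction":

```python
# Sketch of one stage's SEARCH → READ → REFLECT → EVALUATE cycle with a
# hard budget gate: no exit before 80% of the budget, however "done"
# the evaluation feels. `satisfied_after` simulates premature satisfaction.

def run_stage(budget: int, satisfied_after: int = 5) -> int:
    papers_read = 0
    while True:
        papers_read += 1                             # SEARCH + READ one paper
        gaps_remain = papers_read < satisfied_after  # REFLECT (stubbed)
        satisfied = not gaps_remain                  # EVALUATE (stubbed)
        # HARD-GATE: satisfaction alone cannot end the loop early.
        if satisfied and papers_read >= 0.8 * budget:
            return papers_read
        if papers_read >= budget:                    # stop at full budget
            return papers_read

print(run_stage(budget=20))  # model "feels done" at 5, but the gate forces 16
```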
Four-layer skill hierarchy where each layer calls only the layer below:
```
┌──────────────────────────────────────────────────────────────┐
│ META-STRATEGY (/dare)                                        │
│ Entry point — orchestrates the full research pipeline        │
├──────────────────────────────────────────────────────────────┤
│ STRATEGY (8)                                                 │
│ intake, lit-survey, gap-analysis, insight, ideation,         │
│ round, paper-writing, method-evolve                          │
├──────────────────────────────────────────────────────────────┤
│ TACTIC (15)                                                  │
│ academic-research, web-research, insight, multiagent-debate, │
│ review, idea-generation, idea-augmentation, scamper,         │
│ component-surgery, cross-domain-collision, and more          │
├──────────────────────────────────────────────────────────────┤
│ SOP (60)                                                     │
│ Single-responsibility wrappers around dare-agents tools      │
├──────────────────────────────────────────────────────────────┤
│ TOOL LAYER (MCP servers — atomic operations)                 │
│ dare-agents, dare-scholar, dare-web, apify, brave, runpod    │
└──────────────────────────────────────────────────────────────┘
```
The core engine of v3. 49 tools built with pi-ai, each a single-responsibility LLM micro-agent with its own system prompt.
| Category | Tools | Count |
|----------|-------|-------|