by strukto-ai
A Unified Virtual Filesystem For AI Agents
# Add to your Claude Code skills
git clone https://github.com/strukto-ai/mirage

Mirage is a Unified Virtual File System for AI Agents: a single tree that mounts services and data sources like S3, Google Drive, Slack, Gmail, and Redis side-by-side as one filesystem.
AI agents reach every backend with the same handful of Unix-like tools, and pipelines compose across services as naturally as on a local disk. It's a simulated environment: whatever sits underneath, the agent sees one filesystem. Any LLM that already knows bash can use Mirage out of the box, with zero new vocabulary.
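The core idea can be pictured like this. The sketch below is illustrative only, not Mirage's actual internals: a mount table maps path prefixes to backends, every path is resolved by longest-prefix match, and the same read/write surface works regardless of which backend a path lands on.

```python
# Illustrative sketch only — not Mirage's real implementation. It shows the
# idea behind a unified virtual filesystem: one mount table, longest-prefix
# path resolution, and a single command surface over every backend.

class RAMBackend:
    """Toy in-memory backend standing in for something like RAMResource."""
    def __init__(self):
        self.files = {}

    def read(self, path):
        return self.files[path]

    def write(self, path, data):
        self.files[path] = data

class MountTable:
    def __init__(self, mounts):
        self.mounts = mounts  # e.g. {'/data': RAMBackend(), '/s3': ...}

    def resolve(self, path):
        # Longest-prefix match: '/s3/logs/a.txt' resolves to the '/s3' mount,
        # and the remainder becomes the backend-relative path.
        best = max((m for m in self.mounts if path.startswith(m)), key=len)
        return self.mounts[best], path[len(best):] or '/'

ws = MountTable({'/data': RAMBackend(), '/s3': RAMBackend()})
backend, rel = ws.resolve('/s3/logs/a.txt')
backend.write(rel, b'hello')
print(ws.resolve('/s3/logs/a.txt')[0].read('/logs/a.txt'))  # b'hello'
```

A real mount tree adds network-backed resources behind the same interface, which is why `cp` between `/s3` and `/data` in the examples below looks identical to a local copy.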
const ws = new Workspace({
'/data': new RAMResource(),
'/s3': new S3Resource({ bucket: 'logs' }),
'/slack': new SlackResource({}),
'/github': new GitHubResource({}),
})
await ws.execute('grep alert /slack/general/*.json | wc -l')
await ws.execute('cat /github/mirage/README.md')
await ws.execute('cp /s3/report.csv /data/local.csv')
// Register a new command, available across every mount.
ws.command('summarize', ...)
// Override a command for a specific resource + filetype —
// `cat` on a Parquet file in /s3 renders rows as JSON instead of raw bytes.
ws.command('cat', { resource: 's3', filetype: 'parquet' }, ...)
await ws.execute('summarize /github/mirage/README.md')
await ws.execute('cat /s3/events/2026-05-06.parquet | jq .user')
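The override mechanics above can be sketched as a registry keyed on (command, resource, filetype) that falls back to the generic handler when no specific override matches. This is an illustrative sketch, not Mirage's actual dispatch code:

```python
# Illustrative sketch of per-resource/per-filetype command overrides —
# not Mirage's real dispatcher. Lookup tries the most specific key first,
# then progressively falls back to the generic command.

class CommandRegistry:
    def __init__(self):
        self.handlers = {}

    def register(self, name, handler, resource=None, filetype=None):
        self.handlers[(name, resource, filetype)] = handler

    def dispatch(self, name, resource, filetype, *args):
        # Most-specific-first lookup order.
        for key in [(name, resource, filetype),
                    (name, resource, None),
                    (name, None, filetype),
                    (name, None, None)]:
            if key in self.handlers:
                return self.handlers[key](*args)
        raise KeyError(f"no handler for {name}")

reg = CommandRegistry()
reg.register('cat', lambda data: data)               # generic: raw bytes
reg.register('cat', lambda data: '{"rows": "..."}',  # s3 + parquet: JSON rows
             resource='s3', filetype='parquet')

print(reg.dispatch('cat', 's3', 'parquet', b'...'))  # '{"rows": "..."}'
print(reg.dispatch('cat', 'slack', 'json', b'raw'))  # b'raw'
```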
The mirage-ai package and the mirage CLI:

uv add mirage-ai
This installs both the mirage library and the mirage CLI binary.
Pick the package that matches your runtime:
npm install @struktoai/mirage-node # Node.js servers and CLIs
npm install @struktoai/mirage-browser # browser / edge runtimes
npm install @struktoai/mirage-core # runtime-agnostic primitives
@struktoai/mirage-node and @struktoai/mirage-browser both pull in @struktoai/mirage-core automatically.
curl -fsSL https://strukto.ai/mirage/install.sh | sh
Or via your package manager of choice:
npm install -g @struktoai/mirage-cli
uvx mirage-ai
npx @struktoai/mirage-cli
from mirage import Workspace
from mirage.resource.gdocs import GDocsConfig, GDocsResource
from mirage.resource.ram import RAMResource
from mirage.resource.s3 import S3Config, S3Resource
from mirage.resource.slack import SlackConfig, SlackResource
ws = Workspace({
"/data": RAMResource(),
"/s3": S3Resource(S3Config(bucket="my-bucket")),
"/slack": SlackResource(SlackConfig()),
"/docs": GDocsResource(GDocsConfig()),
})
await ws.execute("cp /s3/report.csv /data/report.csv")
await ws.execute("grep alert /s3/data/log.jsonl | wc -l")
ws.snapshot("demo.tar")
import {
Workspace,
RAMResource,
S3Resource,
SlackResource,
GDocsResource,
} from '@struktoai/mirage-browser'
const ws = new Workspace({
'/data': new RAMResource(),
'/s3': new S3Resource({ bucket: 'my-bucket' }),
'/slack': new SlackResource({}),
'/docs': new GDocsResource({}),
})
await ws.execute('cp /s3/report.csv /data/report.csv')
await ws.execute('grep alert /s3/data/log.jsonl | wc -l')
mirage workspace create ws.yaml --id demo
mirage execute --workspace_id demo --command "cp /s3/report.csv /data/report.csv"
mirage provision --workspace_id demo --command "cat /s3/data/large.jsonl"
mirage workspace snapshot demo demo.tar
mirage workspace load demo.tar --id demo-restored
Mirage drops into the major agent application frameworks as a sandbox or tool layer. Your agent runs against the same mount tree it would in bash, so swapping the model or runtime never changes the surface.
The MirageSandboxClient plugs a Workspace into the OpenAI Agents SDK as a sandbox: the bash commands the agent runs execute against your mounts.
from agents import Runner
from agents.run import RunConfig
from agents.sandbox import SandboxAgent, SandboxRunConfig
from mirage.agents.openai_agents import MirageSandboxClient
client = MirageSandboxClient(ws)
agent = SandboxAgent(
name="Mirage Sandbox Agent",
model="gpt-5.4-nano",
instructions=ws.file_prompt,
)
result = await Runner.run(
agent,
"Summarize /s3/data/report.parquet into /report.txt.",
run_config=RunConfig(sandbox=SandboxRunConfig(client=client)),
)
mirageTools(ws) exposes the workspace as a typed AI SDK tool set, so any model wired into the AI SDK can read and write across mounts, in Node or the browser.
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { mirageTools } from '@struktoai/mirage-agents/vercel'
import { buildSystemPrompt } from '@struktoai/mirage-agents/openai'
const { text } = await generateText({
model: openai('gpt-5.4-nano'),
system: buildSystemPrompt({ mountInfo: { '/': 'In-memory filesystem' } }),
prompt: "Use readFile to read /docs/paper.pdf, then describe what's in it.",
tools: mirageTools(ws),
})
LangChain, Pydantic AI, CAMEL, OpenHands, and Mastra adapters live alongside these.
Every Workspace ships with a two-layer cache so repeated work against remote backends (S3, GDrive, Slack, …) hits local state instead of the network:
import { RedisFileCacheStore, RedisIndexCacheStore, Workspace } from 'mirage/node'
const ws = new Workspace(
{ '/s3': new S3Resource({ bucket: 'my-bucket' }) },
{
cache: new RedisFileCacheStore({ url: 'redis://localhost:6379/0', limit: '8GB' }),
index: new RedisIndexCacheStore({ url: 'redis://localhost:6379/0', ttl: 600 }),
},
)
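Conceptually, the two layers behave like a size-bounded file cache plus a TTL-bounded index cache. The sketch below is illustrative only, not the Redis-backed stores above: layer one holds file bodies under a byte limit with LRU eviction, layer two holds directory listings that expire after a TTL.

```python
# Illustrative sketch of the two-layer cache idea — not the Redis stores
# above. Layer 1 caches file bodies under a byte limit (LRU eviction);
# layer 2 caches directory listings with a TTL so stale indexes expire.

import time
from collections import OrderedDict

class FileCache:
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.entries = OrderedDict()
        self.size = 0

    def put(self, path, data):
        if path in self.entries:
            self.size -= len(self.entries.pop(path))
        self.entries[path] = data
        self.size += len(data)
        while self.size > self.limit:            # evict least recently used
            _, old = self.entries.popitem(last=False)
            self.size -= len(old)

    def get(self, path):
        if path in self.entries:
            self.entries.move_to_end(path)       # mark as recently used
            return self.entries[path]
        return None

class IndexCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.listings = {}

    def put(self, directory, names):
        self.listings[directory] = (time.monotonic(), names)

    def get(self, directory):
        hit = self.listings.get(directory)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]
        return None                              # expired or never listed

files = FileCache(limit_bytes=10)
files.put('/s3/a', b'12345678')
files.put('/s3/b', b'1234')   # total would be 12 bytes -> /s3/a is evicted
assert files.get('/s3/a') is None
assert files.get('/s3/b') == b'1234'
```

Separating file bodies from directory listings is what lets repeated `grep`/`ls` pipelines over a remote mount skip the network on the second pass.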