by thatsme
BEAM-native personal AI agent built on Elixir/OTP. Runs on your hardware. Your data stays yours.
# Add to your Claude Code skills

```shell
git clone https://github.com/thatsme/AlexClaw
```

A BEAM-native personal autonomous AI agent built on Elixir/OTP.
AlexClaw monitors the world (RSS feeds, web sources, GitHub repositories, APIs), accumulates knowledge, executes workflows autonomously on schedule, and communicates with its owner via Telegram. It routes every task to the cheapest available LLM that satisfies the required reasoning tier — including fully local models.
Designed as a single-user personal agent. Not a platform. Not a marketplace. One codebase, fully auditable, running on your infrastructure.
> "I didn't plan most of this. I just kept solving the next problem."

Routes every task across reasoning tiers (light / medium / heavy / local) with priority-based selection. All providers (cloud and local) are stored in PostgreSQL and fully manageable from the admin UI. Daily usage per provider is tracked in ETS. Ships with default providers (Gemini, Claude, Ollama, LM Studio) seeded on first boot; add, remove, or reconfigure any provider at runtime.
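To make the routing idea concrete, here is a minimal sketch of priority-based provider selection. The module name, map shape, and function names are assumptions for illustration, not AlexClaw's actual API:

```elixir
defmodule AlexClaw.LLM.Router do
  @moduledoc """
  Sketch: pick the highest-priority enabled provider for a reasoning tier.
  Providers are plain maps as they might be loaded from PostgreSQL.
  """

  # Returns the enabled provider with the lowest priority number for the
  # requested tier, or nil when no provider satisfies the tier.
  def select(providers, tier) do
    providers
    |> Enum.filter(&(&1.enabled and &1.tier == tier))
    |> Enum.min_by(& &1.priority, &<=/2, fn -> nil end)
  end
end
```

In the real system this lookup would also consult the per-provider daily usage counters kept in ETS before committing to a provider.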
Autonomous reasoning sessions default to the local tier (configurable). A deterministic pre-filter handles obvious decisions without an LLM call (0 ms). Plan validation rejects malformed steps before execution. Working memory is compressed every 3 iterations. A proportional time budget scales with plan size. Real-time user intervention: pause, resume, steer, abort, step override. Full audit trail: every prompt, response, skill call, rubric score, and working memory snapshot is persisted. Skill outputs are embedded to pgvector for future session context. Available from the chat page in Reasoning mode.
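The deterministic pre-filter can be imagined as plain pattern matching: trivial cases are decided immediately, and only the rest pay for an LLM call. The module name, map keys, and the specific rules below are hypothetical:

```elixir
defmodule AlexClaw.Reasoning.PreFilter do
  # Decides trivially-classifiable tasks with zero LLM calls.
  # Everything that falls through the clauses goes to the model.

  # An empty plan can be rejected outright.
  def check(%{steps: []}), do: {:decided, :reject_empty_plan}

  # Oversized plans are rejected before validation even starts
  # (the 50-step cutoff is an invented example, not a real limit).
  def check(%{steps: steps}) when length(steps) > 50,
    do: {:decided, :reject_oversized_plan}

  # Canned queries get canned answers.
  def check(%{question: q}) when q in ["ping", "status"],
    do: {:decided, :answer_directly}

  # Anything else needs actual reasoning.
  def check(_task), do: :needs_llm
end
```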
API resources are assigned per api_request step config, with manual re-discovery via a "Discover" button.

Embeddings use a configurable provider (gemini-embedding-001, Ollama nomic-embed-text, or any OpenAI-compatible endpoint): 768-dimension vectors with an HNSW index. All skills that store knowledge auto-embed in the background.

A dedicated knowledge_entries table holds documentation and reference material, isolated from news/conversation memory. Scraper skills fetch, chunk, and embed documentation from hexdocs.pm, Erlang/OTP source (GitHub), Elixir stdlib source, Learn You Some Erlang, and existing skill code. Chat integrates both Knowledge and Memory search with a context source selector. ~7200 embeddings across 6 knowledge kinds.

Multi-node workflows exchange data via the send_to_workflow and receive_from_workflow skills, with auto-discovery, node status monitoring, and per-workflow node assignment from the admin UI. A docker-compose_swarm.yml is included for local multi-node testing.

Skills are exposed over MCP at a /mcp endpoint: bearer token auth, policy-based tool restrictions, and dynamic tool list refresh via PubSub. Built on anubis_mcp with Streamable HTTP transport.

**Deprecation Notice (v0.3.15):**
`web_browse`, `web_search`, and `rss_collector` are deprecated and will be removed in v0.4.0. Use the new composable pattern instead: `web_fetch → llm_transform`, `web_search_fetch → llm_transform`, `rss_fetch → llm_score → llm_transform`. See the v0.3.15 release notes for migration examples.
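As a rough illustration of that migration, the following sketches an old single-step workflow next to its composable replacement. The step schema (maps with `skill` and `config` keys) is an assumption made for this example; consult the real release notes for the actual format:

```elixir
# Before: one monolithic step that fetches, scores, and notifies.
old_steps = [
  %{skill: "rss_collector",
    config: %{feeds: ["https://example.com/feed.xml"]}}
]

# After: the same behavior as three composable steps, each one
# swappable and testable on its own.
new_steps = [
  %{skill: "rss_fetch",
    config: %{feeds: ["https://example.com/feed.xml"]}},
  %{skill: "llm_score",
    config: %{criteria: "relevant to Elixir/OTP news"}},
  %{skill: "llm_transform",
    config: %{prompt: "Summarize the top-scoring items"}}
]
```

The split also lets the router send each step to a different reasoning tier, e.g. scoring on a cheap local model and the final transform on a stronger one.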

| Skill | Description |
|---|---|
| web_fetch | Fetch a URL, return extracted text (no LLM) |
| web_search_fetch | Search DuckDuckGo + fetch pages, return raw content (no LLM) |
| rss_fetch | Fetch RSS feeds, dedup, filter recent, return JSON items (no LLM) |
| llm_transform | Run a prompt template through the LLM (workflow glue step) |
| llm_score | Batch-score items for relevance via single LLM call |
| rss_collector | Fetch + score + notify all-in-one (deprecated, use rss_fetch → llm_score) |
| web_search | Search + synthesize (deprecated, use web_search_fetch → llm_transform) |
| web_browse | Fetch + summarize (deprecated, use web_fetch → llm_transform) |
| research | Deep research with memory context |
| conversational | Free-text LLM conversation |
| telegram_notify | Send a Telegram message as a workflow step |
| discord_notify | Send workflow output to a Discord channel. Configurable channel_id per step — deliver to different channels in the same workflow |
| api_request | REST client with API resource discovery — auto-discovers OpenAPI specs, resolves URLs from assigned resources, supports {base_url} interpolation |
| github_security_review | Fetch PR/commit diff, run LLM security analysis |
| google_calendar | Fetch upcoming Google Calendar events |
| google_tasks | Manage Google Tasks lists and items |
| db_backup | PostgreSQL backup with gzip compression and weekly rotation to host-mounted path |
| shell | Execute whitelisted OS commands for container introspection (2FA-gated) |
| web_automation | Browser automation via headless Playwright sidecar (experimental) |
| coder | Generate dynamic skills from natural language via local LLM |
| send_to_workflow | Send data to a workflow on another BEAM node |
| receive_from_workflow | Gate: accepts remote triggers when placed as step 1 |
| hexdocs_scraper | Scrape hexdocs.pm docs into knowledge base embeddings (dynamic) |
| erlang_docs_scraper | Fetch Erlang/OTP docs from GitHub into knowledge base (dynamic) |
| lyse_scraper | Scrape Learn You Some Erlang chapters into knowledge base (dynamic) |
| elixir_source_scraper | Fetch Elixir stdlib source from GitHub for pattern learning (dynamic) |
| skill_source_indexer | Index existing skill source code into knowledge base (dynamic) |

Load custom skills at runtime — no code changes, no Docker rebuild, no restart. Drop an .ex file into the skills volume (or upload via the admin UI), and it compiles into the running VM immediately.
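The core mechanism behind "compiles into the running VM immediately" is Elixir's standard `Code.compile_file/1`, which compiles an `.ex` file and loads its modules into the live VM. This sketch shows only that primitive; AlexClaw's actual loader additionally AST-scans and policy-checks the file (the file path and skill module here are invented):

```elixir
# Write a trivial skill module to disk, as if it had been dropped
# into the skills volume.
path = Path.join(System.tmp_dir!(), "my_skill.ex")

File.write!(path, """
defmodule MySkill do
  def run(_config), do: {:ok, :hello}
end
""")

# Compile and load it into the running VM: no rebuild, no restart.
[{MySkill, _bytecode}] = Code.compile_file(path)

# The module is immediately callable.
{:ok, :hello} = MySkill.run(%{})
```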
Skills access the system through the SkillAPI only. Undeclared permissions are denied at runtime. A context-aware PolicyEngine evaluates chain depth, capability tokens, and configurable policy rules. External network access must be declared via external/0. Dynamic skills are AST-scanned at load time: undeclared HTTP/socket calls are rejected (fail-closed).
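A hypothetical dynamic skill might look like the following. The callback names (`external/0`, `run/2`) and the injected-SkillAPI shape are inferred from this README for illustration, not taken from AlexClaw's actual skill behaviour:

```elixir
defmodule MySkills.ReleaseWatcher do
  # Every external host the skill may reach must be declared here.
  # The load-time AST scan rejects HTTP/socket calls to anything
  # not in this list (fail-closed).
  def external, do: ["api.github.com"]

  # All side effects go through the injected SkillAPI module, so the
  # PolicyEngine can mediate each call (signature assumed).
  def run(config, api) do
    api.http_get("https://api.github.com/repos/#{config.repo}/releases/latest")
  end
end
```

Routing all I/O through an injected API module is what makes the fail-closed model enforceable: a skill that bypasses it with a raw `:httpc` or `:gen_tcp` call is exactly what the AST scan is there to catch.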