by GetBindu
Bindu: Turn any AI agent into a living microservice - interoperable, observable, composable.
# Add to your Claude Code skills
git clone https://github.com/GetBindu/Bindu

Bindu (read: binduu) turns any AI agent into a production microservice. Build your agent in any framework — Agno, LangChain, OpenAI SDK, even plain TypeScript — call bindufy(), and get a service with DID identity, A2A protocol, OAuth2 auth, and crypto payments. No infrastructure code. No rewriting.
Works with Python, TypeScript, and Kotlin. Built on open protocols: A2A, AP2, and X402.
Before installing Bindu, ensure you have:
OPENROUTER_API_KEY, OPENAI_API_KEY, or MINIMAX_API_KEY in your environment variables. Free OpenRouter models are available for testing. MiniMax AI offers M2.7 with a 1M context window.

# Check Python version
uv run python --version # Should show 3.12 or higher
# Check UV installation
uv --version
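You can also confirm that at least one of the prerequisite API keys is set before going further. The helper below is illustrative, not part of Bindu; the key names come from the list above:

```python
import os

def find_model_keys() -> list[str]:
    """Return which of the supported model API keys are set.

    Key names are taken from the prerequisites above; this helper is
    illustrative and not part of the Bindu package.
    """
    keys = ["OPENROUTER_API_KEY", "OPENAI_API_KEY", "MINIMAX_API_KEY"]
    return [k for k in keys if os.getenv(k)]

found = find_model_keys()
print("Model keys found:", found or "none - set at least one before running an agent")
```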
On some Windows systems, git may not be recognized in Command Prompt even after installation due to PATH configuration issues.
If you face this issue, you can use GitHub Desktop as an alternative:
GitHub Desktop allows you to clone, manage branches, commit changes, and open pull requests without using the command line.
# Install Bindu
uv add bindu
# For development (if contributing to Bindu)
# Create and activate virtual environment
uv venv --python 3.12.9
source .venv/bin/activate # On macOS/Linux
# .venv\Scripts\activate # On Windows
uv sync --dev
| Issue | Solution |
|-------|----------|
| uv: command not found | Restart your terminal after installing UV. On Windows, use PowerShell |
| Python version not supported | Install Python 3.12+ from python.org |
| Virtual environment not activating (Windows) | Use PowerShell and run .venv\Scripts\activate |
| Microsoft Visual C++ required | Download Visual C++ Build Tools |
| ModuleNotFoundError | Activate venv and run uv sync --dev |
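After installation, a quick way to rule out the ModuleNotFoundError case is to check that the package is importable from the active environment. This is a generic check; `bindu` as the top-level module name is inferred from the import lines in the examples below:

```python
import importlib.util

def is_installed(module: str) -> bool:
    """Return True if `module` can be imported in the current environment."""
    return importlib.util.find_spec(module) is not None

# The examples below import from `bindu.penguin.bindufy`, so the top-level
# package should resolve once `uv add bindu` has run in the active venv.
print("bindu installed:", is_installed("bindu"))
```

If this prints `False`, activate your virtual environment and re-run `uv sync --dev` as the table suggests.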
Create your agent script my_agent.py:
import os
from bindu.penguin.bindufy import bindufy
from agno.agent import Agent
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.models.openai import OpenAIChat
# Define your agent
agent = Agent(
    instructions="You are a research assistant that finds and summarizes information.",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],
)

# Configuration
config = {
    "author": "your.email@example.com",
    "name": "research_agent",
    "description": "A research assistant agent",
    "deployment": {
        "url": os.getenv("BINDU_DEPLOYMENT_URL", "http://localhost:3773"),
        "expose": True,
    },
    "skills": ["skills/question-answering", "skills/pdf-processing"],
}

# Handler function
def handler(messages: list[dict[str, str]]):
    """Process messages and return the agent response.

    Args:
        messages: List of message dictionaries containing the conversation history.

    Returns:
        Agent response result.
    """
    result = agent.run(input=messages)
    return result
# Bindu-fy it
bindufy(config, handler)
# Use tunnel to expose your agent to the internet
# bindufy(config, handler, launch=True)

Your agent is now live at the URL configured in deployment.url.
Set a custom port without code changes:
# Linux/macOS
export BINDU_PORT=4000
# Windows PowerShell
$env:BINDU_PORT="4000"
When BINDU_PORT is set, it automatically overrides the port in examples that use http://localhost:3773.
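The override can be pictured as follows. This is a sketch of the behavior, not Bindu's actual implementation; the function name and resolution rules are illustrative:

```python
import os
from urllib.parse import urlsplit, urlunsplit

def resolve_deployment_url(default: str = "http://localhost:3773") -> str:
    """Sketch of how BINDU_PORT could override the deployment URL's port.

    Illustrative only - Bindu's real resolution logic may differ.
    """
    url = os.getenv("BINDU_DEPLOYMENT_URL", default)
    port = os.getenv("BINDU_PORT")
    if not port:
        return url
    # Swap in the overriding port while keeping the rest of the URL intact.
    parts = urlsplit(url)
    netloc = f"{parts.hostname}:{port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

os.environ["BINDU_PORT"] = "4000"
print(resolve_deployment_url())  # http://localhost:4000
```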
Same pattern, different language. Create index.ts:
import { bindufy } from "@bindu/sdk";
import OpenAI from "openai";
const openai = new OpenAI();
bindufy({
  author: "your.email@example.com",
  name: "research_agent",
  description: "A research assistant agent",
  deployment: { url: "http://localhost:3773", expose: true },
  skills: ["skills/question-answering"],
}, async (messages) => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: messages.map(m => ({
      role: m.role as "user" | "assistant" | "system",
      content: m.content,
    })),
  });
  return response.choices[0].message.content || "";
});
Run it:
npm install @bindu/sdk openai
npx tsx index.ts
The SDK launches the Bindu core automatically in the background. Your agent is live at http://localhost:3773 — same A2A protocol, same DID, same everything.
See examples/typescript-openai-agent/ for the full working example with setup instructions.
Try Bindu without setting up Postgres, Redis, or any cloud services. Runs entirely locally using in-memory storage and scheduler.
python examples/beginner_zero_config_agent.py
Smallest possible working agent:
import os
from bindu.penguin.bindufy import bindufy
def handler(messages):
    return [{"role": "assistant", "content": messages[-1]["content"]}]

config = {
    "author": "your.email@example.com",
    "name": "echo_agent",
    "description": "A basic echo agent for quick testing.",
    "deployment": {
        "url": os.getenv("BINDU_DEPLOYMENT_URL", "http://localhost:3773"),
        "expose": True,
    },
    "skills": [],
}
bindufy(config, handler)
# Use tunnel to expose your agent to the internet
# bindufy(config, handler, launch=True)
Run the agent:
# Start the agent
python examples/echo_agent.py
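The request in the curl example below can also be assembled in Python. This sketch mirrors the JSON-RPC envelope shown there; the `text` field inside the part and the `messageId` field are assumptions about the A2A message shape, not verified against Bindu's schema:

```python
import json
import uuid

def build_send_request(text: str) -> dict:
    """Build an A2A `message/send` JSON-RPC request body.

    The `text` field in the part and the generated `messageId` are
    assumptions about the message schema; check Bindu's docs for the
    exact shape.
    """
    return {
        "jsonrpc": "2.0",
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": uuid.uuid4().hex,
            }
        },
    }

print(json.dumps(build_send_request("Hello, agent!"), indent=2))
```

POST the resulting JSON to your agent's URL (http://localhost:3773 by default), as the curl example below does.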
Input:
curl --location 'http://localhost:3773/' \
--header 'Content-Type: application/json' \
--data '{
"jsonrpc": "2.0",
"method": "message/send",
"params": {
"message": {
"role": "user",
"parts": [
{
"kind": "text",