yomo

by yomorun


šŸ¦– Serverless AI Agent Framework with Geo-distributed Edge AI Infra.

1,881 stars · 140 forks · Go · Added 12/27/2025

AI Agents: a2a-protocol, chatgpt, claude-code, cursor, distributed-cloud, function-calling, geodistributedsystems, low-latency, mcp, openai, quic, realtime, serverless, stream-processing, yomo
Installation

```shell
# Add to your Claude Code skills
git clone https://github.com/yomorun/yomo
```
README.md
<p align="center"> <img width="200px" height="200px" src="https://blog.yomo.run/static/images/logo.png" /> </p>


YoMo is an open-source LLM Function Calling Framework for building scalable and ultra-fast AI Agents. šŸ’š We care about: Empowering Exceptional Customer Experiences in the Age of AI

We believe that seamless and responsive AI interactions are key to delivering outstanding customer experiences. YoMo is built with this principle at its core, focusing on speed, reliability, and scalability.

🌶 Features

| | Feature | Description |
| -- | ------- | ----------- |
| āš”ļø | Low-Latency MCP | Guaranteed by an implementation built atop the QUIC protocol, giving significantly faster communication between AI agents and the MCP server. |
| šŸ” | Enhanced Security | TLS v1.3 encryption is applied to every data packet by design, ensuring robust security for your AI agent communications. |
| šŸš€ | Strongly-Typed Languages | Build robust AI agents with confidence through type-safe function calling, enhanced error detection, and seamless integration. Type safety prevents runtime errors, simplifies testing, and enables IDE auto-completion. Currently supports TypeScript and Go. |
| šŸ“ø | Effortless Serverless DevOps | Streamlines the entire lifecycle of your LLM tools, from development to deployment, significantly reducing operational overhead so you can focus on building AI agent functionality. |
| šŸŒŽ | Geo-Distributed Architecture | Brings AI inference and tools closer to your users with a globally distributed architecture, yielding significantly faster response times and a superior user experience for your AI agents. |

šŸš€ Getting Started

Let's build a simple AI agent with LLM Function Calling to provide weather information:

Step 1. Install CLI

```shell
curl -fsSL https://get.yomo.run | sh
```

Verify the installation:

```shell
yomo version
```

Step 2. Start the server

Create a configuration file my-agent.yaml:

```yaml
name: my-agent
host: 0.0.0.0
port: 9000

auth:
  type: token
  token: SECRET_TOKEN

bridge:
  ai:
    server:
      addr: 0.0.0.0:9000 ## OpenAI API compatible endpoint
      provider: vllm     ## LLM provider to use

    providers:
      vllm:
        api_endpoint: http://127.0.0.1:8000/v1
        model: meta-llama/Llama-4-Scout-17B-16E-Instruct

      ollama:
        api_endpoint: http://localhost:11434
```
Launch the server:

```shell
yomo serve -c my-agent.yaml
```
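Because the AI bridge exposes an OpenAI-compatible endpoint, any OpenAI client can talk to it once the server is running. As a quick smoke test against a local deployment (the `/v1/chat/completions` path follows the OpenAI API convention, and `SECRET_TOKEN` comes from the config above; adjust both to your setup):

```shell
# Requires the server from the config above to be running on port 9000.
curl http://localhost:9000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer SECRET_TOKEN" \
  -d '{
    "model": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}]
  }'
```

The response should be a standard OpenAI-style chat completion object.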

Step 3. Implement LLM Function Calling

Create a type-safe function ...