by ZenGuard-AI
The fastest Trust Layer for AI Agents
# Add to your Claude Code skills
git clone https://github.com/ZenGuard-AI/fast-llm-security-guardrails<a href="https://docs.zenguard.ai/" target="_blank"><img src="https://img.shields.io/badge/docs-view-green" alt="Documentation"></a>
<a href="https://colab.research.google.com/github/ZenGuard-AI/fast-llm-security-guardrails/blob/main/docs/colabs/zenguard_library.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This repository is archived and read-only.
We believe that AI Agents are going to change the world. However, the general public still needs to be convinced that AI Agents are safe and secure. ZenGuard's goal is to build trust in AI Agents.
ZenGuard is a real-time trust layer for AI Agents. It protects AI agents during the runtime from prompt attacks, data leakage, and misuse. ZenGuard Trust Layer is built for production and is ready to be deployed in your business to ultimately increaset your company's success in the AI era.
Start by installing ZenGuard package:
Using pip:
pip install zenguard
Using poetry:
poetry add zenguard
Jump into our Quickstart Guide to easily integrate ZenGuard with your AI Agents.
Integration with LangChain <a href="https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/integrations/tools/zenguard.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open LangChain Integration in Colab" /></a>
Integration with [LlamaIndex](https://llamahu...