by HeadyZhang
Static security scanner for LLM agents — prompt injection, MCP config auditing, taint analysis. 49 rules mapped to OWASP Agentic Top 10 (2026). Works with LangChain, CrewAI, AutoGen.
# Add to your Claude Code skills
git clone https://github.com/HeadyZhang/agent-auditFind security vulnerabilities in your AI agent code before they reach production.
AI agents are not just chatbots. They execute code, call tools, and touch real systems, so one unsafe input path can become a production incident.
subprocess/eval and become command executionIf your team ships agent features, owns CI security gates, or operates MCP servers and tool integrations, this is a high-probability risk surface rather than an edge case. You likely need this before every merge if agent code can trigger tools, commands, or external systems.
Agent Audit catches these issues before deployment with an analysis core designed for agent workflows today: tool-boundary taint tracking, MCP configuration auditing, and semantic secret detection, with room to extend into learning-assisted detection over time.
Think of it as security linting for AI agents, with 53 rules mapped to the OWASP Agentic Top 10 (2026).
pip install agent-audit
agent-audit scan ./your-agent-project
# Show only high+ findings
agent-audit scan . --severity high
# Fail CI when high+ findings exist
agent-audit scan . --fail-on high
--severity controls what is reported. --fail-on controls when the command exits with code 1.
Sample report output:
╭──────────────────────────────────────────────────────────────────────────────╮
│ Agent Audit Security Report │
│ Scanned: ./your-agent-project │
│ Files analyzed: 2 │
│ Risk Score: 8.4/10 (HIGH) │
╰────────────...
No comments yet. Be the first to share your thoughts!