by splx-ai
A security scanner for your LLM agentic workflows
# Add to your Claude Code skills
git clone https://github.com/splx-ai/agentic-radarThe Agentic Radar is designed to analyze and assess agentic systems for security and operational insights. It helps developers, researchers, and security professionals understand how agentic systems function and identify potential vulnerabilities.
It allows users to create a security report for agentic systems, including:
The comprehensive HTML report summarizes all findings and allows for easy reviewing and sharing.
Agentic Radar includes mapping of detected vulnerabilities to well-known security frameworks 🛡️.
If you only care about visualization, try out the Agentic Visualizer.
It is a web-based tool that allows you to visualize agentic workflows in a user-friendly way.
No comments yet. Be the first to share your thoughts!
There are none! Just make sure you have Python (pip) installed on your machine.
pip install agentic-radar
# Check that it is installed
agentic-radar --version
Some features require extra installations, depending on the targeted agentic framework. See more below.
CrewAI extras are needed when using one of the following features in combination with CrewAI:
You can install Agentic Radar with extra CrewAI dependencies by running:
pip install "agentic-radar[crewai]"
[!WARNING] This will install the
crewai-toolspackage which is only supported on Python versions >= 3.10 and < 3.13. If you are using a different python version, the tool descriptions will be less detailed or entirely missing.
OpenAI Agents extras are needed when using one of the following features in combination with OpenAI Agents:
You can install Agentic Radar with extra OpenAI Agents dependencies by running:
pip install "agentic-radar[openai-agents]"
Agentic Radar now supports two main commands:
scanScan code for agentic workflows and generate a report.
agentic-radar scan [OPTIONS] FRAMEWORK:{langgraph|crewai|n8n|openai-agents|autogen}
Example:
agentic-radar scan langgraph -i path/to/langgraph/example/folder -o report.html
testTest agents in an agentic workflow for various vulnerabilities. Requires OPENAI_API_KEY set as environment variable.
agentic-radar test [OPTIONS] FRAMEWORK:{openai-agents} ENTRYPOINT_SCRIPT_WITH_ARGS
Example:
agentic-radar test openai-agents "path/to/openai-agents/example.py"
See more about this feature here.
Agentic Prompt Hardening automatically improves detected system prompts in your agentic workflow and displays them in the report. It transforms simple agent instructions into high-quality structured system prompts which follow best prompt engineering practices.
[!NOTE]
Currently supported frameworks (with more to come): OpenAI Agents, CrewAI, Autogen
It is quite straightforward to use:
Set your OPENAI_API_KEY environment variable by running export OPENAI_API_KEY=<api_key>.
Run Agentic Radar with the --harden-prompts flag, for example:
agentic-radar scan openai-agents --harden-prompts -i examples/openai-agents/
basic/lifecycle_example -o report.html
Agentic Radar now supports testing your agent workflows at runtime to identify critical vulnerabilities through simulated adversarial inputs.
This includes automated testing for:
Currently supported for:
The test command launches your agentic workflow with a test suite designed to simulate malicious or adversarial inputs. These tests are designed based on real-world attack scenarios aligned with the OWASP LLM Top 10.
[!NOTE]
This feature requires OPENAI_API_KEY or AZURE_OPENAI_API_KEY set as an environment variable. You can set it via command line or inside a .env file.
Test is run like:
agentic-radar test <framework> "<path/to/the/workflow/main.py any-necessary-args>"
For example:
agentic-radar test openai-agents "examples/openai-agents/basic/lifecycle_example.py"
The tool injects itself into the agentic workflow provided by user, detects necessary information and runs the prepared tests.
You can pass custom tests to Agentic Radar by creating a YAML file with the following structure:
include_default_tests: false
tests:
- name: TestA
input: Mention A.
success_condition: Attack succeeded if A is mentioned.
- name: TestB
input: Mention B.
success_condition: Attack succeeded if B is mentioned.
name is the name of the test, input is the input text to be passed to the agent, and success_condition is a description of what constitutes a