Overview
Prowler’s LLM provider enables comprehensive security testing of large language models using red team techniques. It integrates with promptfoo to provide extensive security evaluation capabilities.
Prerequisites
Before using the LLM provider, ensure the following requirements are met:
- promptfoo installed: The LLM provider requires promptfoo to be installed on the system
- LLM API access: Valid API keys for the target LLM models to test
- Email verification: promptfoo requires email verification for red team evaluations
Installation
Install promptfoo
Install promptfoo using one of the following methods:
Using npm:
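For example, a standard global install with npm (requires Node.js):

```bash
# Install promptfoo globally via npm
npm install -g promptfoo
```

Verify Installation
Confirm the CLI is available on the PATH:

```bash
# Print the installed promptfoo version
promptfoo --version
```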
Configuration
Step 1: Email Verification
promptfoo requires email verification for red team evaluations. Set the email address:
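As a sketch, assuming promptfoo's `config set` subcommand is available (check `promptfoo config --help` if it differs):

```bash
# Register an email address with promptfoo for red team evaluations
promptfoo config set email you@example.com
```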
Step 2: Configure LLM API Keys
Set up API keys for the target LLM models. For OpenAI (default configuration):
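For example, OpenAI credentials are read from the standard environment variable:

```bash
# Make the OpenAI API key available to promptfoo and Prowler
export OPENAI_API_KEY="sk-..."
```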
Step 3: Generate Test Cases (Optional)
Prowler provides a default suite of red team tests. To customize the test cases, generate them first:
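A sketch using promptfoo's red team generator; the paths are illustrative and the flags may vary by promptfoo version:

```bash
# Generate customized red team test cases from a promptfoo configuration
promptfoo redteam generate -c prowler/config/llm_config.yaml -o redteam.yaml
```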
Usage
Basic Usage
Run LLM security testing with the default configuration:
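Assuming the provider follows Prowler's usual `prowler <provider>` invocation, with `llm` as the provider name:

```bash
# Run the default LLM red team suite
prowler llm
```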
Custom Configuration
Use a custom promptfoo configuration file:
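For example (the path is illustrative):

```bash
# Use a custom promptfoo configuration instead of the default
prowler llm --config-file /path/to/custom_llm_config.yaml
```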
Output Options
Generate reports in various formats:
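A sketch assuming Prowler's standard output flags apply to the LLM provider:

```bash
# Write results as CSV, JSON-OCSF, and HTML to a custom directory
prowler llm --output-formats csv json-ocsf html --output-directory ./llm-results
```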
Concurrency Control
Adjust the number of concurrent tests:
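The option below is illustrative only; the exact flag name is an assumption, so check `prowler llm --help` for the supported concurrency setting:

```bash
# Limit the number of concurrent test executions (flag name is illustrative)
prowler llm --max-concurrency 4
```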
Default Configuration
Prowler includes a comprehensive default LLM configuration that provides:
- Target Models: OpenAI GPT models by default
- Security Frameworks:
    - OWASP LLM Top 10
    - OWASP API Top 10
    - MITRE ATLAS
    - NIST AI Risk Management Framework
    - EU AI Act compliance
- Test Coverage: Over 5,000 security test cases
- Plugin Support: Multiple security testing plugins
Advanced Configuration
Custom Test Suites
Create custom test configurations by modifying the promptfoo config file in prowler/config/llm_config.yaml, or pass a custom configuration with the --config-file flag:
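For example (file names are illustrative):

```bash
# Start from the default configuration and customize the test suite
cp prowler/config/llm_config.yaml my_llm_config.yaml

# Run the LLM provider with the customized configuration
prowler llm --config-file my_llm_config.yaml
```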