Added in: 5.8.0
Prowler Lighthouse AI integrates Large Language Models (LLMs) with Prowler security findings data.
Here’s what’s happening behind the scenes:
- The system uses a multi-agent architecture built with LangGraphJS for the LLM logic and the Vercel AI SDK UI for the frontend chatbot.
- It uses a “supervisor” architecture that delegates specialized tasks to different agents. For example, `findings_agent` can analyze detected security findings, while `overview_agent` provides a summary of connected cloud accounts.
- The system connects to the configured LLM provider to understand the user’s query, fetches the right data, and responds to the query.
Lighthouse AI supports multiple LLM providers including OpenAI, Amazon Bedrock, and OpenAI-compatible services. For configuration details, see Using Multiple LLM Providers with Lighthouse.
- The supervisor agent is the main point of contact: it is what users interact with directly in the chat interface, and it coordinates the other agents to answer questions comprehensively (a simplified sketch follows below).

All agents can only read relevant security data. They cannot modify your data or access sensitive information like configured secrets or tenant details.
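To make the supervisor pattern concrete, here is a minimal, hypothetical sketch of the same idea in LangGraphJS (TypeScript). It is not the Lighthouse AI source code: the agent and tool names, the prompt wording, and the Prowler API endpoint are illustrative assumptions, and the only tool exposed is read-only.

```typescript
// Illustrative sketch only — not the Lighthouse AI implementation.
// The API endpoint, env vars, and agent/tool names are assumptions.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o" });

// Read-only tool: fetch findings from a (hypothetical) Prowler API endpoint.
const getFindings = tool(
  async ({ severity }) => {
    const res = await fetch(
      `https://prowler.example.com/api/v1/findings?severity=${severity}`,
      { headers: { Authorization: `Bearer ${process.env.PROWLER_API_TOKEN}` } }
    );
    return JSON.stringify(await res.json());
  },
  {
    name: "get_findings",
    description: "Fetch security findings filtered by severity (read-only).",
    schema: z.object({
      severity: z.enum(["critical", "high", "medium", "low"]),
    }),
  }
);

// Specialized agent that analyzes findings.
const findingsAgent = createReactAgent({ llm, tools: [getFindings] });

// Expose the specialized agent to the supervisor as a tool it can delegate to.
const askFindingsAgent = tool(
  async ({ question }) => {
    const result = await findingsAgent.invoke({
      messages: [new HumanMessage(question)],
    });
    return String(result.messages[result.messages.length - 1].content);
  },
  {
    name: "findings_agent",
    description: "Delegate questions about security findings to the findings agent.",
    schema: z.object({ question: z.string() }),
  }
);

// The supervisor agent is the single entry point the chat UI talks to;
// it decides when to hand work off to specialized agents.
const supervisor = createReactAgent({ llm, tools: [askFindingsAgent] });

const reply = await supervisor.invoke({
  messages: [new HumanMessage("Summarize my critical findings.")],
});
console.log(String(reply.messages[reply.messages.length - 1].content));
```

In the product itself, the supervisor’s responses are streamed to the chat interface through the Vercel AI SDK UI rather than printed to a console.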
Set up
Getting started with Prowler Lighthouse AI is easy:
- Navigate to Configuration → Lighthouse AI
- Click Connect under the desired provider (OpenAI, Amazon Bedrock, or OpenAI Compatible)
- Enter the required credentials
- Select a default model
- Click Connect to save
For detailed configuration instructions for each provider, see Using Multiple LLM Providers with Lighthouse.
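If you are connecting an OpenAI-compatible service, it can help to verify the base URL, API key, and model name before saving them in Lighthouse AI. The snippet below is a hypothetical pre-check (not part of Prowler) that calls the standard OpenAI-compatible `/models` endpoint; the base URL and env var name are placeholders for your provider’s values.

```typescript
// Hypothetical credential pre-check for an OpenAI-compatible endpoint.
// Replace BASE_URL and LLM_API_KEY with your provider's actual values.
const BASE_URL = "https://api.my-llm-provider.example/v1"; // assumed value

const res = await fetch(`${BASE_URL}/models`, {
  headers: { Authorization: `Bearer ${process.env.LLM_API_KEY}` },
});

if (!res.ok) {
  throw new Error(`Credential check failed: HTTP ${res.status}`);
}

// Lists the model IDs the provider exposes; pick one of these as the
// default model when connecting the provider in Lighthouse AI.
const { data } = (await res.json()) as { data: { id: string }[] };
console.log(data.map((m) => m.id));
```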

Adding Business Context
The optional business context field lets you provide additional information to help Lighthouse AI understand your environment and priorities, including:
- Your organization’s cloud security goals
- Information about account owners or responsible teams
- Compliance requirements for your organization
- Current security initiatives or focus areas
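For example, a short note such as “We must meet PCI DSS; the payments accounts are owned by the platform team, and our current focus is removing publicly exposed storage” gives Lighthouse AI the context to prioritize its answers.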

