A real-time security testing platform for LLM applications, designed to help developers test and validate their AI security implementations against the OWASP Top 10 LLM vulnerabilities.
This tool helps developers:
- Test AI/LLM applications for security vulnerabilities
- Validate prompt injection defenses
- Analyze security boundaries
- Track and export security test results
- Understand common attack patterns
This tool specifically tests against the OWASP Top 10 vulnerabilities for Large Language Model Applications (a sketch of how these categories might be encoded as detection rules follows the list):
- LLM01: Prompt Injection (Critical)
  - System prompt leakage
  - Direct/indirect injection
  - Role-playing attacks
- LLM02: Sensitive Information Disclosure (Critical)
  - Training data exposure
  - API key leakage
  - PII disclosure
- LLM03: Supply Chain (High)
  - Model source verification
  - Training data integrity
  - Dependency security
- LLM04: Data and Model Poisoning (Critical)
  - Training manipulation
  - Fine-tuning attacks
  - Adversarial inputs
- LLM05: Improper Output Handling (Critical)
  - Code injection
  - XSS through responses
  - Malicious content
- LLM06: Excessive Agency (High)
  - Unauthorized actions
  - Scope expansion
  - Permission boundaries
- LLM07: System Prompt Leakage (Critical)
  - Instruction extraction
  - Configuration disclosure
  - Security control exposure
- LLM08: Vector/Embedding Weaknesses (High)
  - Embedding manipulation
  - Similarity attacks
  - Vector space exploitation
- LLM09: Misinformation (High)
  - False information generation
  - Fact manipulation
  - Source misattribution
- LLM10: Unbounded Consumption (High)
  - Resource exhaustion
  - Token limit abuse
  - Cost escalation
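The project's actual rule definitions are not shown in this README. As a rough illustration of how these categories could be encoded for pattern matching, here is a minimal sketch; the `owaspLlmRules` structure and every pattern below are hypothetical, and only three categories are shown:

```js
// Hypothetical rule table: each OWASP LLM category is represented by a set of
// regular expressions the analyzer can match against an incoming prompt.
// All patterns here are illustrative only, not the project's real rule set.
const owaspLlmRules = [
  {
    id: 'LLM01',
    name: 'Prompt Injection',
    severity: 'critical',
    patterns: [
      /ignore (all |any |the )?previous (instructions|rules)/i,
      /reveal (your|the) system prompt/i,
      /pretend (you are|to be)/i,
    ],
  },
  {
    id: 'LLM02',
    name: 'Sensitive Information Disclosure',
    severity: 'critical',
    patterns: [
      /api[_-]?key/i,
      /\b\d{3}-\d{2}-\d{4}\b/, // SSN-like pattern as a PII example
    ],
  },
  {
    id: 'LLM10',
    name: 'Unbounded Consumption',
    severity: 'high',
    patterns: [/repeat (this|the following) \d{3,} times/i],
  },
];

module.exports = { owaspLlmRules };
```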
- Real-time prompt security analysis
- Security score calculation
- Detailed risk assessment
- Pattern matching against known vulnerabilities
- Live security feedback
- Test history tracking
- Security statistics dashboard
- Dark/Light mode support
- Export test results
- OWASP Top 10 LLM checks
- Prompt injection detection
- Sensitive data scanning
- Resource usage monitoring
- Rate limiting
- Backend: Node.js + Express
- Frontend: HTML + CSS + JavaScript
- Security: Helmet, CORS, Rate Limiting
- Real-time: WebSocket
- Logging: Winston
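As a rough sketch of how this stack is typically wired together (not the project's actual `src/server.js`; the middleware options, port fallback, and message handling are assumptions):

```js
// Minimal sketch: Express + Helmet + CORS + rate limiting, a WebSocket server
// for live feedback, and Winston for logging. Options and values are illustrative.
const express = require('express');
const helmet = require('helmet');
const cors = require('cors');
const rateLimit = require('express-rate-limit');
const winston = require('winston');
const { WebSocketServer } = require('ws');

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});

const app = express();
app.use(helmet());                                  // secure HTTP headers
app.use(cors({ origin: process.env.CORS_ORIGIN })); // restrict cross-origin access
app.use(express.json());
app.use(rateLimit({ windowMs: 60_000, max: 100 })); // basic per-IP rate limiting

const server = app.listen(process.env.PORT || 3000, () => {
  logger.info('Server listening');
});

// Push real-time analysis results to connected clients.
const wss = new WebSocketServer({ server });
wss.on('connection', (socket) => {
  socket.on('message', (msg) => {
    logger.info(`Prompt received for analysis (${msg.toString().length} chars)`);
    // prompt analysis and scoring would happen here
  });
});
```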
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/llm-security-lab.git
  cd llm-security-lab
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Create a `.env` file:

  ```bash
  cp .env.example .env
  # Edit .env with your configuration
  ```

- Start the development server:

  ```bash
  npm run dev
  ```
- Push your code to GitHub.

- The repository includes a `render.yaml` for easy deployment:

  ```yaml
  services:
    - type: web
      name: security-docs-assistant
      env: node
      buildCommand: npm install
      startCommand: node src/server.js
  ```
- Deploy steps:
  - Go to dashboard.render.com
  - Click "New +"
  - Select "Web Service"
  - Connect your GitHub repository
  - Render will automatically detect the configuration

- Set up environment variables in the Render dashboard (an illustrative example follows the list):
  - NODE_ENV
  - PORT
  - JWT_SECRET
  - CORS_ORIGIN
  - OPENAI_API_KEY
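Whether set locally in `.env` or in the Render dashboard, these variables might look like the following; all values are placeholders:

```env
# Illustrative values only; never commit real secrets
NODE_ENV=production
PORT=3000
JWT_SECRET=replace-with-a-long-random-string
CORS_ORIGIN=https://your-frontend.example.com
OPENAI_API_KEY=sk-your-key-here
```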
- Pattern matching against known attack vectors
- Context boundary validation
- Role-based access control
- Sensitive data detection
- Content filtering
- Response sanitization
- Token usage monitoring
- Rate limiting
- Request validation
- Real-time security scoring
- Risk assessment
- Audit logging
- Test history tracking
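As an illustration of how pattern matching and real-time scoring like the above might be implemented, here is a minimal sketch; the module path, severity weights, and scoring scheme are assumptions, not the project's actual code:

```js
// Minimal sketch of pattern-based prompt analysis with a simple security score.
// The rule structure mirrors the hypothetical owaspLlmRules sketch earlier.
const { owaspLlmRules } = require('./owasp-rules'); // hypothetical module path

const SEVERITY_WEIGHTS = { critical: 30, high: 15 };

function analyzePrompt(prompt) {
  const findings = [];

  for (const rule of owaspLlmRules) {
    if (rule.patterns.some((pattern) => pattern.test(prompt))) {
      findings.push({ id: rule.id, name: rule.name, severity: rule.severity });
    }
  }

  // Start from a perfect score and subtract a weight for each matched category.
  const penalty = findings.reduce((sum, f) => sum + SEVERITY_WEIGHTS[f.severity], 0);
  const score = Math.max(0, 100 - penalty);

  return {
    score,                         // 0-100, higher is safer
    risks: findings,               // which OWASP LLM categories were triggered
    passed: findings.length === 0, // simple pass/fail for the statistics dashboard
    testedAt: new Date().toISOString(),
  };
}

module.exports = { analyzePrompt };
```

Keeping the result on a 0-100 scale is one straightforward way to drive a dashboard-style security score and risk list.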
The web interface provides:
- Prompt Testing Area
  - Input validation
  - Character counting
  - Template support
- Security Analysis
  - Real-time security score
  - Risk identification
  - Mitigation suggestions
- Statistics Dashboard
  - Total prompts tested
  - Security issues found
  - Success rate
  - Average response time
- History Management
  - Test history tracking
  - Export functionality
  - Clear history option
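As a rough sketch of how the frontend could consume live security feedback over the WebSocket connection (the message shape and element IDs are assumptions, not the project's actual protocol or markup):

```js
// Browser-side sketch of the live-feedback loop. The message shape
// ({ score, risks }) and the element IDs are illustrative only.
const socket = new WebSocket(`ws://${window.location.host}`);

function testPrompt(prompt) {
  // Send the prompt to the server for real-time analysis.
  socket.send(JSON.stringify({ type: 'analyze', prompt }));
}

socket.addEventListener('message', (event) => {
  const result = JSON.parse(event.data);

  // Update the security score and the list of identified risks.
  document.getElementById('security-score').textContent = `${result.score}/100`;
  document.getElementById('risk-list').innerHTML = result.risks
    .map((risk) => `<li>${risk.id}: ${risk.name}</li>`)
    .join('');
});
```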
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
MIT License - See LICENSE file for details