🌟Open Source Initiative

Building AI Security Together

Our commitment to the security community: free tools, open research, and collaborative development of solutions that make AI safer for everyone.

12.8k
GitHub Stars
420+
Contributors
2.3M+
Downloads

Our Open Source Projects

Explore our comprehensive collection of AI security tools, libraries, research, and community projects.

🛠️

Security Tools

Free command-line tools and utilities for AI security

vibeguard-cli

Open-source command-line scanner for AI security vulnerabilities

Active
TypeScript
2.4k
MIT License
Key Features
OWASP LLM Top 10 detection
Multiple language support
CI/CD integration ready
JSON/SARIF output formats

prompt-injection-detector

Lightweight library to detect prompt injection attempts in LLM inputs

Active
Python
1.8k
Apache-2.0 License
Key Features
Real-time detection
Multiple detection strategies
Configurable sensitivity
Zero dependencies

ai-security-scanner

Multi-language static analysis tool for AI/ML applications

Beta
Go
1.2k
MIT License
Key Features
Static code analysis
Dependency scanning
Model security checks
Custom rule engine
📚

Security Libraries

Developer libraries and SDKs for integrating AI security

llm-guard

Production-ready security middleware for LLM applications

Active
TypeScript
892
MIT License
Key Features
Input/output sanitization
Rate limiting and throttling
Audit logging
Express/FastAPI middleware

secure-rag

Security-first Retrieval Augmented Generation framework

Active
Python
743
Apache-2.0 License
Key Features
Vector database security
Query sanitization
Access control integration
Embeddings validation

ai-audit-logger

Comprehensive audit logging for AI systems

Active
Rust
456
MIT License
Key Features
Structured logging
Tamper-proof logs
Multiple backends
Privacy-preserving
🔬

Research & Datasets

Academic research, vulnerability datasets, and security benchmarks

llm-vulnerability-dataset

Comprehensive dataset of real-world LLM vulnerabilities

Active
Data
1.5k
CC BY-SA 4.0 License
Key Features
10,000+ vulnerability samples
OWASP LLM Top 10 coverage
Multiple languages
Continuously updated

ai-security-benchmarks

Standardized benchmarks for evaluating AI security tools

Active
Python
679
MIT License
Key Features
Standardized test suites
Performance metrics
Reproducible results
Tool comparison framework

prompt-injection-taxonomy

Comprehensive taxonomy and classification of prompt injection attacks

Active
Research
543
CC BY 4.0 License
Key Features
Attack classification
Defense categorization
Real-world examples
Research collaboration
👥

Community Projects

Community-driven tools and initiatives

awesome-ai-security

Curated list of AI security resources, tools, and papers

Active
Markdown
3.2k
CC0 License
Key Features
500+ curated resources
Weekly updates
Community contributions
Categorized listings

ai-security-workshops

Free workshops and training materials for AI security

Active
Educational
892
CC BY-SA 4.0 License
Key Features
Hands-on exercises
Video tutorials
Slide decks
Practice environments

security-code-patterns

Collection of secure coding patterns for AI applications

Active
Multi-language
567
MIT License
Key Features
Best practice examples
Anti-pattern warnings
Framework-specific guides
Code templates

Featured Contributors

Meet some of the amazing people who make our open source projects possible

👩‍🔬

Dr. Sarah Chen

Principal Researcher

Lead maintainer of LLM vulnerability dataset

🐙 View GitHub Profile
👨‍💻

Marcus Rodriguez

Security Engineer

Core contributor to prompt injection detector

🐙 View GitHub Profile
👨‍🔬

Alex Kim

Research Scientist

AI security benchmarks development

🐙 View GitHub Profile
👩‍🎓

Dr. Lisa Wang

Academic Researcher

Taxonomy and classification research

🐙 View GitHub Profile

Join Our Community

Help us build better AI security tools. Every contribution makes the AI ecosystem safer.

👩‍💻

Code Contributors

Submit bug fixes and improvements
Add new security detection rules
Support new languages/frameworks
Contributing Guide
🔬

Researchers

Contribute vulnerability datasets
Publish research findings
Collaborate on benchmarks
Research Community
📖

Documentation

Improve documentation
Write tutorials and guides
Translate content
Edit Documentation

💰 Sponsor Our Work

Help sustain open source AI security development. Your sponsorship supports research, tool development, and community programs.

Stay Updated

Get notified about new open source releases, research publications, and community events.

No spam, unsubscribe anytime. We respect your privacy.