
Guide
OWASP AIVSS Scoring System
The OWASP AIVSS v0.5 introduces a scoring system to assess security risks in agentic AI, combining traditional and AI-specific metrics to guide vulnerability evaluation and risk management.

Guide
AI Controls Matrix
Developed with industry experts, the AI Controls Matrix (AICM) modernizes the Cloud Controls Matrix (CCM) for AI with 18 domains, 243 controls, an AI-CAIQ, and mappings to major frameworks, and includes diagrams and examples to support rapid adoption.

Guide
Large Language Model Security Requirements for Supply Chain
The WDTA AI-STR-03 standard outlines a multi-layered framework to secure the LLM supply chain, emphasizing lifecycle protection, ML-BOM, Zero Trust, and continuous monitoring to ensure AI system integrity and reliability.

Guide
Top 10 Agentic AI Security Risks: Key Threats and Mitigation Strategies
Agentic AI is revolutionizing industries but brings new security risks. The independent "Agentic AI Top Threats" initiative identifies these threats, uniting experts across sectors for adaptive, industry-wide security solutions.

Guide
Generative AI: Proposed Shared Responsibility Model
Explore how Generative AI and Large Language Models (LLMs), like ChatGPT and Bard, are transforming enterprises with cloud platforms as the backbone. Learn how Microsoft Azure, Google Vertex AI, and Amazon SageMaker lead the charge in this AI-driven revolution.

Guide
Securing the Future of AI Agents
AI agents are transforming industries with GenAI-powered automation and innovation, but they pose unique security challenges. This GitHub guide addresses the Top 10 Security Risks for AI Agents, offering actionable mitigations for vulnerabilities such as memory manipulation and knowledge base poisoning, and helps organizations deploy AI agents safely and securely.

Guide
The AI Ethics Revolution—A Brief Timeline
This Medium article traces the evolution of AI ethics, from its historical roots through pivotal milestones such as early ethical frameworks and public controversies, to its shift from academia into mainstream awareness. It also explores current challenges and efforts toward responsible AI, offering a concise timeline of the field's ethical progression.

Guide
NIST AI Risk Management Framework - AI RMF
The NIST AI Risk Management Framework (AI RMF) guides organizations in managing AI risks, focusing on safe, trustworthy AI by addressing bias, security, and ethics. The flexible framework covers governance, risk assessment, mitigation, and continuous improvement, enabling organizations to deploy AI responsibly while aligning with best practices and regulations.