MATS
AI safety research & governance framework for responsible AI development.
MATS (ML Alignment & Theory Scholars) provides governance frameworks and safety-focused research for AI systems. Used by researchers, AI teams, and organizations building governance practices. Focuses on technical safety research and institutional best practices for AI alignment and responsible deployment.
Adjacent tooling.
AI Governance & Compliance (EY Global)
Enterprise AI governance and compliance framework aligned with EU AI Act requirements.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Atlan
Data lineage and governance for AI systems with policy enforcement.
Centraleyes
AI-powered risk register and policy management for EU AI Act compliance.
Certa
AI-driven third-party risk assessments and compliance management.