COMPAS Recidivism Risk Score Data and Analysis
Public dataset exposing bias in criminal risk assessment AI systems.
ProPublica's COMPAS dataset documents racial bias in algorithmic recidivism prediction, enabling researchers and auditors to evaluate AI fairness. Compliance teams, data scientists, and policymakers use it to assess algorithmic discrimination risk, and it provides real-world evidence for auditing high-stakes AI systems and for developing bias-detection methodologies.
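A typical audit on this dataset compares error rates across racial groups, for example the false positive rate: how often people who did not reoffend were flagged as high risk. The sketch below illustrates that check. The column names (`race`, `decile_score`, `two_year_recid`) follow ProPublica's `compas-scores-two-years.csv`, and the threshold of decile score >= 5 mirrors ProPublica's grouping of "medium/high" risk; the records shown are synthetic stand-ins, not real dataset rows.

```python
def false_positive_rate(rows, group):
    """FPR for one group: share of non-recidivists flagged as high risk.

    rows  -- list of dicts with keys race, decile_score, two_year_recid
    group -- value of the race column to audit
    """
    # True negatives + false positives: people who did not reoffend.
    negatives = [r for r in rows
                 if r["race"] == group and r["two_year_recid"] == 0]
    if not negatives:
        return 0.0
    # ProPublica treated decile scores 5-10 ("medium"/"high") as high risk.
    flagged = [r for r in negatives if r["decile_score"] >= 5]
    return len(flagged) / len(negatives)


# Synthetic illustrative records (NOT actual COMPAS data).
sample = [
    {"race": "African-American", "decile_score": 8, "two_year_recid": 0},
    {"race": "African-American", "decile_score": 6, "two_year_recid": 0},
    {"race": "African-American", "decile_score": 3, "two_year_recid": 0},
    {"race": "African-American", "decile_score": 2, "two_year_recid": 0},
    {"race": "African-American", "decile_score": 9, "two_year_recid": 1},
    {"race": "Caucasian", "decile_score": 7, "two_year_recid": 0},
    {"race": "Caucasian", "decile_score": 2, "two_year_recid": 0},
    {"race": "Caucasian", "decile_score": 3, "two_year_recid": 0},
    {"race": "Caucasian", "decile_score": 1, "two_year_recid": 0},
    {"race": "Caucasian", "decile_score": 10, "two_year_recid": 1},
]

for group in ("African-American", "Caucasian"):
    print(group, false_positive_rate(sample, group))
```

A gap between the two printed rates is the kind of disparity ProPublica's analysis reported; on the real dataset the same comparison is run over thousands of records rather than a toy sample.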
Adjacent tooling
AI Governance & Compliance (EY Global)
Enterprise AI governance and compliance framework aligned with EU AI Act requirements.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Centraleyes
AI-powered risk register and policy management for EU AI Act compliance.
Certa
AI-driven third-party risk assessments and compliance management.
Credo AI
Map AI initiatives to regulatory frameworks with compliance scoring.