Interpreting Machine Learning Models with the iml Package
R package for interpreting and explaining machine learning model predictions.
The iml package provides tools for model-agnostic interpretation of machine learning models through permutation feature importance, partial dependence plots, and Shapley values. Used by data scientists and AI auditors to understand model behavior and generate explanations. Supports transparency requirements in high-risk AI system documentation and compliance workflows.
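A minimal sketch of the workflow described above, using iml's `Predictor`, `FeatureImp`, `FeatureEffect`, and `Shapley` classes. The random forest model and the Boston housing data are illustrative choices, not part of the original description; any fitted model with a predict method would work.

```r
library(iml)
library(randomForest)

# Fit a model to interpret (example dataset and model, chosen for illustration)
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap the model and data in a model-agnostic Predictor object
X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Partial dependence plot for a single feature
pdp <- FeatureEffect$new(predictor, feature = "lstat", method = "pdp")
plot(pdp)

# Shapley values explaining one individual prediction
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)
```

Because everything goes through the `Predictor` wrapper, the same interpretation code applies unchanged to models from other frameworks, which is what makes the approach model-agnostic.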
Adjacent tooling:
AI Governance & Compliance (EY Global)
Enterprise AI governance and compliance framework aligned with EU AI Act requirements.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Centraleyes
AI-powered risk register and policy management for EU AI Act compliance.
Certa
AI-driven third-party risk assessments and compliance management.
Credo AI
Map AI initiatives to regulatory frameworks with compliance scoring.