What-If Tool (Google)
Interactive tool for testing and understanding ML model behavior and fairness.
The What-If Tool lets data scientists and ML engineers visually probe model predictions, test fairness across demographic groups, and explore feature importance with minimal code. Built by Google's PAIR research team, it runs inside TensorBoard, Jupyter, and Colab notebooks, supporting TensorFlow models natively and other model types through a custom prediction function. Organizations use it for bias detection, model debugging, and demonstrating AI decision transparency to stakeholders.
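While exploration in the tool itself is point-and-click, launching it in a notebook takes a few lines of setup. The sketch below follows the witwidget notebook workflow: it wraps test data as tf.Example protos and hands the tool a custom prediction function. The names `model` (any classifier with a `predict_proba` method), `X_test`, `y_test`, `feature_names`, and the label strings are assumptions for illustration, not part of the tool's API.

```python
# Minimal sketch: launching the What-If Tool in a Jupyter/Colab notebook.
# Assumes `model`, `X_test`, `y_test`, and `feature_names` already exist.
import numpy as np
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_examples(X, y, feature_names):
    """Pack a feature matrix and labels into tf.Example protos for WIT."""
    examples = []
    for row, label in zip(X, y):
        ex = tf.train.Example()
        for name, value in zip(feature_names, row):
            ex.features.feature[name].float_list.value.append(float(value))
        ex.features.feature["label"].int64_list.value.append(int(label))
        examples.append(ex)
    return examples

def predict_fn(examples):
    """WIT passes tf.Example protos; return one probability list per example."""
    rows = [[ex.features.feature[name].float_list.value[0]
             for name in feature_names] for ex in examples]
    return model.predict_proba(np.array(rows)).tolist()

examples = make_examples(X_test, y_test, feature_names)
config = (WitConfigBuilder(examples)
          .set_custom_predict_fn(predict_fn)
          .set_label_vocab(["rejected", "approved"]))  # hypothetical labels
WitWidget(config, height=720)  # renders the interactive tool inline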
Adjacent tooling.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Observability and guardrails platform for monitoring, testing, and safeguarding LLMs in production.
Lumenova AI
Enterprise platform automating AI governance, risk assessment, and fairness monitoring.
ModelOp
AI ethics platform for model monitoring, bias detection, and governance.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Sardine
AI risk management for fraud detection with governance oversight.