ML Explainability
26 vendors curated. Independent ranking, no paid placement.
Arize AI
Monitor LLM and ML model performance, detect drift, and debug issues in production.
LangSmith
Trace, debug, and monitor LLM applications for transparency and risk control.
Neuronpedia
Interpretability platform for understanding neural network behavior and safety.
A Living and Curated Collection of Explainable AI Methods
Curated reference collection of XAI methods for model transparency and interpretability.
Adversarial Model Analysis
Open-source toolkit for adversarial testing and model interpretability.
AI FactSheets 360 (IBM)
IBM research project for documenting AI models and services via FactSheets, supporting transparency, governance, and responsible development.
AI Snake Oil
Exposes AI hype and provides practical guidance for responsible AI deployment.
ALEPlot
R package for Accumulated Local Effects plots to interpret ML model predictions.
Debugging Machine Learning Models
Debug ML models to understand failures and improve transparency.
Distill
Online journal publishing interactive, visual explanations of machine learning research.
FairLearn
Open-source toolkit for detecting and mitigating AI model bias and fairness issues.
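As an illustration of what this entry covers, here is a minimal sketch of measuring group fairness with Fairlearn on synthetic data; the data-generation choices below are assumptions made for the example, not anything Fairlearn prescribes.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, demographic_parity_difference

    # Hypothetical synthetic data: a binary sensitive attribute and three features.
    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, 3))
    y = (x[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

    clf = LogisticRegression().fit(np.column_stack([x, group]), y)
    y_pred = clf.predict(np.column_stack([x, group]))

    # Accuracy broken down by group, plus the demographic parity gap
    # (0 would mean equal selection rates across groups).
    mf = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=y_pred,
                     sensitive_features=group)
    print(mf.by_group)
    print(demographic_parity_difference(y, y_pred, sensitive_features=group))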
Fairness and Machine Learning: Limitations and Opportunities
Free textbook and resource guide on fairness limitations in machine learning systems.
Getting a Window into your Black Box Model
Tutorial on generating reason codes to explain individual predictions of black-box models, illustrated with NFL data.
IML
Open-source ML interpretability library for understanding model decisions.
Interpretable Machine Learning using Counterfactuals
Explains model predictions through counterfactual examples: minimal input changes that would flip the model's decision.
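To make the idea concrete, here is a toy sketch of counterfactual search: greedily nudge one feature of a single instance until the model's prediction flips. It illustrates the concept only, not the method from the resource above; the chosen feature and step size are arbitrary assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    clf = LogisticRegression().fit(X, y)

    # Start from one instance and nudge a single (hypothetically chosen) feature
    # toward the decision boundary until the predicted class flips.
    x0 = X[0].copy()
    original = clf.predict([x0])[0]
    feature, step = 0, 0.05
    target_sign = -1 if original == 1 else 1           # push the decision function toward the other class
    direction = target_sign * np.sign(clf.coef_[0, feature])

    cf = x0.copy()
    for _ in range(1000):
        cf[feature] += direction * step
        if clf.predict([cf])[0] != original:
            break

    print("original class:", original, "counterfactual class:", clf.predict([cf])[0])
    print("change required in feature", feature, ":", cf[feature] - x0[feature])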
Interpreting Machine Learning Models with the iml Package
R package for interpreting and explaining machine learning model predictions.
Introduction to Responsible Machine Learning
Educational framework for building interpretable, fair, and accountable ML systems.
MadryLab
Adversarial robustness research lab advancing AI security and trustworthiness.
Partial Dependence Plots in R
R package for interpreting model predictions through partial dependence visualization.
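The entry above refers to an R package; as a rough Python-side analogue (an assumption on my part, not the package itself), a partial dependence plot can be sketched with scikit-learn.

    import matplotlib.pyplot as plt
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import PartialDependenceDisplay

    X, y = make_regression(n_samples=500, n_features=5, random_state=0)
    model = GradientBoostingRegressor().fit(X, y)

    # Average prediction as feature 0 varies, marginalizing over the other features.
    PartialDependenceDisplay.from_estimator(model, X, features=[0])
    plt.show()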
ResponsibleAI
Open-source toolkit for responsible AI development and model explainability.
TensorBoard Projector
Interactive tool for visualizing high-dimensional embeddings via projections such as PCA and t-SNE.
Tracing the thoughts of a large language model
Anthropic interpretability research on tracing the internal computations behind a language model's outputs.
What-If Tool (Google)
Interactive tool for testing and understanding ML model behavior and fairness.
Fiddler AI
Monitor model drift, detect bias, and explain ML/LLM decisions in production.
Model Transparency Ratings
Ratings system for AI model transparency and accountability across deployments.
SynthID-Text
Watermark AI-generated text for transparency and provenance verification.