Bias & Fairness Testing

33 vendors curated. Independent ranking, no paid placement.

Lumenova AI

Enterprise platform automating AI governance, risk assessment, and fairness monitoring.

enterprise

ModelOp

AI ethics platform for model monitoring, bias detection, and governance.

enterprise

Sardine

AI risk management for fraud detection with governance oversight.

enterprise

Evidently AI

ML monitoring and testing platform for model performance, bias, and data drift.

freemium

Adversarial Model Analysis

Open-source toolkit for adversarial testing and model interpretability.

free

AI Badness: An open catalog of generative AI badness

Open catalog documenting generative AI failure modes and risks.

free

AI FactSheets 360 (IBM)

IBM methodology and resources for documenting AI models to improve transparency, governance, and accountability.

free

AI Snake Oil

Book and newsletter debunking AI hype, with practical guidance for responsible AI deployment.

free

AI Vulnerability Database

Open database and tools for identifying and managing AI vulnerabilities and risks.

free

COMPAS Recidivism Risk Score Data and Analysis

ProPublica's public dataset and analysis exposing racial bias in the COMPAS criminal risk assessment tool.

free
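The core of the COMPAS debate was comparing error rates across groups, not just overall accuracy. A minimal sketch of that per-group false-positive-rate comparison, using made-up data (not the actual dataset):

```python
# Toy sketch of the per-group false-positive-rate comparison central to
# the COMPAS analysis: among people who did NOT reoffend, how often was
# each group labeled high-risk? (Made-up data, not the real dataset.)

def false_positive_rate(y_true, y_pred):
    """FPR: share of true negatives that received a positive label."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    return sum(p for _, p in negatives) / len(negatives)

def fpr_by_group(y_true, y_pred, group):
    """Compute the false positive rate separately for each group."""
    rates = {}
    for v in set(group):
        idx = [i for i, g in enumerate(group) if g == v]
        rates[v] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return rates

y_true = [0, 0, 0, 0, 0, 0, 0, 0]   # nobody actually reoffended
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]   # high-risk labels assigned
group = ["x", "x", "x", "x", "y", "y", "y", "y"]
print(sorted(fpr_by_group(y_true, y_pred, group).items()))
# [('x', 0.5), ('y', 0.25)]
```

Equal accuracy can coexist with unequal error rates, which is why this breakdown, rather than a single aggregate score, drove the analysis.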

Fairlearn

Open-source toolkit for detecting and mitigating AI model bias and fairness issues.

free
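Group-fairness metrics of the kind Fairlearn computes boil down to comparing selection rates across sensitive groups. A dependency-free sketch of the demographic parity difference (Fairlearn exposes an analogous metric as `fairlearn.metrics.demographic_parity_difference`, with a different signature; the helpers below are illustrative only):

```python
# Dependency-free sketch of demographic parity difference, one of the
# group-fairness metrics toolkits like Fairlearn compute. Illustrative
# only; Fairlearn's API takes y_true, y_pred, and sensitive_features.

def selection_rate(y_pred, group, value):
    """Fraction of positive predictions within one sensitive group."""
    preds = [p for p, g in zip(y_pred, group) if g == value]
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, group):
    """Largest gap in selection rates across groups (0 means parity)."""
    rates = [selection_rate(y_pred, group, v) for v in set(group)]
    return max(rates) - min(rates)

# Toy example: group "a" is selected 75% of the time, group "b" 25%.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group))  # 0.5
```

A value near 0 indicates similar selection rates across groups; toolkits like this one also offer mitigation algorithms that retrain or post-process models to shrink the gap.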

Fairness and Machine Learning: Limitations and Opportunities

Free textbook and resource guide on fairness limitations in machine learning systems.

free

Have I Been Trained?

Check whether your images or other content were used to train AI models without consent.

free

Interpretable Machine Learning using Counterfactuals

Explainable AI through counterfactual examples for model transparency.

free
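Counterfactual explanations answer the question "what is the smallest change to this input that would flip the model's decision?" A toy sketch for a linear scoring model (the weights, features, and threshold below are all made up for illustration):

```python
# Toy counterfactual search on a made-up linear "loan" model: nudge one
# feature until the decision (score >= 0) flips. Illustrative only.

def score(features, weights, bias):
    """Linear model score; decision is 'approve' when score >= 0."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def counterfactual(features, weights, bias, i, step=0.1, max_steps=1000):
    """Move feature i in the score-raising direction until approval."""
    cf = list(features)
    direction = 1 if weights[i] > 0 else -1
    for _ in range(max_steps):
        if score(cf, weights, bias) >= 0:
            return cf
        cf[i] += direction * step
    return None  # no flip found within max_steps

weights, bias = [2.0, -1.0], -1.0    # features: [income, debt]
applicant = [0.4, 0.5]               # score = -0.7 -> denied
cf = counterfactual(applicant, weights, bias, i=0)
print(round(cf[0], 2))  # 0.8 -> raising income to ~0.8 flips the decision
```

The counterfactual doubles as a recourse statement ("approval would require income of roughly 0.8") and as a transparency check, since implausible counterfactuals often signal a model relying on the wrong features.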

Introduction to Responsible Machine Learning

Educational framework for building interpretable, fair, and accountable ML systems.

free

MadryLab

MIT research lab advancing adversarial robustness, AI security, and trustworthiness.

free

OWASP AI Testing Guide

Open-source testing methodology for AI security, bias, and compliance risks.

free

RAI Toolkit

Open-source toolkit for responsible AI development and bias assessment.

free

RealToxicityPrompts (Allen Institute for AI)

Dataset for testing language models against toxic outputs and unsafe behavior.

free

Responsible AI Institute

Nonprofit providing frameworks and assessments for building and auditing responsible AI systems.

free

ResponsibleAI

Open-source toolkit for responsible AI development and model explainability.

free

Trust-LLM-Benchmark Leaderboard

Benchmark suite evaluating LLM trustworthiness across safety, fairness, and robustness.

free

What-If Tool (Google)

Interactive tool for testing and understanding ML model behavior and fairness.

free

AI Ethics Lab

AI ethics framework and governance tools for responsible AI deployment.

unknown

FairNow

Continuous fairness monitoring and bias remediation for high-stakes AI systems.

unknown

Fiddler AI

Monitor model drift, detect bias, and explain ML/LLM decisions in production.

unknown

GEM

Benchmark suite for evaluating AI model risks and bias across governance frameworks.

unknown

Holistic AI

AI governance platform auditing systems against regulatory frameworks.

unknown

Knostic

AI governance platform with bias detection and compliance dashboards.

unknown

OSD Bias Bounty

Crowdsourced bias detection for AI systems through structured bounty programs.

unknown

Redwood Research

AI safety research lab building tools for measuring and improving model alignment.

unknown

SolasAI

Detect algorithmic bias and ensure fairness compliance in AI decisions.

unknown

SynthID-Text (Google DeepMind)

Watermark AI-generated text for transparency and provenance verification.

unknown