Redwood Research
AI safety research lab building tools for measuring and improving model alignment.
Redwood Research conducts AI safety research and develops tools for evaluating and improving language model behavior, focusing on mechanistic interpretability, model auditing, and alignment testing. It is used by ML teams seeking deeper insight into model risks and behavioral validation, and it emphasizes rigorous evaluation methodologies over compliance checklists.
Adjacent tooling.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Dataiku EU AI Act Readiness
Platform helping organizations assess and manage EU AI Act compliance risks.
DataRobot
Real-time AI governance, monitoring, and compliance platform for enterprises.
Earthian AI
Enterprise risk management platform purpose-built for AI systems.
IBM watsonx.governance
Unified AI governance platform for model lifecycle management and compliance tracking.