Elevate

How an AI Bias Audit Wasn’t Enough to Avoid Litigation Risk: The Workday Story

In May 2025, a California federal court certified a nationwide collective action alleging Workday’s AI hiring tools systematically discriminated against applicants over 40. Lead plaintiff Derek Mobley, rejected from over 100 roles via Workday’s platform, argued the AI screening software disproportionately filtered out older candidates.

Workday AI Discrimination Lawsuit: Lessons in AI Governance and AI Risk Management 

This lawsuit reflects widespread concerns about AI bias in hiring practices that extend far beyond Workday. The American Civil Liberties Union has warned that AI hiring tools “pose an enormous danger of exacerbating existing discrimination in the workplace.”

  • Inherent Bias Amplification: AI systems trained on historical hiring data often perpetuate existing demographic biases. Workday’s tools allegedly learned from past recruitment patterns that disadvantaged older applicants, embedding age-based discrimination into algorithmic decisions.
  • Opacity in Decision-Making: Many AI hiring tools operate as “black boxes,” lacking transparency into how candidates are scored or rejected. Plaintiffs argued Workday’s system provided no explanation for rejections, making bias detection nearly impossible. Workday could arguably have added reasoning explanations to its process, but such explanations would need to be thoroughly tested and, for the foreseeable future, paired with a human in the loop throughout the reasoning and response process.
  • Third-Party Liability Risks: Companies using third-party AI tools, like Workday’s platform, face unforeseen legal exposure. The court’s expanded interpretation of “employer” status means vendors and clients alike may share responsibility for discriminatory outcomes. 
  • Regulatory Scrutiny Escalation: The Equal Employment Opportunity Commission (EEOC) filed an amicus brief supporting the lawsuit, signaling intensified regulatory focus on AI fairness. Non-compliance with evolving standards (e.g., EU AI Act) risks penalties and reputational harm. 

It is worth noting that Workday did conduct AI bias audits and followed the practices indicated by the EEOC. However, because there is no separate “clean” test dataset against which the HR hiring system’s outputs can be compared, these audits rely on Adverse Impact analysis (the four-fifths rule): each protected group’s selection rate is compared against the selection rate of the most-selected group, and a ratio below 80% signals potential adverse impact.
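The four-fifths rule described above is simple enough to sketch in code. The group names and hiring counts below are hypothetical examples, not data from Workday’s audits:

```python
# Illustrative sketch of the four-fifths (adverse impact) rule used in
# AI bias audits. All group names and counts here are hypothetical.

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    groups maps group name -> (selected, applicants).
    A resulting ratio below 0.8 flags potential adverse impact
    under the four-fifths rule.
    """
    rates = {name: sel / apps for name, (sel, apps) in groups.items()}
    top_rate = max(rates.values())
    return {name: rate / top_rate for name, rate in rates.items()}

# Hypothetical audit data: (hired, applied) per age group
audit = {"under_40": (60, 200), "over_40": (20, 100)}

ratios = adverse_impact_ratios(audit)
flags = {group: ratio < 0.8 for group, ratio in ratios.items()}
# Here over_40's selection rate (20%) is 2/3 of under_40's (30%),
# below the 0.8 threshold, so it would be flagged.
```

The limitation noted above applies directly: this test only compares outcome rates between groups in the data the system actually produced; without an independent, bias-free benchmark dataset, a passing ratio does not prove the underlying selection criteria are unbiased.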

See the results of these audits directly on Workday’s website: https://www.workday.com/en-us/legal/responsible-ai-and-bias-mitigation.html. However, it is worth noting that the New York law governing these HR bias audits focuses on race and gender, not age, which is the plaintiff’s main allegation of discrimination (against applicants over 40).

Key Takeaways

Whether or not Workday can prove that its tool had no embedded bias in its selection criteria, the case raises additional questions of responsibility for companies that rely on these tools to speed up their hiring processes. Are you performing effective AI governance reviews of your technology vendors and their practices? Who is responsible for what when using a third-party tool can increase litigation risk? Do you work for a technology company that embeds AI in its product?

Effective governance and vendor reviews demonstrate due care and due diligence in addressing current and future AI risks. Existing compliance frameworks, such as ISO 27001 for information security, and emerging standards like ISO 42001 for AI management systems, help organizations align AI governance with broader risk management strategies and ensure consistent application of best practices across all technology deployments.

How Elevate Can Help

We understand that navigating AI governance complexities requires specialized expertise. Our blend of AI development and cybersecurity expertise can help you build and test AI governance controls that mitigate litigation and operational risks related to AI. We can perform bias and transparency audits, and provide guidance and templates to set up effective AI governance and AI risk management practices.