  April 16, 2026

    Legal Challenges to AI in Hiring: FCRA Has Entered the Chat

    For the past several years, the use of AI hiring tools has been a hot topic, along with legal challenges to these tools alleging that their algorithms result in discriminatory outcomes. Kate O’Malley discusses a newly filed class action that takes a different approach to challenging AI hiring tools by alleging violations of the Fair Credit Reporting Act (FCRA).

    By Katherine M. O'Malley

    The use of artificial intelligence in candidate screening and hiring processes has been a “hot” employment issue for the past several years.

    This is driven, in part, by the rapid and widespread adoption of these tools by employers: in March 2025, a Forbes article declared that “[t]he world is on the verge of a seismic shift in how talent is hired, one that will redefine the fabric of work itself.”

    That same article included Gallup survey statistics showing that, as of March 2025, 93% of Fortune 500 chief human resource officers were already integrating AI tools into their HR processes.

    Kate O’Malley, Marquette 2014, is senior compliance counsel at Accruent in Stevens Point, where she leads the company’s global incident reporting and investigation program.

    Legal Challenges Under Employment Discrimination Frameworks and the Mobley Case

    The widespread integration of AI into employers’ hiring processes has led to claims that AI algorithms are resulting in discriminatory outcomes.

    Mobley v. Workday, Inc., filed in the U.S. District Court for the Northern District of California in 2023, illustrates the typical AI hiring discrimination claims being litigated over the past few years. Derek Mobley, an African American man over the age of 40, claimed that he applied to dozens of jobs with employers who use Workday’s AI hiring tools, and was rejected each time. Mobley claimed that sometimes he was rejected within hours or minutes of applying, which suggested that Workday’s automated AI, and not a human representative of the employers, was responsible for the rejection decision.

    Mobley’s case alleges that Workday’s “smart” AI tools deprioritized him based on his protected characteristics and automatically incorporated prior employer bias. Mobley claims that the allegedly biased AI algorithm not only impacted him, but also a broad class of applicants, bringing claims under the Age Discrimination in Employment Act, Title VII of the Civil Rights Act of 1964, and the Americans with Disabilities Act.

    In July 2024, the court held that a provider of AI-driven applicant screening tools, like Workday, could be considered an “agent” of its clients (i.e., the employers seeking to hire the candidates) and therefore could be liable for discrimination.

    The court previously granted preliminary collective certification, allowing applicants until March 7, 2026, to opt in to the defined class.

    A New Approach: Eightfold’s FCRA Claim

    While early cases like Mobley focused on discriminatory outcomes, newer litigation is targeting the underlying data practices themselves.

    In January 2026, a class action filed in California’s Contra Costa County Superior Court took aim at AI hiring tools in a new way: instead of alleging that the AI’s algorithm was biased, the plaintiffs allege that the algorithm scraped personal data and used that data to assign job applicants a “score” without providing the disclosures required under the Fair Credit Reporting Act.

    Eightfold AI is an AI hiring platform used by many employers, including Microsoft, PayPal, Starbucks, Chevron, and others. In Kistler et al. v. Eightfold AI, Inc., the named plaintiffs alleged that they applied to jobs through URLs containing “Eightfold.AI,” but never advanced further in the interview process. The plaintiffs allege that the AI tool scraped online data beyond what candidates submitted and then used that data to generate a numerical “Match Score” between 0 and 5. The plaintiffs allege that candidates were ranked according to these scores, and those with lower ratings were filtered out by the AI before a human ever reviewed the application.

    The complaint alleges that, by assembling and evaluating personal data to generate a hiring-related report, Eightfold was acting as a consumer reporting agency under FCRA, and that the “Match Scores” constitute consumer reports used for employment purposes.

    Plaintiffs allege that applicants were never told that their data was being compiled, and they were never given copies of the reports or a chance to dispute any errors, in violation of FCRA’s certification, notification, disclosure, authorization, and dispute requirements. Eightfold denies the allegations and has stated that it “operates on data intentionally shared by candidates or provided by our customers.”

    The Eightfold theory of liability is noteworthy because, if the plaintiffs can establish that FCRA applies to Eightfold’s applicant scoring algorithm, they may bypass the higher bar of proving bias under traditional discrimination laws. However, the court has yet to determine whether consumer reporting statutes apply to AI-based hiring tools at all, and whether such AI vendors are assembling or evaluating information “for the purpose of furnishing consumer reports to third parties,” as FCRA requires.

    Employers' Takeaways

    As these cases continue to progress, employers should consider the following:

    • Review your vendor contracts. Employers leveraging AI tools in their hiring processes should review and consider re-papering their vendor contracts to ensure that there is transparency on data sources, and that potential legal exposure is reflected in the contract.
    • Thoroughly vet potential vendors. For new AI vendors, consider asking about (1) what data their tool relies on, (2) where that data is sourced from, and (3) whether the tool is generating a score or ranking that could be used to make a hiring decision.
    • Inventory the AI tools you are currently using. Many companies do not have a comprehensive view of which roles are using AI screening tools for their hiring processes, which vendors provide those tools, or what data sources are being leveraged in connection with the process. Establishing what AI tools are currently being used, and how those tools work, is a key first step.

    As litigation evolves, employers should expect scrutiny not only of outcomes, but of the data and processes underlying AI-driven hiring decisions.

    This article was originally published on the State Bar of Wisconsin’s Labor & Employment Law Section Blog. Visit the State Bar sections or the Labor & Employment Law Section webpages to learn more about the benefits of section membership.







    Labor & Employment Law Section Blog is published by the State Bar of Wisconsin; blog posts are written by section members. To contribute to this blog, contact Jessica Simons and review Author Submission Guidelines. Learn more about the Labor & Employment Law Section or become a member.

    Disclaimer: Views presented in blog posts are those of the blog post authors, not necessarily those of the Section or the State Bar of Wisconsin. Due to the rapidly changing nature of law and our reliance on information provided by outside sources, the State Bar of Wisconsin makes no warranty or guarantee concerning the accuracy or completeness of this content.

    © 2026 State Bar of Wisconsin, P.O. Box 7158, Madison, WI 53707-7158.


