Secret AI Scores Decide Your Job Fate
Eightfold AI’s Unregulated Dossier Machine Exposed
Erin Kistler, a seasoned product manager with 19 years of experience, applied for senior roles at PayPal in December 2025. Sruti Bhaumik, a technical program manager with a master’s degree, sought positions at Microsoft. Both were qualified. Both were rejected. Neither ever spoke to a human recruiter. Instead, an artificial intelligence platform called Eightfold AI had already assigned them a “Match Score” based on billions of data points scraped from the internet, job boards, and even their own application histories. They never consented to the evaluation. They never saw the report. And under federal law, that process may be illegal.
A class action complaint filed in California Superior Court on January 20, 2026, alleges that Eightfold AI Inc., a Santa Clara-based technology firm, has been operating as an unregistered consumer reporting agency for years, assembling secret dossiers on millions of job applicants and selling “likelihood of success” scores to corporate giants including Microsoft, PayPal, Morgan Stanley, Starbucks, Chevron, and Bayer. The lawsuit claims Eightfold’s AI-powered “Talent Intelligence Platform” violates the Fair Credit Reporting Act (FCRA) and California’s Investigative Consumer Reporting Agencies Act (ICRAA) by failing to provide mandatory disclosures, obtain proper consent, or allow applicants to review and dispute the information used against them.
How the Black Box Works
Eightfold markets its platform as the “world’s largest, self-refreshing source of talent data,” trained on more than 1.5 billion global data points, including over one million job titles, one million skills, and profiles of more than one billion workers. When a candidate applies to a job through an employer that uses Eightfold, the system immediately begins assembling a dossier from multiple sources: the applicant’s résumé, public professional profiles (LinkedIn, GitHub, Stack Overflow, Crunchbase), device activity, cookies, location data, and even inferences drawn from behavior patterns. That data is then fed into a proprietary large language model (LLM) that generates a “Match Score” on a scale of 0 to 5, ranking candidates against one another based on predicted “culture fit,” future career trajectory, and other opaque criteria.
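The scoring process described above can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: the field names, the weights, and the blend of signals are hypothetical, and nothing below reflects Eightfold's actual, proprietary system. The point is structural: multiple data sources, some never submitted by the applicant, are collapsed into a single number the applicant cannot inspect.

```python
# Purely illustrative toy model of an opaque "match score" pipeline.
# All field names, weights, and the 0-5 scale mapping are invented;
# nothing here reflects Eightfold's actual system.
from dataclasses import dataclass

@dataclass
class CandidateDossier:
    resume_skills: set[str]      # parsed from the application itself
    scraped_skills: set[str]     # e.g. public profiles the applicant never submitted
    inferred_trajectory: float   # 0.0-1.0 "future potential" guess from behavior data

def match_score(dossier: CandidateDossier, required_skills: set[str]) -> float:
    """Collapse a multi-source dossier into a single 0-5 score."""
    known = dossier.resume_skills | dossier.scraped_skills
    skill_fit = len(known & required_skills) / max(len(required_skills), 1)
    # The weighting of signals is invisible to the applicant: they see
    # neither the inputs nor how the inputs are combined.
    raw = 0.7 * skill_fit + 0.3 * dossier.inferred_trajectory
    return round(raw * 5, 1)

applicant = CandidateDossier(
    resume_skills={"product management", "sql"},
    scraped_skills={"roadmapping"},  # harvested, not consented to
    inferred_trajectory=0.4,
)
print(match_score(applicant, {"product management", "sql", "roadmapping", "ml"}))
```

Even in this toy version, the core complaint is visible: a candidate who never supplied the "scraped" data or the trajectory inference has no way to verify, correct, or dispute the inputs that produced the score.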
- 1.5+ billion global data points powering Eightfold’s AI
- 100+ corporate clients including Microsoft, PayPal, BNY, Chevron
- 38% of large companies deploy AI to match and rank job applicants
- 0 meaningful opportunity for applicants to review or dispute their scores
According to the complaint, Eightfold’s system is designed to operate invisibly. “Job applicants have no opportunity to view any of the third-party data or to correct inaccuracies in these reports,” the filing states. A low Match Score can mean an application is discarded before a human ever sees it, and applicants are never told why they were filtered out. The process echoes the exact harms Congress sought to prevent when it passed the FCRA in 1970: large-scale, computerized decision-making based on secret, unverified information.
The Plaintiffs’ Stories: Qualified, Scored, and Rejected
Erin Kistler holds a computer science degree from Ohio State and nearly two decades of product management experience across government, media, and tech. In December 2025, she applied for two senior product manager roles at PayPal using an online portal that redirected to “eightfold.ai/careers.” She never received a standalone disclosure informing her that a consumer report would be generated, nor did she authorize Eightfold to collect data beyond her résumé. She was not interviewed and received no job offer. PayPal, the suit alleges, uses Eightfold’s Evaluation Tools to score candidates, and Kistler believes her application was filtered out by the algorithm.
Sruti Bhaumik, a Walnut Creek resident with a master’s degree and over ten years of project management experience, applied to Microsoft in July and December 2025. Both applications required her to sign in through an “eightfold.ai” domain. Two days after her December submission, she received an automated rejection. She never saw the data Eightfold assembled about her, never received a summary of her consumer rights, and was given no chance to correct any inaccuracies in her consumer file. “I had no idea my entire professional footprint was being mined and judged by a machine I couldn’t question,” Bhaumik said in a statement through counsel.
Regulators Have Warned: AI Hiring Must Follow the Law
The Consumer Financial Protection Bureau (CFPB) issued guidance in 2024 specifically addressing AI-powered employment screening tools. The agency stated that an entity “could ‘assemble’ or ‘evaluate’ consumer information within the meaning of the term ‘consumer reporting agency’ if the entity collects consumer data in order to train an algorithm that produces scores or other assessments about workers for employers.” The CFPB also emphasized that third parties providing scores derived from public data or pooled employer records are subject to FCRA requirements.
Eightfold, the complaint alleges, ignores these requirements. The company does not obtain certifications from employers that they have provided FCRA-mandated disclosures to applicants, nor does it ensure that job seekers receive a copy of their report and a summary of rights before adverse action is taken. The result is a nationwide class of applicants who have been evaluated, ranked, and often rejected based on information they cannot access, let alone dispute.
Corporate Greed Meets Algorithmic Opacity
Eightfold’s business model relies on a classic neoliberal formula: privatize data collection, monetize worker profiles, and offload compliance costs onto the individuals being evaluated. With a client list that includes financial behemoths, tech giants, and consumer brands, the company has positioned itself as an essential gatekeeper in the hiring process. Yet its methods remain shrouded in secrecy. The Match Score algorithm is proprietary; the data sources are aggregated from “public sources” but never disclosed to the candidate; and the training data includes billions of career trajectories harvested without individual consent.
This is not a fringe practice. According to recent reports cited in the lawsuit, nearly two-thirds of large companies (those with more than 5,000 employees) use AI technology like Eightfold’s to screen candidates. The same report notes that 38 percent of firms deploy AI to match and rank applicants. For workers, this means the traditional job application process, already fraught with anxiety, now includes an invisible, unaccountable layer of algorithmic judgment.
The FCRA was enacted in an era of mainframe computers and punch cards, but its drafters understood the danger of “impersonal ‘blips’ in a solid and unthinking machine.” Today’s “blips” are neural networks trained on the detritus of our digital lives. Eightfold’s alleged failure to comply with half-century-old consumer protections exposes a broader corporate accountability crisis: when profit-driven AI systems determine access to economic opportunity, the people most affected are kept deliberately in the dark.
What Comes Next
The lawsuit seeks certification of a nationwide class of all U.S. residents subjected to Eightfold’s Evaluation Tools during job applications, as well as a California subclass. It asks for statutory damages up to $1,000 per FCRA violation, punitive damages, and an injunction requiring Eightfold to comply with federal and state consumer reporting laws. For Kistler, Bhaumik, and potentially millions of others, the case represents a long-overdue reckoning for the hidden infrastructure of AI hiring.
In the meantime, job seekers have few ways to know whether their applications are being processed by Eightfold. The company’s technology is embedded in the career portals of major employers, often detectable only by a URL containing “eightfold.ai.” And once a profile is created, Eightfold retains the data for its own purposes, including training future models and evaluating other candidates. That means your personal data, scraped from a 2025 job application, could be used to rank a stranger in 2027 without your knowledge.
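For applicants who want to check this clue themselves, the test is mechanical: look at the hostname of the application or sign-in URL. The short sketch below is a minimal, illustrative check based only on the URL pattern mentioned in the complaint; it cannot detect Eightfold integrations that are hidden behind an employer's own domain.

```python
# Illustrative check of whether a job-portal URL points at an
# eightfold.ai domain -- the only visible clue named in the complaint.
# A False result does not rule out an Eightfold integration hosted
# under the employer's own domain.
from urllib.parse import urlparse

def uses_eightfold(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host == "eightfold.ai" or host.endswith(".eightfold.ai")

print(uses_eightfold("https://careers.example.com/apply"))       # False
print(uses_eightfold("https://app.eightfold.ai/careers?pid=1"))  # True
```

Matching on the parsed hostname rather than the raw string avoids false positives from URLs that merely mention "eightfold.ai" in a query parameter or path.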
The complaint is a stark reminder that technology does not exempt corporations from the law. As the CFPB has made clear, the FCRA’s protections extend to algorithmic scores and digital dossiers. Eightfold AI now faces a legal battle that could reshape how AI hiring tools are regulated in the United States. For the workers whose livelihoods hang on a score they cannot see, the stakes could not be higher.
Reporting based on the class action complaint Kistler et al. v. Eightfold AI Inc. (Contra Costa County Superior Court, Jan. 20, 2026). This article focuses on corporate practices and their impact on workers; it does not constitute legal advice.