Introduction
In June 2021, plaintiff Derek Mobley filed a discrimination charge against Workday—a human resources management service for employers that uses artificial intelligence-driven screening processes—alleging violations of Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act of 1967, and the Americans with Disabilities Act of 1990. Mobley, an African-American man over the age of 40 who has been diagnosed with anxiety and depression, submitted between 80 and 100 applications through the Workday platform, almost all of which included his education information—such as his graduation date from a historically Black college—as well as his completed assessments and personality tests. Mobley holds a Bachelor’s degree in Finance from Morehouse College and an Associate’s degree in Network Systems Administration from ITT Technical Institute. Mobley’s many rejections despite these qualifications caused him to suspect that Workday was automatically filtering out his applications because of his age, race, and/or disability, especially because most of his rejections arrived around midnight, only minutes after he applied.
Workday filed a motion to dismiss the charges, which the trial court subsequently denied. In July 2024, a federal district court in San Francisco allowed the class action lawsuit to proceed, holding that Workday can face direct liability because it is “participating in the decision-making process by recommending some candidates to move forward and rejecting others” and “engag[ing] in conduct that is at the heart of equal access to employment opportunities.” This decision has allowed Mobley’s claims to move forward.
As artificial intelligence (AI) becomes increasingly integrated into hiring processes, Mobley v. Workday has become a pivotal case at the intersection of data science, employment law, and civil rights. The outcome of this case will likely set important precedents for how courts address allegations of discrimination in AI-driven hiring, who is held accountable when bias is detected, and what legal safeguards should be in place to ensure equitable hiring practices.
Applicable Law
Title VII of the Civil Rights Act of 1964 prohibits employers from discriminating against applicants and employees because of race, color, religion, sex, or national origin. Title VII also created the U.S. Equal Employment Opportunity Commission (EEOC), a bipartisan commission charged with preventing unlawful employment discrimination. The commission consists of five members, appointed by the President and confirmed by the Senate for five-year terms. Title VII and the EEOC target two types of discrimination: disparate impact (facially neutral practices that have discriminatory effects) and disparate treatment (intentional discrimination based on protected characteristics). Mobley argues Workday’s algorithmic tools “disparately impacted applicants.”
Other laws relevant to Mobley v. Workday are the Age Discrimination in Employment Act of 1967 (ADEA), which prohibits discrimination based on age (forty and over), and the Americans with Disabilities Act of 1990 (ADA), which prohibits employment discrimination against qualified individuals on the basis of disability. Under the ADA, it is also discriminatory to screen out individuals with disabilities through employment tests, such as the personality tests and assessments Mobley took.
Discrimination in hiring processes has frequently been quantified with the “four-fifths rule,” under which the selection rate for a group of applicants sharing an identifying factor should not be less than four-fifths (80%) of the selection rate for the group with the highest rate. (29 C.F.R. §1607.4(D).) For example, suppose the “selection rate” (the percentage of applicants hired) was 30% for female applicants and 60% for male applicants. The ratio (30/60, or 50%) may be evidence of hiring discrimination against female applicants because it falls below four-fifths. The “four-fifths rule” is one of the few established methods of quantifying employment discrimination (see Johnson v. City of Memphis (2005) 355 F.Supp.2d 911). It acts as a general rule of thumb but may be inappropriate in certain cases, such as when comparing small differences in selection rates.
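To make the arithmetic concrete, the minimal Python sketch below implements the four-fifths check using the hypothetical rates from the example above; the applicant counts and the 0.8 threshold are illustrative, not drawn from any actual case data.

```python
def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return hired / applicants

def four_fifths_check(rate_a: float, rate_b: float, threshold: float = 0.8) -> bool:
    """Return True if the lower selection rate is at least 4/5 of the higher one."""
    lower, higher = sorted((rate_a, rate_b))
    return (lower / higher) >= threshold

# Hypothetical counts mirroring the example above:
female_rate = selection_rate(hired=30, applicants=100)   # 30%
male_rate = selection_rate(hired=60, applicants=100)      # 60%

ratio = female_rate / male_rate                            # 0.5
print(f"Impact ratio: {ratio:.2f}")                        # 0.50, below the 0.80 threshold
print("Passes four-fifths rule:", four_fifths_check(female_rate, male_rate))  # False
```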
Background
In trying to uphold Title VII’s promise of equal opportunity regardless of identifying factors, employers have increasingly relied on artificial intelligence in their hiring processes. AI can match candidates on platforms like LinkedIn or GitHub, sort through hundreds of resumes, write job descriptions across platforms, and use data-driven insights to standardize candidate evaluation. By automating these initial screening steps, the technology can save recruiters the many hours it takes to sift through every applicant. Moreover, by using machine learning techniques, AI can ideally replace the subjectivity of human decision-making with an objective standard for identifying the ideal candidate for a given position. It therefore seems paradoxical that AI would perpetuate employment discrimination rather than create a more objective standard.
Hiring providers, both human and AI, prioritize “cultural fit,” a candidate’s alignment with an employer’s core values and collective behaviors. This focus, however, may be the primary obstacle to equitable hiring decisions. While such qualities matter, they can also act as confounding variables that steer an algorithm away from fair evaluation. According to Jobvite’s 2016 annual recruiting survey, “60% of recruiters rate culture fit of highest importance when making a hiring decision,” ranking it above college prestige, cover letters, and GPA.
To assess “culture fit,” AI uses data about current employees to identify patterns, then uses that analysis to determine which candidates match the employer’s ideal. Relying on prior data can be problematic and can bias outcomes in numerous ways: zip codes can serve as proxies for a candidate’s racial or socioeconomic background, and keywords in applications can push candidates up or down the ranking. Between 2014 and 2017, Amazon tested an AI-driven system meant to suggest the best hires but shut it down after discovering that it downgraded resumes containing the word “women’s.” Another employer decided not to use a resume screening tool after it identified applicants who played high school lacrosse or were named “Jared” as ideal candidates. While a lacrosse-playing candidate named “Jared” might be the algorithm’s perfect match for that company, such variables distract from the qualities that actually matter, such as work values or measurable qualifications. More broadly, if a company’s current workforce excludes certain ethnic groups (intentionally or not), the algorithm will likely exclude those groups during hiring as well, regardless of a candidate’s qualifications. The reality is that “cultural fit” differs for every employer, and the idea of the “perfect” candidate varies with each company’s values. AI’s methodical hiring process relies on pre-existing data to make decisions about future candidates, a limitation compared to human hiring methods that, however subjective, can adapt to candidate nuance.
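As a rough illustration of how a screener built on “similarity to past hires” can inherit that skew, the toy Python sketch below scores candidates against hypothetical historical hires; the zip codes, keywords, and scoring rule are invented for illustration and do not describe Workday’s actual software.

```python
# Hypothetical historical hires; "zip_code" acts as a proxy that may correlate
# with race or socioeconomic background even though race is never recorded.
past_hires = [
    {"zip_code": "94105", "keyword": "lacrosse"},
    {"zip_code": "94105", "keyword": "lacrosse"},
    {"zip_code": "94105", "keyword": "finance"},
]

def similarity_score(candidate: dict, hires: list[dict]) -> float:
    """Average, over the candidate's features, of the fraction of past hires
    sharing that feature value."""
    score = 0.0
    for feature, value in candidate.items():
        matches = sum(1 for h in hires if h.get(feature) == value)
        score += matches / len(hires)
    return score / len(candidate)

candidate_a = {"zip_code": "94105", "keyword": "finance"}   # resembles past hires
candidate_b = {"zip_code": "30314", "keyword": "finance"}   # differs only by zip code

print(similarity_score(candidate_a, past_hires))  # higher score
print(similarity_score(candidate_b, past_hires))  # lower score, driven only by the proxy
```

Here the two candidates differ only in zip code, yet the second scores lower: no protected attribute is used directly, but the proxy carries the historical pattern forward.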
AI’s consideration of “cultural fit” has often been perceived as discriminatory, and this is especially evident in disability discrimination cases. In 2017, investment banks like Goldman Sachs started using HireVue, an online interview service that uses artificial intelligence to produce an employability score based on a candidate’s gestures and tone during a recorded video interview. The problem is that this process unfairly ranks candidates with disabilities (particularly speech or motor impairments) or accents, simply because their mannerisms and expressions differ from what the company’s model treats as normal. Candidates with mental disabilities may also be filtered out by software like HireVue. In another dispute, Kyle Behm, a college student diagnosed with bipolar disorder, was required to take a personality test when applying to several minimum-wage jobs at supermarkets and retail stores. Each of Behm’s applications was rejected, yet when a person without bipolar disorder answered the same survey, the application advanced; Behm filed several EEOC complaints alleging unlawful discrimination on the basis of mental disability, blaming the personality tests (modeled on the “Five Factor Model”). In all of his job applications, Mobley was required to “take a Workday-branded assessment and/or personality test,” which contained questions that Mobley alleges exposed his depression and anxiety. Although Mobley’s disabilities can be considered “covert,” personality tests like the ones Mobley took through Workday can perpetuate disability discrimination.
It is not surprising, then, that amidst the rapid progress of artificial intelligence, Mobley v. Workday is not the only case in which an employment entity has allegedly violated state and federal law. (See Raines v. U.S. Healthworks Medical Group (2023) 15 Cal.5th 268 [ongoing class action alleging AI screens out Black job applicants]; Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. [class action alleging a national health insurance company used AI to make coverage decisions for clients and thereby deny access to health care services]; Equal Employment Opportunity Commission v. iTutorGroup, Inc. et al. (E.D.N.Y. May 05, 2022) [defendant, a tutoring service for students in China, allegedly “screened resumes with AI, to automatically reject female applicants over the age of 55 and male applicants over the age of 60”].) However, the algorithm itself does not have the power to make prejudicial decisions, which raises several questions: should employers bear the burden of artificial intelligence’s “mistakes” in hiring processes? If not the employer, who is liable for the alleged discrimination—the developer, the employee, or another entity? Should we continue to use AI to simplify hiring, or will it only perpetuate the employment discrimination its adoption was intended to dispel?
Employment Agency v. Agents of Employers
The particular novelty of Mobley v. Workday is that the lawsuit is against a technology vendor, an entity that does not fit neatly into the categories of employer, employment agency, or labor organization. Title VII, the ADEA, and the ADA all define an “employment agency” as “any person regularly undertaking with or without compensation to procure employees for an employer or to procure for employees opportunities to work for an employer and includes an agent of such a person.” 42 U.S.C. § 2000e; 42 U.S.C. § 12111; 29 U.S.C. § 630. In his complaint, Mobley claims Workday acted as an employment agency and is thus liable under Title VII.
The EEOC filed an amicus brief agreeing with Mobley that Workday is an employment agency, an employer, and an agent of employers. First, the EEOC argues that Workday’s algorithmic tools perform the same function as traditional employment agencies because they make automated decisions about accepting or rejecting candidates. The EEOC contends that active solicitation or recruitment of candidates is not required for a non-traditional entity to qualify as an employment agency. For the same reasons, the EEOC argues, Workday also acts as an indirect employer and as an agent of employers. Workday, in its motion to dismiss, counters that an employment agency must “actively engage with employees to obtain employment opportunities” or “actively engage with employers to find prospective employees,” and that mere involvement in the hiring process is not sufficient to classify a company as an “employment agency.”
On July 12, 2024, Judge Rita Lin rejected Mobley’s claim that Workday acted as an employment agency, finding “no support for the conclusory allegations that Workday was the entity recruiting or soliciting candidates.” The trial court nevertheless held that Workday could be liable as an “agent” of employers. Judge Lin drew the line using a hypothetical: spreadsheet software that merely sorts and filters out candidates over the age of 40 does not actively make hiring decisions, but “Workday does qualify as an agent” because its algorithmic tools “perform a traditional hiring function of rejecting candidates at the screening stage and recommending who to advance to subsequent stages.” This decision, while not everything the plaintiff or the EEOC sought, establishes precedent that technology vendors like Workday can bear liability in employment discrimination cases rather than standing apart as independent entities.
Potential Legal Safeguards
There is no sign that AI’s progress will slow in the near future, and as various industries increasingly embrace AI, its pervasiveness becomes harder to ignore. The primary problem amidst the technology’s growth is the “black box” problem: we understand what we feed into the system and we can see the output, but everything in between, how the input becomes the output, remains opaque. This lack of transparency makes it difficult to assign responsibility to any single entity, a problem that becomes especially pertinent when the livelihoods of workers are at stake. As Ajunwa writes in “The Paradox of Automation as Anti-Bias Intervention,” “concomitant with the belief in data objectivity is the uncritical acquiescence to data-driven algorithmic decision-making as the final arbiter on any given inquiry. Thus, the results of algorithmic systems are heeded as oracular proclamations; they are accepted at face value without any attempt to analyze or further interpret them” (1688). It becomes increasingly important to implement audit reports or other documentation to increase transparency, both within hiring companies and for applicants. Audits should be conducted by impartial third parties, and hiring assessments should be evaluated regularly, “before and after deployment,” in order to “ensure applicants are protected.” Frequent auditing may also incentivize hiring companies to test their software rigorously.
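As a rough sketch of what such an audit might compute, the Python snippet below extends the four-fifths arithmetic shown earlier to an arbitrary set of groups, flagging any group whose selection rate falls below 80% of the highest group’s rate; the applicant records, group labels, and threshold are hypothetical and do not reflect Workday’s system or any mandated audit standard.

```python
from collections import defaultdict

def audit_selection_rates(records: list[dict], group_key: str = "group",
                          threshold: float = 0.8) -> dict:
    """Compute per-group selection rates and flag groups falling below the
    four-fifths threshold relative to the highest-rate group.

    Each record is expected to look like {"group": ..., "hired": bool}.
    """
    totals, hires = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        hires[r[group_key]] += int(r["hired"])

    rates = {g: hires[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"rate": rate, "flagged": rate < threshold * best}
            for g, rate in rates.items()}

# Hypothetical audit run over a small applicant log:
applicants = (
    [{"group": "over_40", "hired": h} for h in [True] * 2 + [False] * 8] +
    [{"group": "under_40", "hired": h} for h in [True] * 6 + [False] * 4]
)
for group, result in audit_selection_rates(applicants).items():
    print(group, f"rate={result['rate']:.0%}", "FLAG" if result["flagged"] else "ok")
```

In this framing, a pre-deployment audit would run the same computation on historical or simulated applicant data, while a post-deployment audit would run it on live screening outcomes.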
One method of ensuring transparency is to allocate liability before the software is used, which could help mitigate the legal issues present in Mobley v. Workday. The EEOC takes the position that liability can fall on an employer that discovers its tool adversely impacted an individual but fails to “adopt a less discriminatory algorithm that was considered during the development process.” A company can limit the liability that flows from knowledge of a tool’s discriminatory effects by taking sufficient measures before working with a vendor, such as the negotiation and drafting of “the indemnity, warranty, liability cap carve-outs, and other risk allocation provisions of the contract with the vendor.” Contractual steps like these, a full understanding of the technology vendor, and adequate disclosures to applicants are all ways companies can mitigate legal repercussions. Further, the EEOC “has the authority to investigate discrimination charges against employers and, if necessary, file a lawsuit.” Its recent action against iTutorGroup signals increased enforcement of fair hiring practices and sets a precedent for the EEOC’s intervention in AI-driven employment decision-making. The EEOC has also issued recommendations for preventing ADA violations, such as providing alternative testing formats to accommodate applicants with disabilities. In May 2023, the EEOC issued a technical guidance document specifically on how to assess adverse impact when employers use AI-driven selection procedures, which may assist future trial courts in their decision-making.
Yet another method of reducing ambiguity is clearer legislation from federal, state, and local bodies. Certain states and localities have already adopted rules governing the extent to which AI may be used in hiring. New York City’s Automated Employment Decision Tools Law (“Local Law 144”), which took effect in July 2023, prohibits employers and employment agencies from using an automated employment decision tool (AEDT) unless the tool undergoes regular bias audits and candidates and employees receive prior notice of its use. The law applies only to positions in the city, not the entire state, leaving ambiguity around remote positions tied to a city office. Similarly, Illinois adopted the Artificial Intelligence Video Interview Act (820 ILCS 42/1 et seq.), which requires employers to (1) “notify each applicant before the interview that artificial intelligence may be used to analyze the applicant’s video interview,” (2) “provide each applicant with information before the interview explaining how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants,” and (3) obtain “consent from the applicant to be evaluated by the artificial intelligence program” before using artificial intelligence to analyze applicant-submitted videos. (820 ILCS 42/5.)
Conclusion
What we know for sure is that our current legal framework is insufficient to address AI’s progress, and that both updating the laws surrounding equal employment and creating new laws that can establish stronger precedents are of the utmost importance. In her testimony to the EEOC, Pauline Kim wrote, “Existing doctrine was developed with human decision-makers in mind and does not always fit the risks of discrimination posed by automated systems…This interpretation of disparate treatment law is too simplistic.” Title VII, the ADEA, and the ADA, while necessary to protect the rights of employees, create ambiguity because aspects of these laws are outdated. Mobley v. Workday demonstrates the complications such ambiguity can produce, particularly around the term “employment agency,” because these laws were not drafted with automated decision-making in mind. As cases like Mobley v. Workday and Equal Employment Opportunity Commission v. iTutorGroup, Inc. et al. pose difficult questions for judicial review, it is more important than ever to set relevant precedents that can guide the future. Judge Lin’s decision that Workday acts in some part as an agent in the hiring process, and the liability that decision imposes on Workday, sets a valuable precedent for hiring companies and technology vendors alike, raising important liability considerations regardless of the outcome of the case.