Algorithmic Managers: The Risks of AI in Hiring and Evaluating Employees
By Brady Sanders, J.D., University of Virginia School of Law, expected 2026
ABSTRACT
The use of AI in the workplace to recruit and evaluate employees has grown rapidly in recent years. While AI offers potential benefits such as reducing human bias and increasing efficiency in the recruitment process, these tools present risks, including decisions influenced by algorithmic bias and privacy concerns. This essay begins by discussing recent government actions that have reignited the discussion surrounding AI in the workplace. This essay then explores broader issues with AI generally and considers how these issues specifically arise in recruitment and evaluation. Lastly, the essay concludes by discussing potential solutions to reduce bias, increase transparency, and strengthen privacy.
I. INTRODUCTION
On his first day in office, President Donald Trump established the Department of Government Efficiency (“DOGE”) to help modernize and increase the government’s productivity.[1] Many believed that Artificial Intelligence (“AI”) would play a vital role in helping DOGE achieve its goals. It has now been alleged that DOGE will use AI to evaluate government employees.[2] While this announcement caused shock and outrage among many people, utilizing AI in this way is not unprecedented.
The private sector has used AI for many years to aid in hiring, monitoring, and managing employees.[3] However, this has not come without criticism. Many are concerned that using AI to hire and evaluate employees can result in privacy harms and negatively impact minorities. This essay will examine these concerns and then discuss potential solutions to minimize these risks.
II. POTENTIAL ISSUES
a. The AI Bias Problem
The use of AI in the workplace has grown significantly in recent years. While this trend has been widely celebrated, there is a growing concern that AI can have negative, unintended consequences. Namely, critics worry that these systems can potentially make biased decisions due to improper training. To better understand the critiques relating to AI, this essay will begin by briefly describing what AI is.
AI broadly refers to various forms of automated and algorithmic decision-making.[4] Algorithms are mathematical functions or sets of rules that a computer follows to produce an output.[5] To make accurate predictions, these algorithms are trained with large amounts of data with known outcomes. Using this data, the computer can decipher patterns and can use probabilities to determine the most likely output for an unknown input. Because AI predictions are based on probabilities, they involve some variance, meaning the outputs may differ depending on the data or context provided.
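To make this training process concrete, the short Python sketch below fits a model to historical applications with known outcomes and then outputs a probability for a new, unseen applicant. The sketch assumes the open-source scikit-learn library, and the features and numbers are invented purely for illustration; it is not drawn from any actual hiring system.

```python
# Minimal illustration of supervised learning: a model is fit to historical
# examples with known outcomes, then outputs a probability for a new case.
# The features and data below are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, number_of_matched_skills]; label 1 = hired, 0 = rejected.
X_train = [[1, 2], [3, 5], [6, 8], [2, 3], [8, 9], [4, 4]]
y_train = [0, 0, 1, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)

# The model generalizes patterns in the training data to a new applicant and
# returns probabilities for each outcome (e.g., roughly [[0.2, 0.8]]).
new_applicant = [[5, 6]]
print(model.predict_proba(new_applicant))
```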
Many companies have implemented AI in hiring and employee evaluation in hopes of reducing the human bias injected into these procedures, on the assumption that algorithms are objective and will produce unbiased conclusions. However, it is not quite that simple. Often, the training data for these algorithms underrepresent a group in society, resulting in inaccurate predictions for these minority populations. Inaccurate predictions that disadvantage groups of people are often referred to as algorithmic bias.[6] Some of the most common factors that contribute to this bias include groups that are underrepresented in the training data, proxy variables for race (characteristics that serve as indirect indicators of race), and preexisting social prejudices embedded in the data.[7]
While both humans and algorithms can make the same biased decisions, biased AI can be especially dangerous for three reasons:
(1) automation produces this harm at a much quicker pace and with a wider breadth,
(2) many people will assume that a computer is objective in reaching a prediction and will follow the suggestion without scrutiny, and
(3) it can create a positive feedback loop.
This third point is particularly troubling. A positive feedback loop occurs when a system makes a biased selection and then retrains itself on the results of that selection. For example, if an algorithm concluded that people named “John” are better suited for a job and only picked people named “John,” it could entrench this bias by observing that people named “John” tend to do well in the company. In reality, the “Johns” could perform at the same level as people with different names, but the AI would make this faulty association because it systematically excluded everyone else. While “John” is a seemingly benign example,[8] the same dynamic could play out based on protected characteristics such as race, gender, or national origin.
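The toy Python simulation below illustrates how such a loop compounds. All numbers are invented: the screener begins with only a slight preference for applicants named “John,” retrains on the skewed record of its own hires, and grows more biased with each cycle even though every applicant is equally capable.

```python
# Toy simulation of the feedback loop described above (all numbers invented).
# The screener starts with a slight preference for applicants named "John,"
# retrains on the skewed record of its own hires, and grows more biased each
# cycle even though every applicant is equally capable.
import random

random.seed(0)
preference = {"John": 0.55, "Other": 0.45}  # initial, slightly biased scores

for cycle in range(5):
    # The applicant pool is evenly split; true ability is identical across names.
    pool = ["John" if i % 2 == 0 else "Other" for i in range(100)]
    hired = [name for name in pool if random.random() < preference[name]]

    # The "retraining data" contains only the people the model chose to hire,
    # so "John" looks like the profile of a successful employee.
    share_john = hired.count("John") / len(hired)
    preference["John"] = 0.5 + 0.5 * share_john   # preference grows with share
    preference["Other"] = 1.0 - preference["John"]
    print(f"cycle {cycle}: share of hires named John = {share_john:.2f}")
```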
b. Issues in the Recruitment Process
Companies have implemented AI at various stages throughout the hiring process—from screening resumes to evaluating responses in automated interviews—because AI can enhance candidate quality, increase efficiency, and reduce recruiting costs.[9] Many companies have also adopted these systems in an effort to reduce the human bias that often harms minorities during the hiring process.[10] However, some have suggested that using AI in the hiring process can itself produce illegal discriminatory effects due to algorithmic bias, for example, by preferring candidates with the name “John.”[11] Notwithstanding these flaws, many companies have pushed forward and continued to implement AI in the recruitment process.[12]
When evaluating resumes from candidates, algorithms trained on potentially biased data can significantly impact applicants. One recent case, Mobley v. Workday, Inc., demonstrates how impactful these algorithms can be.[13] Mr. Mobley is an African American man over 40 years old with a history of mental illness. He was rejected from every job to which he applied, totaling over a hundred. When trying to decipher why he was rejected, he realized that all the companies had one thing in common: each contracted with Workday to use its automated screening software. The software allowed a company to scan an applicant’s resume and evaluate responses to personality tests to determine whether a candidate should move forward in the recruitment process. Mr. Mobley alleged that his blanket rejection stemmed from biased data on which the algorithms were trained, and that the screening therefore violated several anti-discrimination laws. Workday moved to dismiss the complaint for failure to state a claim. The court denied Workday’s motion to dismiss the disparate impact claims under Title VII, the ADEA, and the ADA, finding that all the required elements were adequately alleged.[14]
As discussed in the previous section, data can be biased in different ways. One issue in evaluating resumes arises from the underrepresentation of minorities in certain industries. Because these algorithms are trained on existing data, when an industry overrepresents one group, the algorithm may decide that this protected characteristic is an important factor in predicting the success of an applicant. This practice, if done by humans, would plainly be illegal.[15] A striking example of this problem arose when Amazon used AI to review resumes of software developer applicants. Amazon’s algorithm was trained using resumes submitted for the same job over the preceding ten years. However, the technology industry at the time was overwhelmingly male.[16] This caused the algorithm to conclude that “male candidates were preferable.”[17] In pursuing its goal of identifying the best candidate, the algorithm rewarded applicants whose resumes contained more masculine language and penalized candidates who used the word “women’s” in their resumes.[18]
While penalizing the word “women’s” clearly indicates the bias of the AI, these algorithms can fall victim to subtler patterns among different features that can have similar discriminatory impacts, such as zip codes, names, and sports played.[19] Zip codes, for instance, are commonly racially concentrated because of historical discriminatory housing practices.[20] Similarly, names tend to correlate with certain ethnicities, making them easily subject to bias.[21] Because names and zip codes are often highly correlated with race, they can serve as proxy variables for race and result in a disparate impact on a protected class of people.[22] In one study, for example, an algorithm determined that the name “Jared” was the highest predictor of success for a job.[23] The problem is that the name “Jared” is closely tied to race, which caused the program to prefer white men over other candidates.[24] Similarly, the algorithm predicted that a high indicator of success was playing high school lacrosse,[25] which is also closely tied to race, as the sport is disproportionately played by white men.[26]
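A simple synthetic experiment, sketched below in Python, shows the mechanism. The data, correlations, and group labels are invented, and scikit-learn is assumed: even when the protected attribute is withheld from the model entirely, a feature that correlates with it (here, a stand-in for zip code) lets the model reproduce the disparity baked into the historical hiring record.

```python
# Synthetic illustration of a proxy variable: "group" is never shown to the
# model, but "zip_code" correlates with it, and the past hiring record is biased.
import random
from sklearn.linear_model import LogisticRegression

random.seed(1)
features, labels = [], []
for _ in range(2000):
    group = random.choice(["A", "B"])          # protected class (withheld from the model)
    zip_code = 1 if random.random() < (0.9 if group == "A" else 0.1) else 0
    qualified = random.random() < 0.5          # true ability is identical across groups
    # Biased history: qualified group-A applicants were always hired,
    # equally qualified group-B applicants only half the time.
    hired = qualified and (group == "A" or random.random() < 0.5)
    features.append([zip_code, int(qualified)])
    labels.append(int(hired))

model = LogisticRegression().fit(features, labels)

# Predicted hire probability for an equally qualified applicant diverges by
# zip code, mirroring the historical bias the model was never "told" about.
for zip_code in (1, 0):
    print(zip_code, round(model.predict_proba([[zip_code, 1]])[0][1], 2))
```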
Many companies have also implemented automated video interviewing software. Many of these programs can analyze facial expressions, speech patterns, and body language, and can transcribe speech to text.[27] While this software allows companies to streamline the initial interview process, it similarly adopts inherent bias when trained on data that underrepresents minorities.[28] This poses several problems.
First, these algorithms are often not trained on images reflecting a wide range of skin tones. This lack of diversity in the training data has caused algorithms to be biased against people with darker skin. Studies have consistently found that minorities fare worse than their white counterparts when an algorithm predicts emotional expressions, with the software often rating Black individuals as angrier than white individuals.[29]
The fact that emotional expressions are often misread is related to a second issue: many of these software packages examine a candidate’s body language. Beyond the potential for misidentification of emotions due to a lack of diversity in the training data, this is problematic because different cultures use body language in different ways.[30] This software can lead to disparate impacts for minorities who do not use body language to express themselves in the same ways as the groups represented in the training data.
Third, this software can be harmful to non-native English speakers. Because it is usually trained on native speakers, it struggles to transcribe the speech of people with a non-traditional American accent.[31] When transcribing audio from people with accents or speech disabilities, it can produce faulty transcriptions, no transcription at all (i.e., “Inaudible”), or output that an algorithm interprets as a lack of confidence.[32]
Beyond the issue of algorithmic bias, the use of AI triggers privacy concerns. Applicants may wish to keep much of the information supplied in job applications private from the public at large: for example, disability status or pay history. Furthermore, these systems might make determinations about a person’s mental health.[33] For this reason, it is worrisome to train algorithms using this sensitive data. It is also problematic when these programs parse resumes in ways that strip information of its context and lead to mistakes (e.g., Amazon’s AI resume reviewer parsing “women’s chess club president” to penalize the candidate for her status as a woman while seemingly overlooking her leadership role). This misinterpretation erodes applicants’ privacy by reducing their autonomy to craft their own story and express who they are.
c. Issues in the Evaluation Process
Many companies use various methods to monitor their employees, including software to track factory worker productivity and algorithms that monitor eye movement.[34] The concerns about biased data discussed above remain highly relevant in this context.[35]
Some companies, for example, are using facial recognition programs to evaluate employee productivity.[36] A key concern with using facial recognition to evaluate employees is ensuring accurate identification. Because these datasets underrepresent people with darker skin tones, there is a worry that these employees will be penalized if a company implements a system that relies on facial recognition. A study by Joy Buolamwini and Timnit Gebru, for example, found that darker-skinned women were misclassified at rates of up to 34.7%, while the maximum error rate for lighter-skinned men was 0.8%.[37]
Furthermore, some of these AI management systems fail to account for conditions that human managers would catch. For example, an algorithm could indicate that a disabled employee is less efficient than their counterparts because it is set to assess factors that may be inapplicable to a person with disabilities (e.g., eye tracking for people with impaired vision or the speed at which some people move). It can often be difficult to determine what exactly is counted when an AI evaluates an employee due to the “black box” nature of these systems.[38] For this reason, workers with disabilities are particularly vulnerable to biased monitoring by AI.
III. POSSIBLE SOLUTIONS
While AI can negatively impact individuals, it is a powerful tool that companies have used to increase their efficiency and profitability. Because both the potential benefits and harms are significant,[39] it is unclear whether AI should be banned entirely from the recruitment and evaluation process. Instead, there should be oversight of how the algorithms are trained and what kinds of decisions they can make. For this discussion, it is helpful to look to other jurisdictions and similar areas of law for models.
a. Fair Credit Reporting Act
Decisions made in the employment process are extremely consequential. Because of this, it is beneficial to look to another area of law where denial of an application has significant consequences: consumer reporting.
The Fair Credit Reporting Act (“FCRA”) was enacted to protect consumers from adverse outcomes caused by faulty information in their consumer reports. These reports can be used in numerous, significant ways, such as determining eligibility for housing, credit, or employment.[40] In the employment context, the FCRA has safeguards in place for individuals. When an employer wants to use a consumer report for an employment decision, it must follow several steps, including providing notice and obtaining consent.[41] If the employer then intends to take an adverse employment action based on the report, it must provide the consumer with a copy of the report and additional information outlining the consumer’s rights under the FCRA.[42] This requirement enhances transparency in the process and ensures that consumers can fix any inaccuracies in their records.
This framework should be extended to the use of AI in the recruitment context. How would this look?
First, applicants should receive notice that a company plans to use AI in the recruitment process and whether it plans to use the resume to further train an algorithm. This would allow individuals with privacy concerns to opt out or potentially request a human review of their file. Second, candidates should be notified if they were rejected by an algorithm. The framework should go one step further, however, and require that the applicant be given the reason why the algorithm rejected them. This would help applicants determine whether they were rejected for legitimate reasons or whether the algorithm was biased in making the determination.
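To illustrate what such disclosures might track in practice, the Python sketch below models a hypothetical record of the notice, consent, and adverse-action steps proposed above. All field and function names are invented for illustration; this is not an existing FCRA compliance tool or requirement.

```python
# A hypothetical record of the FCRA-style disclosures proposed above: notice
# that AI will be used, consent (or an opt-out), and, if the algorithm rejects
# the applicant, the stated reason plus a notice of rights.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AlgorithmicScreeningDisclosure:
    applicant_id: str
    ai_use_disclosed_at: datetime       # notice that an algorithm will screen the application
    consent_given: bool                 # applicant consented, opted out, or requested human review
    resume_used_for_training: bool      # whether the resume will also be used to train the model
    rejected_by_algorithm: bool = False
    rejection_reason: Optional[str] = None   # the factor(s) the algorithm relied on
    rights_notice_sent: bool = False         # analogue of the FCRA summary-of-rights requirement

def record_adverse_action(record: AlgorithmicScreeningDisclosure, reason: str) -> None:
    """Log the algorithm's stated reason and mark the rights notice for delivery."""
    record.rejected_by_algorithm = True
    record.rejection_reason = reason
    record.rights_notice_sent = True
```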
In the employee monitoring context, the Consumer Financial Protection Bureau (“CFPB”) recently published a circular highlighting the fact that data from employee monitoring can constitute a consumer report.[43] This is because the FCRA defines consumers, consumer report, and consumer reporting agency in very broad terms:
(1) a consumer is “an individual,”
(2) a consumer report is “any information by a consumer reporting agency bearing on a consumer’s . . . character, general reputation, [or] personal characteristics . . . which is used or expected to be used or collected . . . for the purpose of serving as a factor in establishing the consumer’s eligibility for . . . employment purposes,” and
(3) a consumer reporting agency is “any person which . . . regularly engages in whole or in part in the practice of assembling or evaluating consumer credit information or other information on consumers for the purpose of furnishing consumer reports to third parties.”[44]
These definitions, as interpreted in the CFPB’s circular, suggest that an employer who contracts with a third party to evaluate their employees by producing an algorithmic score would fit within this framework (the consumer is the employee, the consumer report is the evaluation by the third party, and the consumer reporting agency is the third party).[45]
This analysis should be extended to the furnishing of AI-generated resume scores. The consumer would be the applicant, the consumer reporting agency would be the software provider, and the consumer report would be the recommendation provided to the potential employer. Traditionally, if the information was provided by the consumer to the reporting agency, the agency could disclose this information to a third party without violating the FCRA.[46] However, because the information is being used in conjunction with an algorithm, it could potentially fall outside of this exception, as seen in the context of employee monitoring.[47]
b. Diverse Datasets and Auditing
The United States could also take an approach similar to that of the European Union (“EU”). The EU AI Act takes a risk-based approach and designates an algorithm as posing a minimal, limited, high, or unacceptable amount of risk.[48] The Act classifies programs that evaluate candidates and monitor employees as high-risk systems.[49] High-risk systems must comply with certain regulations, including ensuring the datasets are “relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.”[50] Requiring datasets to be more representative of the population can reduce the potential for bias against underrepresented groups. The EU AI Act acknowledges that collecting such data may be challenging and imposes a duty on providers of high-risk programs to use technical solutions, such as synthetic data generation, when appropriate.[51] The Act also requires that high-risk systems undergo post-market monitoring.[52] These are policies that should be implemented in the United States to ensure that candidates are not denied employment because of their membership in a protected class.
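As a rough sketch of what post-market monitoring could look like, the Python example below compares selection rates across demographic groups and flags any group whose rate falls below four-fifths of the highest group’s rate, the familiar rule of thumb used in U.S. disparate-impact analysis. The numbers and group labels are synthetic and purely illustrative.

```python
# Sketch of a post-market monitoring check (synthetic numbers, invented group
# labels): compare selection rates across groups and flag any group whose rate
# falls below four-fifths of the highest group's rate.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected, number of applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose selection rate is below 80% of the highest rate."""
    top = max(rates.values())
    return {group: (rate / top) < 0.8 for group, rate in rates.items()}

monitoring_data = {"group_a": (120, 400), "group_b": (60, 400)}  # hypothetical counts
rates = selection_rates(monitoring_data)
print(rates)                     # {'group_a': 0.3, 'group_b': 0.15}
print(four_fifths_flags(rates))  # {'group_a': False, 'group_b': True} -> review group_b
```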
IV. CONCLUSION
While AI has recently attracted significant attention for its ability to automate tasks, including making employment decisions, many have overlooked the potential negative consequences these algorithms can have. There must be greater attention paid to how these algorithms are trained to ensure that bias does not affect the outputs. This can be done by ensuring greater diversity in the datasets and conducting audits of how these systems reach their conclusions. Existing laws, such as the FCRA, can serve as a foundation and model for further AI regulation.
[1] Exec. Order No. 14,158, 90 Fed. Reg. 8441 (Jan. 20, 2025) (“This Executive Order establishes the Department of Government Efficiency to implement the President’s DOGE Agenda, by modernizing Federal technology and software to maximize governmental efficiency and productivity.”).
[2] Courtney Kube et al., DOGE Will Use AI to Assess the Responses of Federal Workers Who Were Told to Justify Their Jobs Via Email, NBC News (Feb. 25, 2025, 12:32 PM), https://www.nbcnews.com/politics/doge/federal-workers-agencies-push-back-elon-musks-email-ultimatum-rcna193439 (“Responses to the Elon Musk-directed email . . . are expected to be fed into an artificial intelligence system to determine whether those jobs are necessary, according to three sources with knowledge of the system.”).
[3] See History of HireVue, HireVue, https://www.hirevue.com/about (last visited Mar. 17, 2025) (“HireVue was founded in 2004 based on the idea that people are more than bullets on a resume. In 2010, we were physically shipping cameras for video interviews.”).
[4] Daniel J. Solove & Paul M. Schwartz, Information Privacy Law 863 (8th ed. 2023).
[5] Id.
[6] Simon Friis & James Riley, Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI, Harv. Bus. Rev. (Sept. 29, 2023), https://hbr.org/2023/09/eliminating-algorithmic-bias-is-just-the-beginning-of-equitable-ai.
[7] Id. (“Algorithmic bias often occurs because certain populations are underrepresented in the data used to train AI algorithms or because pre-existing societal prejudices are baked into the data itself.”).
[8] While a faulty prediction based on a name seems more benign than discrimination based directly on race, it could be just as bad. Names are closely correlated with race and national origin and an inference like this can disparately impact minorities.
[9] Zhisheng Chen, Ethics and Discrimination in Artificial Intelligence-enabled Recruitment Practices, 10 Humans. & Soc. Sci. Commc’ns 567 (2023).
[10] See Bill Leonard, Study Suggests Bias Against 'Black' Names on Resumes, Soc’y Hum. Res. Mgmt. (Feb. 1, 2003), https://www.shrm.org/topics-tools/news/hr-magazine/study-suggests-bias-black-names-resumes.
[11] See Mobley v. Workday, Inc., 740 F. Supp. 3d 796 (N.D. Cal. 2024).
[12] Ifeoma Ajunwa, The Quantified Worker 76 (2023) (“Nearly all Global 500 companies use [AI] tools for recruiting and hiring.”).
[13] Mobley, 740 F. Supp. 3d at 796.
[14] Id. at 809.
[15] See U.S. Equal Emp. Opportunity Comm’n, Facts About Race/Color Discrimination, EEOC (last visited Sept. 4, 2025), https://www.eeoc.gov/laws/guidance/facts-about-racecolor-discrimination.
[16] See Jeffrey Dastin, Insight—Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters (Oct. 10, 2018, 8:50 PM), https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/.
[17] Id.
[18] Id.
[19] Ajunwa, supra note 12, at 85–86.
[20] Id.
[21] Id.
[22] Id. at 86.
[23] Id.
[24] Id. at 86–87.
[25] Id. at 87–88.
[26] Bob Cook, Lacrosse's Rich White People 'Problem' Is a Feature, Not a Bug, Forbes (Feb. 14, 2016, 11:31 PM), https://www.forbes.com/sites/bobcook/2016/02/14/lacrosses-rich-white-people-problem-is-a-feature-not-a-bug/?sh=7353c7127e85 (“A 2010 NCAA study reported that just 1.9 percent of Division I lacrosse players were black and that fewer than 10 percent were nonwhite.”).
[27] Elham Albaroudi et al., A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring, 5 AI 383, 385 (2024), https://www.mdpi.com/2669694 (“For instance, HireVue gained prominence in the mid-2010s for utilizing machine learning (ML) algorithms to assess candidates based on analysis of facial expressions, speech patterns, and body language.”).
[28] Id. at 390–93.
[29] See Lauren Rhue, Racial Influence on Automated Perceptions of Emotions (2018), https://dx.doi.org/10.2139/ssrn.3281765 (“Face++ consistently interprets black players as angrier than white players, even controlling for their degree of smiling. Microsoft registers contempt instead of anger, and it interprets black players as more contemptuous when their facial expression is ambiguous.”).
[30] See John L. Barkai, Nonverbal Communication from the Other Side: Speaking Body Language, 27 San Diego L. Rev. 101, 106 n.31 (1990).
[31] Eduardo Nacimiento-García et al., Gender and Accent Biases in AI-Based Tools for Spanish: A Comparative Study between Alexa and Whisper, Applied Sci. 2–3 (May 30, 2024) (citing Josh Meyer et al., Artie Bias Corpus: An Open Dataset for Detecting Demographic Bias in Speech Applications, ACL Anthology (2020)), https://www.mdpi.com/2811344.
[32] Dena Mujtaba et al., Lost in Transcription: Identifying and Quantifying the Accuracy Biases of Automatic Speech Recognition Systems Against Disfluent Speech, Arxiv (May 10, 2024), https://arxiv.org/pdf/2405.06150 (“[I]ndividuals who stutter are confronted with heightened challenges, especially in employment contexts”); see also Emotion Detection in Voice AI, NiCE, https://www.nice.com/glossary/emotion-detection-in-voice-ai (last visited Mar. 11, 2025) (“Emotion detection in voice AI analyzes various characteristics of speech, such as pitch, tone, volume, and cadence, to identify emotions like anger, frustration, happiness, or calmness.”).
[33] See, e.g., Mobley v. Workday, Inc., 740 F. Supp. 3d 796, 796 (N.D. Cal. 2024).
[34] Wendi S. Lazar & Cody Yorke, Watched While Working: Use of Monitoring and AI in the Workplace Increases, Reuters (Apr. 25, 2023, 11:34 AM), https://www.reuters.com/legal/legalindustry/watched-while-working-use-monitoring-ai-workplace-increases-2023-04-25/ (“Remote workers are being monitored by their employers at unprecedented rates; using methods from keystroke and computer activity monitoring, to video monitoring, and even eye tracking software (which tracks a user's eyes to show whether they are looking at the screen, and at which part of the screen they are looking), employers are increasingly employing monitoring software, oftentimes without the employee even knowing.”).
[35] See supra Part II.b.
[36] See Jonathan Keane, Bosses Putting a ‘Digital Leash’ on Remote Workers Could Be Crossing a Privacy Line, CNBC (May 27, 2021, 2:17 AM), https://www.cnbc.com/2021/05/27/office-surveillance-digital-leash-on-workers-could-be-crossing-a-line.html.
[37] Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 81 Proc. Mach. Learning Rsch. 1, 1 (2018) (“We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%.”).
[38] See Javier Sánchez-Monedero et al., What Does It Mean to 'Solve' the Problem of Discrimination in Hiring?: Social, Technical and Legal Perspectives from the UK on Automated Hiring Systems (2020), https://doi.org/10.1145/3351095.3372849.
[39] This discussion has largely set aside the potential benefits of using AI in the workplace. For example, it should be noted that human managers can harbor harsher biases in some respects, and AI can help rein in discriminatory decisions. Furthermore, while a person with a malicious purpose can more easily disguise their biases, an AI system can be designed to indicate why it makes certain choices, reducing the potential for biased outcomes.
[40] Solove & Schwartz, supra note 4, at 632.
[41] 15 U.S.C. § 1681b(b)(2).
[42] Id. § 1681b(b)(3).
[43] Consumer Fin. Prot. Bureau, Consumer Fin. Prot. Circular No. 2024-06, Background Dossiers and Algorithmic Scores for Hiring, Promotion, and Other Employment Decisions (2024) (“In addition, an entity could ‘assemble’ or ‘evaluate’ consumer information within the meaning of the term ‘consumer reporting agency’ if the entity collects consumer data in order to train an algorithm that produces scores or other assessments about workers for employers.”).
[44] 15 U.S.C. § 1681a.
[45] Employers and Vendors Have FCRA Obligations When Using Workplace AI Tools: Your Step-by-Step Compliance Guide, Fisher & Phillips LLP (Nov. 11, 2024), https://www.fisherphillips.com/en/news-insights/employers-vendors-fcra-obligations-when-using-workplace-ai-tools-compliance-guide.html (“Therefore, even though the companies may be working in a closed loop in which they collect information provided by an employer and in turn provide it back to the same employer, that may still be enough to make these technologies vendors a ‘consumer reporting agency’—and the information provided by them a ‘consumer report.’”).
[46] See 15 U.S.C. § 1681a(d)(2)(A)(i).
[47] See Employers and Vendors Have FCRA Obligations When Using Workplace AI Tools: Your Step-by-Step Compliance Guide, supra note 45.
[48] High-Level Summary of the AI Act, Future of Life Institute (Feb. 27, 2025), https://artificialintelligenceact.eu/high-level-summary/.
[49] Id.
[50] Id.
[51] Alex Schulte, Data Governance, The EU AI Act and the Future of Global Mobility, Centuro Global (Oct. 28, 2024), https://www.centuroglobal.com/article/data-governance-eu-ai-act/ (“Technical Solutions: Where appropriate, providers should develop technical solutions, such as data augmentation, synthetic data generation, or dataset adaptation, to improve their quality and diversity.”).
[52] Axel Schwanke, The EU AI Act: Best Practices for Monitoring and Logging, Medium (Aug. 19, 2024), https://medium.com/@axel.schwanke/compliance-under-the-eu-ai-act-best-practices-for-monitoring-and-logging-e098a3d6fe9d#3aea; Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828, 2024 O.J. (L) 1.