  April 30, 2024

    Best Practices For Employers Using AI to Make Employment Decisions

    Written by Susan Swatski

    Many employers use automated employment decision tools (AEDTs) to make decisions such as filtering job applications, hiring, performance evaluations, retention, promotion, and termination. However, employers should be cautious about using AEDTs for these traditional human resources tasks in a vacuum, without attention to current and emerging laws and regulations that could create liability if their use of AI is mishandled. Without proper development, use, and monitoring, AEDTs can be biased based on the data that is input, how the algorithm processes that data, or both. That bias can have unintended consequences, including legal ones, that state and federal laws and regulations are struggling to keep up with.
    Employers who use AEDTs are not shielded from liability despite the absence of laws and regulations that expressly address such uses. Courts and governmental agencies will likely apply the laws and regulations in existence to scenarios involving AI, including AEDTs. In the employment area, those existing laws include:
    a. Title VII of the Civil Rights Act of 1964 (Title VII),
    b. The Americans with Disabilities Act (ADA),
    c. The Age Discrimination in Employment Act (ADEA),
    d. The Fair Chance Act (FCA) and
    e. The Fair Credit Reporting Act (FCRA).
    Title VII prohibits employment discrimination based on race, color, religion, sex and national origin. AI may make inferences based on these protected characteristics, and on those protected by state and local laws, from information learned on the internet or social media. AEDTs can result in employment discrimination if employers do not take the time to understand how the AI works and how AEDTs use data, and to correct any results that adversely impact persons with characteristics protected by Title VII. For example, if an AEDT uses an algorithm trained on data supplied by white males, the resulting AI could be trained to favor white men. Another example arises in the context of a job applicant’s criminal history. Title VII requires employers to make an “individualized assessment” of an applicant’s criminal history when determining whether the information is job related and consistent with business necessity. Such an assessment should not be left to AEDTs. Employers may need to assess factors that are not part of the algorithm or have an algorithm created that specifically addresses those factors.
    Use of third-party AEDTs will not shield employers from liability if those tools have an adverse impact on members of a protected class. An adverse impact refers to a facially neutral practice that disproportionately affects members of a protected class. Employers should adopt safeguards against employees being adversely impacted for unintended, but nonetheless illegal, discriminatory reasons. Those safeguards include auditing AEDTs to understand what data is screened in and out, the diversity of the source data, and the data’s accuracy. At present, employers have few legal resources that squarely address AI. A solid first step toward avoiding a misstep with legal implications is to look to existing regulatory and professional guidance, such as guidance from the Equal Employment Opportunity Commission (EEOC) and the Uniform Guidelines on Employee Selection Procedures (UGESP), which can be applied to AEDTs.
    Another potential safeguard that employers may want to use is the four-fifths (80%) rule of thumb, which is generally recognized by federal enforcement agencies as a benchmark for evaluating whether a selection process has an adverse impact on a group sharing a protected characteristic, e.g., race. For example, employers should ask whether their use of AEDTs in hiring produced a selection rate for a minority group that is at least four-fifths (80%) of the selection rate for the majority group. Some courts, however, have found that the four-fifths rule is not appropriate where it is not a reasonable substitute for a test of “statistical significance.” Employers should ask their AI vendor whether it relied on the four-fifths rule or on statistical significance when determining whether the use of AEDTs might have an adverse impact on a particular group of workers.
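    To make the arithmetic behind the four-fifths rule concrete, the sketch below shows one way such a check might be run against hiring data. It is a minimal illustration under stated assumptions, not a compliance tool or any vendor's actual audit methodology: the group labels, counts, and the 0.80 threshold are hypothetical, and any real audit should be paired with a test of statistical significance and legal review.

    # Minimal sketch of a four-fifths (80%) rule check on hypothetical hiring data.
    # Group labels, counts, and the 0.80 threshold are illustrative assumptions.

    def selection_rate(hired: int, applicants: int) -> float:
        """Share of applicants in a group who were selected."""
        return hired / applicants

    def four_fifths_check(groups: dict, threshold: float = 0.80) -> dict:
        """Compare each group's selection rate to the highest-rate group.

        `groups` maps a label to a (hired, applicants) tuple. Returns, per group,
        the selection rate, the impact ratio, and whether that ratio falls below
        the four-fifths benchmark.
        """
        rates = {name: selection_rate(h, a) for name, (h, a) in groups.items()}
        benchmark = max(rates.values())
        return {
            name: {
                "selection_rate": round(rate, 3),
                "impact_ratio": round(rate / benchmark, 3),
                "below_four_fifths": rate / benchmark < threshold,
            }
            for name, rate in rates.items()
        }

    # Hypothetical example: 48 of 80 applicants in group A are hired (60%) and
    # 12 of 30 applicants in group B are hired (40%). 0.40 / 0.60 is about 0.67,
    # which is below 0.80, so the check flags a potential adverse impact.
    for group, stats in four_fifths_check({"group_a": (48, 80), "group_b": (12, 30)}).items():
        print(group, stats)

    A ratio above 0.80 on a small sample is not conclusive either way, which is why, as noted above, some courts look instead to tests of statistical significance.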
    The ADA is also prone to violation because of AEDTs’ limitations in providing a “reasonable accommodation” to a person with a disability. Examples of uses of AEDTs that may have an adverse impact on a qualified applicant with a disability include:
    1. an applicant who has limited dexterity in their hands may be unable to complete an automated assessment despite being able to perform the essential functions of the job. Had that applicant been afforded the reasonable accommodation of taking the assessment orally, the applicant would have demonstrated proficiency and not been screened out for a discriminatory reason;
    2. an applicant with a speech impediment may not have a speech pattern that is decipherable by AEDTs, which may make it unlikely that the AI would accurately assess the applicant’s problem-solving ability;
    3. soliciting prohibited information from applicants about their medical history;
    4. AI that rejects applicants with gaps in their employment history would screen out persons who took time away from work for pregnancy or to obtain treatment for a disability; and
    5. gamified video memory assessments could screen out blind persons who cannot adequately see the computer screen to play the games.
    To avoid running afoul of the ADA, employers should ensure that their use of AEDTs does the following:
    1. allows reasonable accommodations such as:
    a. opting out of AEDTs,
    b. providing notice before use of AEDTs and
    c. offering alternative formats, such as an oral memory assessment as opposed to a video assessment.
    2. provides clear instructions for requesting a reasonable accommodation.
    Age discrimination is another area that is ripe for AI bias. Employers should ensure that the algorithms used to review applicants are not trained to identify applicants who graduated after a particular date, who have .edu email addresses, or who share other data points that could skew results in favor of or against a particular age group.
    In May 2022, the EEOC issued technical guidance addressing the use of AEDTs to assess job applicants and employees under Title VII, the ADEA and the ADA. The EEOC puts the burden of compliance squarely on employers. In this regard, the guidance advises, “if an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor.” The employer may also be held responsible for an agent’s actions if the employer has given the agent authority to act on the employer’s behalf. “This may include situations where an employer relies on the results of a selection procedure that an agent administers on its behalf.”
    The EEOC guidance recommends that employers provide advance notice to applicants and employees of the use of AI and afford applicants and employees the opportunity to request a reasonable accommodation. The guidance also recommends that employers tell applicants: (1) which traits or characteristics the tool is designed to measure, (2) the methods by which those traits or characteristics are to be measured and (3) the disabilities, if any, that might potentially lower that applicant’s results or cause the applicant to be screened out. The guidance also advises that employers using AI tools must be prepared to demonstrate: (1) using AEDTs is “job related and consistent with business necessity” and (2) there are no less discriminatory alternatives that are equally effective.
    The FCRA and the FCA (which prohibits federal agencies and federal contractors from requesting that an applicant for federal employment disclose criminal history information before a conditional offer of employment is made) can both be implicated by the use of AI to perform background checks. Frequently, such checks require gathering information about applicants’ credit and criminal histories and sorting applicants accordingly. Here again, employers using vendors to perform these tasks have an obligation to understand the algorithms that the AEDTs are using, what data is sought, and at what stage in the hiring process the AI considers that data.
    Although most states and major cities have seen a flurry of bills introduced that address AEDTs, few have been passed into law. The most prominent exception is New York City Local Law 144, enforcement of which began in July 2023. Local Law 144 requires employers using AEDTs to screen candidates for hire or promotion to disclose the use of such tools and to provide candidates at least 10 business days’ notice. The notice must include instructions for requesting an alternative selection process or a reasonable accommodation. Local Law 144 prohibits use of AEDTs unless the tool has been the subject of an independent “bias audit” within the past year and the employer publishes the results of the audit, including its date, on the company’s website. Local Law 144 imposes hefty monetary civil penalties even on first-time violators.
    In October 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Order establishes standards for AI safety and security to, inter alia, protect worker privacy and civil rights, and it builds on the White House’s earlier Blueprint for an AI Bill of Rights. The Blueprint identifies the following five principles that should guide the design, use and deployment of automated systems to protect the public:
    1. protect against unsafe or ineffective systems,
    2. design and use algorithms and systems in an equitable way,
    3. protect against abusive data practices through built-in protections and give individuals agency over how their data is used,
    4. disclose when an automated system is being used, how it is being used and why it contributes to outcomes that may impact the individual and
    5. allow individuals to opt out and to reach a person who can quickly consider and remedy problems they encounter.
    Although these principles do not have the legal force of a regulation, they provide guidance to help employers using AEDTs in their workplaces avoid liability while legislators and government agencies draft laws and regulations.
    In April 2023, the Consumer Financial Protection Bureau (CFPB), the Department of Justice (DOJ), the EEOC, and the Federal Trade Commission (FTC) issued a Joint Statement addressing discrimination and bias that can arise from the use of algorithms. The Joint Statement focuses on:
    1. applying existing legal authorities,
    2. addressing harmful outcomes and
    3. protecting individual rights.
    The Joint Statement affirms that existing legal authorities apply to automated systems and AI just as they do to other practices. The statement recognizes that AI can “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes” and emphasizes the need for “vigilance” in AI employment practices.
    In the event that the use of AI results in employee layoffs, employers should pay close attention to the requirements of the federal Worker Adjustment and Retraining Notification (WARN) Act and equivalent state and local laws, which govern notice obligations in connection with plant closings and mass layoffs. Also, to the extent employers use AI in machinery, they should comply with the Occupational Safety and Health Act and equivalent state laws to provide a workplace free from recognized hazards.
    Best practices for using AEDTs to avoid legal liability include:
    (1) focusing tasks solely on essential job qualifications,
    (2) reviewing vendor compliance,
    (3) ensuring that the AI data used to make assessments is in fact job related,
    (4) auditing the technology to ensure selection rates do not violate anti-discrimination laws,
    (5) providing notice to applicants and employees that you are using AI software,
    (6) allowing applicants and employees the opportunity to seek reasonable accommodations to address inherent bias in the software,
    (7) training staff charged with implementing and evaluating AI software results to verify the accuracy of the data and to recognize discriminatory issues,
    (8) ensuring that the data used to train the AI is representative and diverse,
    (9) explaining to applicants the characteristics the AI may use to conduct evaluations,
    (10) obtaining consent to be evaluated by AI,
    (11) prohibiting the sharing of an applicant’s AI results,
    (12) obtaining waivers for use of facial recognition software,
    (13) creating a mechanism for individuals to challenge and dispute adverse information produced by AI,
    (14) requiring vendors to share how their AI models were built and trained and to provide the demographics of those training the algorithms, and
    (15) conducting pre-deployment and ongoing disparity testing and monitoring to account for system changes when new data is fed into the system (see the sketch after this list).
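    As one way to picture item (15), the sketch below recomputes impact ratios for each review period and flags any group that falls below the four-fifths benchmark. It is an illustration only: the period labels, group names, counts and the 0.80 threshold are hypothetical assumptions, and a production monitoring process would also involve the vendor, counsel and statistical significance testing.

    # Illustrative sketch of ongoing disparity monitoring: recompute impact ratios
    # for each review period and flag any group that falls below the 0.80 benchmark.
    # Period labels, group names, and counts are hypothetical.

    from collections import defaultdict

    def impact_ratios(decisions):
        """`decisions` is an iterable of (group, selected) pairs; returns each
        group's impact ratio relative to the highest-rate group."""
        hired = defaultdict(int)
        total = defaultdict(int)
        for group, selected in decisions:
            total[group] += 1
            hired[group] += int(selected)
        rates = {g: hired[g] / total[g] for g in total}
        benchmark = max(rates.values())
        return {g: rate / benchmark for g, rate in rates.items()}

    def monitor(batches, threshold=0.80):
        """`batches` maps a period label to its decisions; yields flagged results."""
        for period, decisions in batches.items():
            for group, ratio in impact_ratios(decisions).items():
                if ratio < threshold:
                    yield period, group, round(ratio, 3)

    # Hypothetical quarterly data; only Q2 would be flagged for review.
    batches = {
        "Q1": [("group_a", True)] * 30 + [("group_a", False)] * 20
              + [("group_b", True)] * 14 + [("group_b", False)] * 10,
        "Q2": [("group_a", True)] * 30 + [("group_a", False)] * 20
              + [("group_b", True)] * 8 + [("group_b", False)] * 12,
    }
    for period, group, ratio in monitor(batches):
        print(f"{period}: {group} impact ratio {ratio} is below 0.80")

    Rechecking ratios on each new batch of decisions is one simple way to catch drift introduced when new data is fed into the system.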
    The use of AEDTs is expanding and changing rapidly because of the benefits and efficiencies they offer. Although legislation and regulations are not yet fully developed, meaningful guidance is available. Our team of employment attorneys at Hill Wallack stands ready to guide employers through these changing times.