Class Action Targeting Video Interview Technology Reminds Employers of Testing Risks

A recently filed class action lawsuit alleges a hiring platform company, which used artificial intelligence (“AI”) to assess job candidates during video interviews, illegally collected facial data for analysis. The prospective class alleges multiple violations of the Illinois Biometric Information Privacy Act[1] (“BIPA”), which limits the use of biometric identifiers and biometric information in Illinois. Under BIPA, biometric identifiers include a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry. BIPA broadly defines biometric information as “any information” based on an individual’s biometric identifier that is used to identify the individual. This lawsuit is a reminder of the risks for employers using AI or biometric information in employment assessments.

BIPA Requirements and Prohibitions

BIPA requires private entities, including employers, that “collect, capture, purchase, receive . . . or otherwise obtain” biometric identifiers or information, such as face geometry, in Illinois to satisfy three requirements. First, the entity must inform the person in writing that a biometric identifier or biometric information is being collected or stored. Second, the entity must inform the subject in writing of the specific purpose and length of term for which the biometric identifier or biometric information is being collected, stored, and used. And third, the entity must obtain a written release from the subject.

BIPA also requires entities, including employers in Illinois, that process biometric identifiers or information to develop and publicize a written policy establishing a retention schedule and guidelines for permanently destroying the biometric identifiers or information. BIPA prohibits entities, including employers in Illinois, from selling or otherwise profiting from a person’s or customer’s biometric identifiers or biometric information.

Individuals can recover the greater of $1,000 in liquidated damages or actual damages per violation, along with attorneys’ fees and expert witness fees. Liquidated damages increase to $5,000 per violation if the violation is intentional or reckless.

The Proposed Class Action

On January 27, 2022, an Illinois resident filed a class action complaint in Illinois state court against the hiring platform company HireVue alleging violations of BIPA.[2] The complaint claims that the company illegally collected facial data for analysis without written permission when the plaintiff interviewed for a job through HireVue’s online platform. The complaint also accuses HireVue of profiting from her and other potential class members’ facial biometric identifiers. Finally, the complaint alleges HireVue failed to provide a publicly available retention schedule and/or guidelines for permanently destroying the biometrics.

The plaintiff seeks to represent herself and “thousands of members of the Class” “whose biometrics were captured, collected, received . . . or otherwise obtained” through HireVue’s video interview software within the state of Illinois any time within the applicable limitations period.[3]

Other Laws Directed Specifically at Artificial Intelligence

Employers should be aware that other states have also passed laws affecting biometrics. Arkansas, Texas, and Washington have enacted similar biometric privacy laws.[4] California adopted a comprehensive privacy statute, the California Consumer Privacy Act, which requires employers to issue notices to applicants and employees describing the categories of information being collected, as well as how the information is collected, used, shared, and disposed of.[5] Maryland prohibits employers from using facial recognition technology during pre-employment job interviews unless the applicant consents by signing a specified waiver.[6]

Other state and local legislatures have passed legislation that specifically addresses AI. Effective in 2023, a New York City law will prohibit the use of automated hiring and promotion tools in the city unless those tools have undergone a bias audit.[7] The law also will require employers to notify employees or candidates who reside there if such a tool was used to make job decisions.

Another Illinois law, the Artificial Intelligence Video Interview Act (“AIVIA”),[8] imposes additional requirements on AI-based interviews. Although AIVIA provides no private right of action, it requires employers to comply with certain requirements when recruiting for Illinois-based positions using video-recorded interviews analyzed by AI. Before asking applicants to submit to video interviews, AIVIA requires employers to notify each applicant that AI may be used to analyze the applicant’s video interview and consider the applicant’s fitness for the position; provide each applicant with information explaining how the AI works and the types of characteristics it uses to evaluate applicants; and obtain consent from each applicant to be evaluated by the AI program. For more information on the Artificial Intelligence Video Interview Act, see our previous Client Alert.

Other jurisdictions, including Washington D.C., are considering similar laws.

Other Legal Challenges

The Illinois case described above follows another challenge to HireVue’s AI-based interviews. In 2019, the Electronic Privacy Information Center (“EPIC”) filed a Federal Trade Commission complaint against HireVue challenging the company’s AI-driven assessments, which evaluated job candidates based on facial analysis. In response, HireVue announced it would stop relying on facial analysis to assess job candidates but would continue to analyze biometric data from job applicants.[9]

Anti-Discrimination Laws May Also Apply to AI Selection Tools

The Equal Employment Opportunity Commission recently announced that it will focus on the use of AI in employment decisions, including by providing updated guidance on how the use of AI and other emerging tools can comply with the federal anti-discrimination laws described below.

Federal and state anti-discrimination laws may apply broadly to AI selection tools to prohibit discrimination against individuals in protected classes based on characteristics such as race, color, sex, national origin, religion, disability, and age. These federal anti-discrimination laws include Title VII of the Civil Rights Act of 1964 (“Title VII”), the Age Discrimination in Employment Act of 1967 (“ADEA”), and the Americans with Disabilities Act of 1990 (“ADA”). Both Title VII and the ADEA prohibit employers from using AI selection tools that disproportionately impact a protected group unless certain conditions are met. Under Title VII, an employer may use a tool with a disproportionate impact if the tool is a reasonable measure of job performance and is the least adverse alternative. Under the ADEA, an employer need only prove that the tool is reasonable. The ADA specifically prohibits the use of employment tests that tend to screen out an individual with a disability or a class of individuals with disabilities unless the test is shown to be job-related and consistent with business necessity. Many states and localities have analogous anti-discrimination laws that may be more protective of employee and applicant rights than the federal laws.

To reduce the risk of anti-discrimination liability, employers should consider ensuring that employment tests and other selection tools and procedures that have an adverse impact on a protected class are validated for the positions and purposes for which they are used. Federal guidelines identify several validation methods and call for selection tools to be reviewed regularly to ensure they remain properly validated. Employers are well advised to seek professional guidance with respect to the validation of AI-based and other selection tools.

What Employers Should Do

Employers should be mindful of the risks involved with AI assessment technology and ensure they comply with the emerging patchwork of federal, state, and local laws regulating its use, as described above. Because interview technologies allow prospective employees to interview virtually, employers should consider where interviews take place as well as where decisions are made and jobs are located.

Employers should also be mindful of the data collected by AI assessment technology and ensure they comply with laws regulating the collection, use, and storage of the biometric and personal information that such technology may use.

Steps employers should consider include:

  • Assessing whether AI assessment technology has an adverse impact and whether it is valid and job-related under federal and state anti-discrimination laws.
  • Pursuant to BIPA, when collecting biometric information or biometric identifiers in Illinois:
      • Establishing a written policy on retention and destruction of the biometric information or biometric identifiers.
      • Properly informing the subject in writing about the use and retention of the biometric information or biometric identifiers, and obtaining a written release.
      • Ensuring that the biometric information or biometric identifiers are not used for profit.
      • Ensuring the proper disclosure and storage of the biometric information or biometric identifiers.
  • When using biometric information in Arkansas, California, Texas, and Washington, following confidentiality practices consistent with those states’ privacy laws with respect to that information.
  • When using facial recognition during pre-employment job interviews in Maryland, obtaining the applicant’s consent through a signed waiver.
  • When using automated hiring or promotion tools in New York City in or after 2023, subjecting those tools to a bias audit and issuing the notices required under the New York City law.
  • Also in Illinois, complying with AIVIA requirements for AI-based interviews.
  • Monitoring state and federal law for additional developments in this evolving area.
