NYC Tackles AI and Automated Decision-making in Employment and Recruiting

As the public and private sectors continue to struggle with balancing the opportunities and risks associated with AI, New York City tackled a discrete area of AI-related concern – employment and recruiting – with the implementation of Local Law 144 and its accompanying regulations. Local Law 144 went into effect this summer and imposes restrictions on the use of automated employment decision tools (“AEDT”). The use of AEDT is prohibited by employers and employment agencies unless (1) the tool has undergone a bias audit in the past year, (2) information about the bias audit is made publicly available, and (3) notices have been provided to employees or job candidates.1 The NYC Department of Consumer and Worker Protection (DCWP), which enforces the law, issued Final Rules in support of the AEDT law in April 2023; notable changes include an expanded definition of “machine learning, statistical modeling, data analytics, or artificial intelligence.”2

The use of AI, including generative AI (GAI), in the employment and recruiting context poses significant challenges, including the potential for substantial and harmful discrimination in the hiring and recruiting processes and the potential for over-disclosure or inappropriate disclosure of employee and candidate personal data. For existing employees, beyond decisions around job security, AI may be used for activities such as performance evaluation, employee monitoring, and various kinds of employee assessment. While there are numerous opportunities for GAI to assist with HR-related administrative tasks, the problems with using AI in the recruiting and hiring process are well-documented and often lead to discriminatory outcomes due to algorithmic bias, issues with data quality, and lack of transparency and oversight. The potential for GAI to exacerbate these problems is considerable (e.g., GAI-developed performance assessments that are skewed toward a particular demographic or disfavor employees with more irregular schedules, such as pregnant employees).

NYC has attempted to place guardrails to head off the most problematic implementations of AI by requiring employers and employment agencies (collectively, “employers”) to thoroughly assess and provide transparency around their use of AEDT. “Employment decisions” are not just final hiring or promotion decisions, but also include assessment and screening that assist the hiring/promotion process. This does not include, e.g., using AI to contact potential candidates, although it is possible to foresee bias and other issues with those types of activities as well.

  1. Bias audit: The bias audit is an impartial evaluation by an independent auditor to assess for disparate impact, conducted no more than one year before using the AEDT, and then again annually. The NYC Rules set out minimum requirements for what is to be included, such as impact ratios across sex categories, race/ethnicity categories, and intersectional categories. Based on the results, employers must determine whether they would be in violation of federal, state, and NYC laws prohibiting discrimination by using the AEDT. To date, it does not appear that NYC has specified examples of suitable independent auditors (which do not have to be approved by the DCWP) or defined a specific requirement for statistical significance.
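To make the impact-ratio arithmetic concrete: under the Rules, an impact ratio compares each category's selection (or scoring) rate to that of the most selected category. The sketch below illustrates the selection-rate version of that calculation; the category names and counts are hypothetical examples, not real audit data.

```python
# Illustrative sketch of the impact-ratio arithmetic described in the
# DCWP Rules: impact ratio = selection rate for a category divided by
# the selection rate of the most selected category. All figures below
# are hypothetical.

def selection_rates(applicants, selected):
    """Selection rate per category: number selected / number of applicants."""
    return {cat: selected[cat] / applicants[cat] for cat in applicants}

def impact_ratios(rates):
    """Impact ratio per category, relative to the highest selection rate."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical counts for two sex categories.
applicants = {"male": 200, "female": 200}
selected = {"male": 60, "female": 45}

rates = selection_rates(applicants, selected)    # male 0.30, female 0.225
ratios = impact_ratios(rates)                    # male 1.0, female 0.75
```

An auditor would compute the same ratios across race/ethnicity and intersectional categories; what ratio signals a problematic disparate impact is a legal judgment the Rules leave to the employer's counsel.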

  2. Notice of bias audit: Employers must publish a summary of the results of the most recent audit and the date the AEDT was first used, either on the employment section of their website or via a link to the information. The summary is to include the date of the audit, information about the data used in the audit, how many data subjects fell into an “unknown category,” the number of data subjects assessed, and selection/scoring rates and impact ratios.

  3. Notice to employees and candidates: Employers must provide notice to employees and candidates who are NYC residents that they are using an AEDT at least 10 business days before using the AEDT, and must tell them which job qualifications or characteristics the AEDT will assess. The notice should include instructions for requesting accommodations. Notice may be provided by mail or email; alternatively, employers may post the notice on their website for candidates or include it in a written policy or procedure for employees.
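The 10-business-day lead time can be easy to miscount around weekends. The sketch below walks backward from a planned first-use date, assuming "business days" means weekdays; the treatment of holidays is an assumption here, not something the law or Rules quoted above resolve, so this is one plausible reading rather than a compliance calculator.

```python
# Sketch of the 10-business-day notice window, assuming "business days"
# means weekdays only (holiday handling is an assumption, not specified
# in the text above).
from datetime import date, timedelta

def latest_notice_date(first_use, business_days=10):
    """Walk backward from the first-use date, counting weekdays only."""
    d = first_use
    counted = 0
    while counted < business_days:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            counted += 1
    return d

# Hypothetical: if the AEDT will first be used on Friday, 2023-09-15,
# this reading puts the notice deadline at Friday, 2023-09-01.
deadline = latest_notice_date(date(2023, 9, 15))
```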


Escalating Fines.  The DCWP handles complaints related to violations of Law 144, enforces the Law, and refers claims regarding discrimination to the NYC Commission on Human Rights. Penalties start at $500 per violation for the first violation, and increase to up to $1,500 per violation on subsequent days. The potential to quickly accumulate penalties is significant: each day out of compliance results in a new violation, and there is a separate violation with respect to each person who was supposed to be notified of an AEDT under the law.
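A back-of-the-envelope sketch shows how quickly the per-person, per-day structure compounds. The figures assume, per the text above, $500 for each first-day violation and up to $1,500 per violation on each subsequent day; the headcount and day count are hypothetical.

```python
# Worst-case penalty accumulation under Local Law 144, assuming one
# violation per un-notified person per day: $500 on day one, up to
# $1,500 per violation each subsequent day. Inputs are hypothetical.

def max_exposure(people, days, first_day=500, later_day=1500):
    """Worst-case exposure: people * (day-one penalty + later-day penalties)."""
    if days <= 0 or people <= 0:
        return 0
    return people * (first_day + (days - 1) * later_day)

# Hypothetical: 50 un-notified candidates, 30 days out of compliance.
exposure = max_exposure(50, 30)  # 50 * (500 + 29 * 1500) = $2,200,000
```

Even a modest applicant pool and a month of non-compliance can produce seven-figure theoretical exposure, which is why the escalating-fine structure matters more than the modest headline amounts suggest.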

Other Jurisdictions.  NYC is not alone in attempting to regulate the use of AI and automated decision-making in the employment and recruiting context. Maryland and Illinois also recently enacted legislation related to the use of AI during interviews, and other state and municipal jurisdictions are considering AI rules as well. Globally, various countries regulate the use of AI and AEDT, notably the EU and UK, which are widely expected to enhance current measures to address the rapidly developing capabilities of GAI. Employment issues are especially sensitive in certain regions, particularly the EU/UK, given privacy law's focus on employee monitoring/surveillance, discrimination, and the obligations related to profiling and automated decision-making. At the federal level, U.S. government agencies have indicated that AI discrimination is a priority, although it is unclear whether new legislation will follow. Several agencies, including the EEOC, for example, issued a joint statement on enforcement related to discrimination and bias earlier this year.3

Suggested Next Steps.  In the meantime, employers with an NYC presence should:

  • Review the definitions of AEDT in Local Law 144 and determine whether any of their AI use falls within the scope of the law
  • Secure an independent auditor to perform bias audit(s) for the applicable AEDT use cases and review the results
  • Develop notices for the public, employees, and candidates, as appropriate
  • Evaluate their current privacy and AI impact assessment processes and practices to accommodate the continuously developing regulations in the areas of AI, GAI, privacy and security, and risk management.

This is an area to monitor closely as there are likely to be more laws passed addressing the use of GAI in employment settings.

Footnotes

1 New York City Local Law 144 of 2021 (A Local Law to amend the administrative code of the city of New York, in relation to automated employment decision tools) (Administrative Code of the City of New York § 20-870 et seq.).
2 Rules of City of NY Dept of Consumer and Worker Protection [6 RCNY] § 5-300 et seq. See also NYC DCWP Frequently Asked Questions.
3 Consumer Financial Protection Bureau, Department of Justice Civil Rights Division, Equal Employment Opportunity Commission, and Federal Trade Commission: JOINT STATEMENT ON ENFORCEMENT EFFORTS AGAINST DISCRIMINATION AND BIAS IN AUTOMATED SYSTEMS (April 2023).