Is Anyone Following NYC’s AI Disclosure Law?

New York City’s Local Law 144, also known as the “AI Law” or the “Artificial Intelligence in Hiring Law,” addresses the use of automated employment decision tools (AEDTs) by companies in their hiring and promotion decisions. The law seeks to promote transparency around algorithmic decision-making in employment. Under Local Law 144, companies must conduct annual bias audits of their AEDTs and publish the results, and must disclose their use of AEDTs when screening job applicants based in NYC and when assessing the performance of current employees based in NYC.

Additionally, employers in NYC must satisfy certain disclosure requirements for potentially affected individuals: they must clearly and concisely explain the AEDT screening process to any applicant based in NYC at least ten business days before using the tool. Candidates may request an alternative screening process or an accommodation.

The law carries civil penalties of up to $1,500 per day, per violation. Individuals claiming discrimination resulting from use of an AEDT may bring civil actions in any court of competent jurisdiction.

Despite Local Law 144’s potentially significant implications for NYC companies of all sizes, to date there have been no enforcement actions. (The law was passed in December 2021, and enforcement began on July 5, 2023.) One reason for the lack of enforcement may be the law’s narrow definition of an AEDT: “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

Do you see the snag? A tool deployed without any human in the loop would be covered by Local Law 144. But it is relatively easy to insert human oversight somewhere into the process and thereby pull the AEDT outside the law’s scope.

Another reason Local Law 144 is not being enforced may be that it relies primarily on applicant and employee complaints to trigger action by the city’s enforcement agency. But to file a complaint, applicants or employees must both know and care that an AEDT is being used, and when companies decline to post public audit results (on the theory that a human in the loop makes the final hiring decision), individuals are less likely to be aware that AI was used in their hiring process.

To underscore this point, a study last year by the Citizens and Technology (CAT) Lab at Cornell University found that, of 391 employers examined, only 18 had published hiring-algorithm audit reports and only 13 had posted transparency notices informing job-seekers of their rights.

And even companies that do comply with Local Law 144’s disclosure requirements may not be doing enough to ensure that applicants and employees actually know about the AEDT and understand its implications (under Local Law 144, a posting on the employer’s website may suffice as notice).

In summary, although NYC led the way by passing an algorithmic transparency law back in 2021, it remains to be seen how effective the law will be in one of the largest hiring markets in the country.


Article written by Maria T. Cannon (AIGP), Associate at AMBART LAW. This article is for general information purposes only and may not be relied upon for legal advice. If you would like a consultation regarding your matter, click here to request one.
