AI Bias Audits in BigLaw Recruiting: NYC Local Law 144 Explained for Candidates
BigLaw Bear · 8 min read

If you applied to a BigLaw firm with a New York City office during the 2026 cycle, there is a real chance an automated tool was used somewhere in screening your application. New York City has a law specifically about this, called Local Law 144, and it gives you rights as a candidate. Most law students do not know it exists.
This post is the practical version. What the law requires, what your rights actually are, and what you can do if you want to opt out of AI screening. We will keep the legalese minimal.
What Local Law 144 covers
Local Law 144 took effect in January 2023, with enforcement beginning in July 2023. It applies to employers using "Automated Employment Decision Tools" (AEDTs) on candidates for positions located in New York City.
An AEDT, in the law's definition, is a computational process that produces a "simplified output" (a score, classification, or recommendation) used to "substantially assist or replace" discretionary decision-making in employment.
Translated: if a firm uses an algorithm to score, rank, or filter candidates in any meaningful way, that is an AEDT under the law.
Three things matter here:
- The law applies to NYC-based positions. If you are applying to a firm's New York office, you are covered. If you are applying to the same firm's Chicago or D.C. office, this specific law does not apply (though some other states are moving in similar directions).
- It covers tools that "substantially assist" decisions. A pure recommendation system that humans completely override is in a gray zone. A scoring tool that filters out candidates below a threshold before any human looks is squarely covered.
- The law was the first of its kind in the United States. It is the template that other jurisdictions are using as a starting point. Illinois has a similar law for video interview AI. California has been considering broader AI hiring rules. The federal EEOC has issued guidance.
What firms have to do
Three core requirements:
1. Bias audit. Any firm using an AEDT must commission an independent bias audit at least once a year. The audit measures whether the tool produces materially different outcomes for candidates by race, ethnicity, and sex, using statistical metrics defined in the law (impact ratio, scoring rate; a worked example follows this list). The audit results must be published on the firm's website before the tool can be used.
2. Candidate notification. Firms must notify candidates, at least 10 business days in advance, that an AEDT will be used. The notification has to identify the job qualifications and characteristics the tool evaluates and explain how candidates can request information about the tool's data sources.
3. Opt-out and alternative process. Candidates have the right to request an alternative selection process or accommodation. In practice, the law does not say firms have to grant the request, but it says they cannot retaliate against you for asking.
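To make the audit numbers concrete, here is a minimal sketch of the impact-ratio arithmetic. The category labels and counts are invented for illustration, and this is not any particular firm's audit methodology; the basic idea under the city's rules is that an impact ratio is one group's selection (or scoring) rate divided by the rate of the most-selected group.

```python
# Illustrative sketch of the impact-ratio arithmetic behind a Local Law 144
# bias audit. All numbers and category labels are hypothetical.

def selection_rate(selected, applied):
    """Fraction of applicants in a category who were selected
    (for scoring-type tools, the fraction scoring above the median)."""
    return selected / applied

# Hypothetical screening outcomes, broken out by one protected category
rates = {
    "category_a": selection_rate(selected=120, applied=400),  # 0.30
    "category_b": selection_rate(selected=90, applied=360),   # 0.25
}

# Impact ratio: each category's rate divided by the highest category's rate
highest = max(rates.values())
impact_ratios = {group: rate / highest for group, rate in rates.items()}

print(impact_ratios)  # {'category_a': 1.0, 'category_b': 0.8333...}
# Ratios well below 1.0 are the "materially different outcomes" the audit
# is meant to surface.
```

When you skim a published audit later in this post, that ratio is the number to look at: 1.0 means the groups fared the same under the tool, and the further a group sits below 1.0, the larger the disparity the tool produced.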
Your rights as a candidate
Stripped down to plain English:
You have a right to be told. If a NYC-based BigLaw firm is using an AEDT to screen your application, they are required to tell you. Look in the application packet, the email confirmations, or the firm's career page (often at the bottom in fine print). The notice usually links to the firm's most recent bias audit results.
You have a right to see the audit. Bias audit results are public. They will not tell you the firm's recipe (the actual scoring weights and inputs are proprietary), but they will show you whether the tool produced disparate outcomes across protected classes. If a firm is using an AEDT and has not published an audit, that is itself a Local Law 144 violation.
You have a right to ask for an alternative process. Send the recruiting coordinator a request that your application be evaluated without the AEDT, and explain why. Firms can decline, but most firms with serious recruiting operations will route an opt-out request to a human review path. They do not want a public Local Law 144 complaint.
You cannot be retaliated against for asking. Asking for an opt-out is not a signal that you are difficult, and the law explicitly protects you. In practice, the worst that happens is "we will note your request" and your application gets reviewed by a person.
What this looks like in real applications
Most large firms with NYC offices have updated their application portals to include a brief AEDT notice. The most common form is a checkbox on submission that says something like:
"We use automated tools to assist in evaluating applications. By submitting, you acknowledge that you have received notice of this use. You may request an alternative process by contacting [email]."
Many candidates click through this in two seconds without reading. Now you know what that line is, and what your options are.
A handful of firms have moved further: explicit landing pages with the bias audit linked, clear opt-out instructions, and a description of what the AEDT actually evaluates. Those firms tend to be the ones with the strongest in-house compliance functions or the most exposure to public scrutiny.
A handful of firms have done the bare minimum or skipped notification entirely. If you are applying to a NYC-based BigLaw firm and there is no notice anywhere in your application materials about AEDT use, one of two things is true: they are not using one (some firms still do all screening manually), or they are out of compliance. Checking whether the firm has a published audit on its site is the easiest way to tell which.
What this does NOT do for you
A few honest limitations:
- It does not stop firms from using AI. The law regulates use, it does not prohibit it. Most firms continue to use AEDTs after publishing audits.
- It does not give you the AEDT's actual logic. You can see the audit results, not the model itself. If you want to understand what the tool is rewarding or penalizing, the audit is your only window, and it is a small one.
- It only covers NYC-based positions. Applications to a firm's other offices are governed by other rules (or none at all, depending on state).
- Bias audits do not always reveal real issues. A model that produces statistically similar outcomes across protected classes can still produce systematically worse outcomes for, say, candidates from non-T14 schools or non-traditional backgrounds. Local Law 144 does not address those dimensions.
Other jurisdictions to know about
Local Law 144 is the most concrete, but the rules are moving:
- Illinois Artificial Intelligence Video Interview Act. If a firm uses AI to analyze video interviews of Illinois candidates, they have to disclose, get consent, and (in some cases) submit demographic data to the state.
- California. The Civil Rights Council has issued draft regulations on AI in hiring that would impose audit and notice requirements similar to Local Law 144 across the state.
- Federal EEOC guidance. Title VII of the Civil Rights Act applies to AI tools just as it applies to any other employment selection procedure. The EEOC has published guidance about AI-driven adverse impact.
- Colorado AI Act. Effective February 2026, Colorado requires "developers" and "deployers" of high-risk AI systems (including hiring tools) to take reasonable care to avoid algorithmic discrimination, with disclosure and impact assessment requirements.
The trajectory is clear: AI use in hiring is being regulated, unevenly, in more places. Whether this matters to you as a candidate depends on where you are applying.
What to actually do
If you want to be informed but not paranoid:
- Read the fine print on each application. Look for the AEDT notice. It is usually short and easy to miss.
- Check the firm's site for the published audit. A quick search of "[firm name] local law 144 audit" usually finds it. Skim the impact ratio. If it is far from 1.0, that is meaningful information about the tool.
- If you want to opt out, ask. Email the recruiting coordinator. Be polite, be brief, do not editorialize. "I would like to request an alternative selection process under NYC Local Law 144" is a complete sentence. Most firms will not push back.
- Keep records. If you opt out and your application gets a faster rejection than peers, you have at minimum a paper trail. Whether that trail leads anywhere is a different question, but having it is better than not.
The bigger picture
The candidates most affected by AI in hiring are the ones with non-traditional backgrounds: career changers, first-gen law students, candidates from less-represented schools, people whose resumes do not match the patterns the model was trained on. Those are exactly the candidates who benefit most from a human review path. Local Law 144 is an imperfect tool, but it is one of the few formal mechanisms law students can use to ask for that review.
If you want a broader read on how AI is reshaping the BigLaw recruiting funnel, see our post on how AI changed BigLaw recruiting in the 2026 cycle. For practical advice on AI in your application materials, see our take on using AI for cover letters.
The law is not going to be settled by the next cycle. Knowing the rules of the cycle you are in is one of the few things you actually control.