Transparency · Last reviewed May 2026

How our AI works, what it sees, and how we keep it fair

Big Law Bear uses AI for two things: helping recruiters sort applications faster, and assisting interviewers during live conversations. We do not use AI to make hiring decisions on its own. Every signal below is reviewed by a human at a real firm before it affects your candidacy. This page describes each tool in plain English, the data it sees, the bias monitoring we run, and how to opt out.

The short version

  • AI helps recruiters; AI does not hire or reject anyone on its own.
  • Inputs are limited to what you share with us. We do not buy or scrape information about you.
  • Protected characteristics (race, gender, age, etc.) are never AI inputs.
  • You can opt out without penalty - your application still gets human review.
  • We run NYC LL144-style bias audits on the prescreen tool every quarter and publish the results.

The AI features on Big Law Bear

We ship two AI-assisted features that affect candidate evaluation, plus one structured-extraction utility:

Selection support

Prescreen scoring

What it does

Reads the application materials you submit (cover letter, custom-question responses, optional supplemental documents) and produces a 0-100 composite score, a tier (top / strong / maybe / weak / pass), five sub-axis scores, and a 2-3 sentence rationale for the recruiting team.
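To make the output concrete, here is a minimal sketch of the shape described above. The field names, the validation checks, and the axis labels in the example are illustrative assumptions, not Big Law Bear's actual schema.

```python
from dataclasses import dataclass

TIERS = ("top", "strong", "maybe", "weak", "pass")

@dataclass
class PrescreenResult:
    """Hypothetical shape of one prescreen output (names are assumptions)."""
    composite: int        # 0-100 composite score
    tier: str             # one of TIERS
    axes: dict[str, int]  # five sub-axis scores
    rationale: str        # 2-3 sentence summary for the recruiting team

    def __post_init__(self) -> None:
        # Sanity checks matching the ranges stated in the text above.
        assert 0 <= self.composite <= 100
        assert self.tier in TIERS
        assert len(self.axes) == 5

# Example with made-up axis names and values:
result = PrescreenResult(
    composite=82,
    tier="strong",
    axes={"writing": 85, "fit": 78, "experience": 80,
          "motivation": 84, "detail": 81},
    rationale="Clear, specific cover letter; strong practice-area fit.",
)
```

Every field in such a record is surfaced to the human reviewer; none of it reaches the candidate.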

Data it sees

Cover letter text, custom-question responses, application metadata (school, year, practice interests). Resume content is included only if you uploaded one.

Data it never sees

Demographic data (race, gender, age, ZIP code, names on photos, LSAT/grade percentiles). The model is shown a redacted version of your file.

Human review

Every prescreen output is shown to a human firm reviewer alongside your full application. The reviewer can override the score, add their own notes, or ignore the AI entirely. The AI output never reaches you and never directly causes acceptance or rejection.

How to opt out

To opt out, contact us via the form at /help/contact (category: AI fairness) and we will mark your applications as no-prescreen for the current cycle. A self-serve toggle in your account settings is on the roadmap but not yet shipped.

Interview support (limited rollout)

Interview copilot (note pasting + post-call transcription)

What it does

Today the copilot is interviewer-driven: a partner can paste a 1-3 sentence snippet of what was just said and the assistant offers follow-up question prompts. After the call, video recordings can be transcribed to a written summary for the firm's records. A live, in-call transcript with real-time suggestions is on the roadmap; it is NOT in production today.

Data it sees

Pasted snippets and post-call audio (when the firm has uploaded the recording). Big Law Bear does not retain raw audio after 24 hours; transcripts are retained for the duration of the recruiting cycle and then deleted.

Data it never sees

The interviewer’s assessment of the candidate; how the candidate “scored”; the interviewer’s decision; or any biographical data outside the conversation.

Human review

The assistant only suggests; the interviewer is responsible for what they say and how they evaluate. There is no AI-generated “score.”

How to opt out

Illinois candidates, and any candidate who declines consent under the Illinois AI Video Interview Act, will have their interviews conducted without the assistant. Recordings are still made if the firm requires them, but no AI processing occurs.

Utility (not a selection tool)

Resume parsing on signup

What it does

If you upload a resume during signup, we extract structured fields (name, law school, GPA if listed, LSAT or GRE if listed, work-experience years, address) to populate your profile so you don’t have to retype them. Practice-area interests are not auto-extracted; you pick those manually.

Data it sees

Only the resume you choose to upload.

Data it never sees

No demographic inference. No ‘quality’ assessment of the resume.

Human review

Not applicable: this is structured extraction, not evaluation. You can edit every field after extraction, or skip the upload and type your fields manually.

How to opt out

Just don’t upload a resume. Type your fields directly.

What data the AI sees, and what it never sees

The single most common question we get is “does the AI know my race / gender / school ranking?” For protected characteristics, the answer is no. The complete picture is in the table below.

Input                                   Prescreen   Copilot   Resume parser
Cover letter / written responses        ✓           -         -
Resume content (if uploaded)            ✓           -         ✓
Self-reported school + class year       ✓           -         ✓
Live interview audio                    -           ✓         -
Race, gender, age, sexual orientation   ✗           ✗         ✗
Disability status / accommodations      ✗           ✗         ✗
ZIP code, address, photos               ✗           ✗         ✗
Past hiring outcomes from other firms   ✗           ✗         ✗
Inferred demographic data               ✗           ✗         ✗

(✓ = used as input · - = not used · ✗ = never an input)

For interview copilot, the data scope is restricted to the audio of that interview only. Past interviews from other firms or other candidates do not influence the assistant’s suggestions.

How we monitor for bias

Algorithmic bias does not announce itself. We treat every AI-touched output as a potential source of bias and run the following checks on a recurring basis:

  1. NYC LL144 four-fifths rule audit - before the prescreen tool is enabled for any NYC-scope role, an independent auditor evaluates the selection rates by demographic group on a rolling 12-month sample. If any group’s selection rate falls below 80% of the highest-rate group, the tool is gated until a model retrain or threshold change resolves the gap. Results are posted at /legal/nyc-aedt-notice.
  2. Monthly demographic parity check - we sample prescreen outputs across a month and compare the distribution of scores against the application population, controlling for the law-school cluster the candidate is in. If the spread on any group exceeds the band we expect from noise alone, an alert fires and the tool is paused until an engineer + recruiting-ops lead sign off.
  3. Hold-out validation on every model update - before any change reaches production, we evaluate it on a hold-out set that includes adversarial examples (different writing styles, non-traditional backgrounds, immigrant / non-native English applicants, applicants who chose to not disclose school name) to ensure the new model does not regress on those cohorts.
  4. Drift dashboard - the operations team monitors a live dashboard of score distributions, complaint rates, and override rates (how often firm reviewers disagree with the AI). A spike in any of these is treated as a signal to investigate, not noise.
  5. External validation - once a year, a third-party reviewer audits the prescreen tool methodology and validates that the published bias-audit results reflect what the system actually does in production.
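The four-fifths check in item 1 is a simple ratio test, and can be sketched as follows. The group names and selection rates here are made-up sample data; the real audit is run by an independent auditor on production records.

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

def passes_four_fifths(selection_rates: dict[str, float]) -> bool:
    """The tool is gated if any group falls below 80% of the top rate."""
    return all(ratio >= 0.8 for ratio in impact_ratios(selection_rates).values())

# Made-up example: group_c is selected at 0.21 / 0.30 = 70% of group_a's
# rate, below the 80% threshold, so the tool would be gated.
sample = {"group_a": 0.30, "group_b": 0.27, "group_c": 0.21}
gated = not passes_four_fifths(sample)
```

A threshold change or model retrain would have to bring every group's ratio back above 0.8 before the tool is re-enabled for NYC-scope roles.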

Human in the loop, always

No AI output on Big Law Bear directly causes an outcome for a candidate. Specifically:

  • A prescreen score does not auto-reject anyone. It surfaces alongside the application for a human reviewer to consider, accept, or ignore.
  • Interview copilot suggestions do not produce a candidate score, hire/no-hire flag, or ranking. They are conversational prompts the interviewer can use or skip.
  • Resume-parsing extractions do not auto-disqualify a candidate based on missing fields. Every extracted field is editable.

Big Law Bear logs every override (recruiter disagreed with the AI) so we can study patterns. Persistent one-direction overrides on a particular cohort would be a signal of bias and trigger the parity check above.

Your rights as a candidate

  • Opt out of AI prescreen by writing to us via /help/contact (category: AI fairness). We mark your applications as no-prescreen for the current cycle. Your application stays in the queue; only the AI notes are suppressed for the firm. A self-serve toggle in account settings is on the roadmap.
  • Decline AI interview consent (Illinois and any candidate who chooses) means the assistant is disabled for your interviews. See /legal/ai-interview-consent.
  • Request what we have on you - under GDPR / CCPA you can request a copy of your data, including any AI-generated notes, by emailing privacy@biglawbear.com. We respond within 30 days.
  • Challenge an outcome - if you believe an AI-assisted decision was unfair, write to the same address. Big Law Bear will review the logged AI output, the human override decision, and respond with what we found.
  • Adverse-action notice - if a firm using Big Law Bear declines you and AI was a contributing input, the firm is responsible for providing the notice required by the FCRA (where applicable) and any state law equivalents.

For firms using Big Law Bear

You are the “employer” under EEOC guidance, NYC LL144, IL AIVA, and equivalent state laws. Big Law Bear is the tool vendor. Your firm is responsible for the candidate-facing notices and any required disclosures. We provide:

  • Pre-built AEDT candidate notice email that goes out automatically on application submission for NYC-scope roles, with links to this page and /legal/nyc-aedt-notice.
  • Per-firm AIVA consent capture (voice memos, post-call transcription, copilot scopes). AEDT opt-out is tracked separately on the candidate notice record.
• Audit-trail rows on every prescreen run (provider, model, tokens, cost, partner action of agreed or overrode) and on every AI interview artifact (voice, transcript, copilot). Exportable on request.
  • Bias-audit input form at /portal/settings/compliance where your firm uploads the URL of its annual third-party bias audit. A surfaced firm-side compliance dashboard (notice-served rate, model-run gating) is planned but not yet shipped.
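As an illustration of what one exported audit-trail row for a prescreen run might look like, here is a hypothetical sketch built from the fields named above. The field names, types, and sample values are assumptions, not the export's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PrescreenAuditRow:
    """Hypothetical audit-trail row (schema is an assumption)."""
    provider: str        # model vendor for this run
    model: str           # model identifier used
    tokens: int          # tokens consumed
    cost_usd: float      # metered cost of the run
    partner_action: str  # "agreed" or "overrode"
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Made-up example row:
row = PrescreenAuditRow(provider="example-vendor", model="scorer-v2",
                        tokens=1850, cost_usd=0.012,
                        partner_action="overrode")
```

Rows like this are what make the override-pattern analysis described under “Human in the loop, always” auditable after the fact.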

Found a problem?

If you think an AI feature on Big Law Bear is unfair, opaque, or simply wrong, we want to know.

File a report at /help/contact (category: AI fairness). It lands in our admin triage queue and is reviewed by the engineering and recruiting-operations leads jointly. We treat fairness reports as P0 incidents and respond within two business days. The reporter is not identified to the firm using the tool.

This page is part of Big Law Bear’s transparency stack. It is reviewed every 6 months by outside counsel and the engineering lead. The current revision is dated above. Substantive changes are announced on the blog and reflected in the version note. Older versions are archived and available on request.