Artificial intelligence is now embedded in modern hiring, applicant screening, employee promotion, and performance evaluation systems. From resume scanners to video interview analytics and automated promotion recommendations, algorithmic decision tools are becoming increasingly common. But one message from federal enforcement agencies is becoming unmistakably clear: you are responsible for your AI.
The U.S. Equal Employment Opportunity Commission (EEOC), which is responsible for enforcing federal laws prohibiting workplace discrimination, harassment, and retaliation, has expanded enforcement around algorithmic decision tools, particularly where they may create unlawful disparate impact. Employers cannot shift liability to vendors. If an AI tool discriminates, the employer using it is accountable.
The EEOC’s 2023–2024 Technical Guidance: Vendor Promises Are Not a Defense
In 2023 and 2024, the EEOC released technical assistance guidance addressing the use of artificial intelligence in employment decisions. The Commission emphasized that existing civil rights laws — including Title VII, the ADA, and the ADEA — are fully applicable to algorithmic tools.
A key takeaway for employers is straightforward: relying on a vendor’s claim that a tool is “bias‑free” is not enough. Employers must independently assess whether their tools create a disparate impact on protected groups.
According to the EEOC’s guidance, employers should:
- Understand how the tool works and what data it uses.
- Request validation studies demonstrating job‑relatedness and business necessity.
- Analyze outcomes for statistical disparities across protected categories (a common first‑pass check is sketched below).
- Maintain documentation of their review and oversight process.
If a screening algorithm disproportionately screens out women, older workers, or individuals of a particular race, the employer must be prepared to demonstrate that the tool is job‑related and consistent with business necessity — and that no less discriminatory alternative exists.
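One widely used first‑pass check is the "four‑fifths rule" from the Uniform Guidelines on Employee Selection Procedures: if any group's selection rate is less than 80% of the rate for the most‑selected group, the disparity generally warrants closer scrutiny. The sketch below applies that heuristic to hypothetical screening counts; the group labels, numbers, and function names are invented for illustration and do not substitute for a validated statistical analysis performed with counsel.

```python
# Illustrative four-fifths (80%) rule check on hypothetical screening outcomes.
# All counts and group labels below are invented for demonstration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the automated screen."""
    return selected / applicants

# Hypothetical outcomes from an automated resume screen.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},  # 30% selection rate
    "group_b": {"applicants": 350, "selected": 70},   # 20% selection rate
}

rates = {group: selection_rate(o["selected"], o["applicants"])
         for group, o in outcomes.items()}
benchmark = max(rates.values())  # the most-selected group sets the benchmark

for group, rate in rates.items():
    ratio = rate / benchmark  # "impact ratio" relative to the benchmark group
    status = "flag for review" if ratio < 0.8 else "within four-fifths"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {status}")
```

Here, group_b's 20% selection rate is only two‑thirds of group_a's 30% rate, so the tool would be flagged for review. Keep in mind that the four‑fifths rule is a rough screening heuristic, not a safe harbor: a passing ratio does not immunize a tool, and larger applicant pools generally call for statistical significance testing as well.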
The iTutorGroup Settlement: A Real‑World Warning on Age Discrimination
The EEOC’s settlement with iTutorGroup illustrates how algorithmic screening can lead to enforcement action. In that case, the EEOC alleged that the company’s automated hiring system rejected female applicants aged 55 or older and male applicants aged 60 or older.
The result was a settlement requiring monetary relief of $365,000, reporting obligations, and changes to hiring practices. The key lesson for employers is not simply that age discrimination laws apply; it is that automated filters can create unlawful age thresholds without meaningful human oversight.
Even if no one at the company intended to discriminate, the algorithm’s structure produced a discriminatory outcome. That outcome alone was sufficient to trigger enforcement.
DOJ and EEOC Joint ADA Guidance: Accessibility Is Mandatory
The Department of Justice (DOJ) and the EEOC jointly issued guidance clarifying that AI tools must comply with the Americans with Disabilities Act (ADA). This includes ensuring that automated hiring systems do not screen out qualified individuals with disabilities or fail to provide reasonable accommodations.
For example, an AI‑based video interview platform that evaluates facial expressions or speech patterns may disadvantage applicants with neurological conditions, speech impairments, or other disabilities. If the employer does not offer an alternative assessment method, it could face ADA liability.
The joint guidance makes clear that employers must provide accommodations when AI tools are used in hiring or evaluation processes. Failure to do so can constitute discrimination — even if the tool was designed by a third party.
Practical Compliance Steps for Employers
Given the EEOC’s expanding focus on algorithmic discrimination, employers should take proactive steps now:
- Conduct regular disparate impact analyses on AI hiring and promotion tools.
- Implement human review mechanisms for automated rejections.
- Ensure accessibility and establish a clear accommodation process.
- Train HR and compliance teams on AI‑related discrimination risks.
- Review vendor contracts to require transparency and cooperation in audits.
Employers operating in Virginia, Maryland, and the District of Columbia should be particularly attentive, as federal enforcement trends often intersect with evolving state and local scrutiny of automated employment decision tools.
An AI Hiring Compliance Checklist for Employers
For most employers, the biggest AI risk is not that a tool exists, but that it is adopted informally and then quietly drives high‑impact decisions with no audit trail. If you use (or are considering using) algorithmic tools to screen resumes, rank candidates, score interviews, predict “flight risk,” or flag promotion readiness, treat that tool like any other compliance‑sensitive vendor product—especially in the DMV market, where multi‑jurisdiction hiring is common.
Below is a practical checklist you can use to pressure‑test your current process and reduce the chance that an AI‑assisted decision becomes the centerpiece of a disparate‑impact or ADA accommodation dispute.
- Inventory every tool that influences hiring, promotion, or performance decisions (ATS “screening” filters, automated interview scoring, assessment platforms, and background screening decision engines).
- Confirm what inputs the tool actually uses (education proxies, graduation year, zip code, employment gaps, speech or affect analysis, keystroke patterns, or video signals) and whether those inputs could correlate with protected characteristics.
- Ask the vendor for validation documentation, testing methodology, and any disparate‑impact analyses—then require periodic re‑testing after material model or dataset changes.
- Build a clear “human‑in‑the‑loop” policy: who can override the tool, what documentation is required, and how the decision is reviewed for consistency.
- Create an accommodation path for applicants and employees with disabilities (including an alternative assessment method) and train recruiters to route requests appropriately.
- Maintain document retention discipline: preserve tool outputs, decision rationales, and key communications so you can explain the decision‑making process if an EEOC charge arrives (one possible record format is sketched below).
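To make the human‑review and retention items concrete, the sketch below shows one way to structure a review record so that the tool's recommendation, any override, and the reviewer's rationale are preserved together. The field names, tool name, and values are hypothetical placeholders; the actual format should be adapted to your ATS and retention schedule.

```python
# A hypothetical record format for documenting human review of an automated
# screening decision. Field names and values are illustrative placeholders.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    tool_name: str            # which automated tool produced the recommendation
    tool_version: str         # model/dataset version in effect at decision time
    applicant_ref: str        # internal reference ID, not raw personal data
    tool_recommendation: str  # what the tool recommended
    human_decision: str       # the final outcome after human review
    override: bool            # True when the reviewer departed from the tool
    rationale: str            # the reviewer's documented, job-related reason
    reviewer: str             # who made the final call
    reviewed_at: str          # UTC timestamp of the review

record = ReviewRecord(
    tool_name="resume-screener",
    tool_version="2024-06",
    applicant_ref="APP-10293",
    tool_recommendation="reject",
    human_decision="advance",
    override=True,
    rationale="Tool penalized a three-year employment gap; candidate meets the posted requirements.",
    reviewer="recruiter-07",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)

# Persist as JSON so the rationale survives alongside the tool's output.
print(json.dumps(asdict(record), indent=2))
```

Whatever format you use, the point is the same: if a charge arrives years later, you should be able to show what the tool recommended, who reviewed it, and why the final decision was job‑related.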
Conclusion: Accountability Cannot Be Automated Away
Artificial intelligence may streamline hiring and evaluation processes, but it does not reduce legal exposure. Federal agencies have made clear that civil rights protections apply with full force to algorithmic systems. If your organization uses AI in hiring or promotions, now is the time to review those systems, document your oversight, and ensure that your technology aligns with federal nondiscrimination requirements. The cost of inaction may far exceed the cost of compliance.
Finally, make sure your internal stakeholders understand the core point: regulators will not treat “the vendor did it” as a defense. If the tool is part of your employment process, you own the outcome, and you need governance, testing, and documentation to match.
If you have questions concerning your business’s use of AI in hiring or promotion programs, or need help with a specific employment law issue in Virginia, Maryland, or the District of Columbia, please contact Doug Taylor at (703) 525-4000 or rdougtaylor@beankinney.com.
This article is for informational purposes only and does not contain or convey legal advice. Consult an attorney. Any views or opinions expressed herein are those of the author and are not necessarily the views of the firm or any client of the firm.

