The EEOC's Position on AI Hiring Tools
The Equal Employment Opportunity Commission has made its position clear: employers are responsible for the outcomes of their AI hiring tools, regardless of whether those tools were developed by third-party vendors. The EEOC's guidance, issued through technical assistance documents and enforcement actions, applies existing anti-discrimination frameworks to modern AI technology.
This article summarizes the EEOC's key guidance, explains its practical implications, and provides actionable steps for compliance.
Key EEOC Guidance Documents
Technical Assistance on AI and Title VII (May 2023)
The EEOC's landmark guidance confirms that:
- Title VII applies fully to AI hiring decisions: There is no AI exception to federal anti-discrimination law
- Employers are liable even when using vendor tools: Relying on a vendor's assurance that a tool is "unbiased" does not create a defense
- Disparate impact analysis applies: AI tools that produce disproportionate outcomes for protected groups can violate Title VII, even without discriminatory intent
- The four-fifths rule applies: The EEOC will use the same adverse impact analysis framework for AI tools as for traditional selection procedures
ADA and AI (May 2022)
The EEOC has also addressed AI hiring tools under the Americans with Disabilities Act:
- AI tools that screen out candidates with disabilities may violate the ADA
- Employers must provide reasonable accommodations for candidates who cannot effectively interact with AI tools
- Video interview analysis that evaluates facial expressions or speech patterns may discriminate against candidates with disabilities
Age Discrimination and AI
While the EEOC has not yet issued dedicated guidance on age discrimination and AI, it has indicated that:
- AI tools trained on historical data may perpetuate age discrimination
- Proxy variables like graduation year, years of experience caps, or technology proficiency requirements can create age-based adverse impact
- The Age Discrimination in Employment Act (ADEA) applies to AI hiring decisions for workers 40 and older
Practical Implications for Employers
1. Vendor Agreements Are Not a Shield
The EEOC has explicitly rejected the argument that employers can delegate compliance responsibility to AI vendors. Even if your vendor certifies that its tool has been tested for bias, you remain liable for discriminatory outcomes. This means you need your own independent validation.
2. The Four-Fifths Rule Remains the Starting Point
The EEOC continues to use the four-fifths rule as an initial screen for adverse impact. If your AI tool's selection rate for any protected group is less than 80% of the highest group's rate, the EEOC may presume adverse impact and shift the burden to you to demonstrate job-relatedness and business necessity.
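The four-fifths screen is simple arithmetic, and it is worth running on your own pipeline data before the EEOC does. A minimal Python sketch (the group names and counts below are hypothetical, not from any real audit):

```python
def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical stage counts: 200 applicants per group.
selected = {"group_a": 60, "group_b": 42}
applicants = {"group_a": 200, "group_b": 200}

ratios = impact_ratios(selected, applicants)
# group_b's rate (0.21) / group_a's rate (0.30) = 0.70, below the 0.80 threshold
flagged = [g for g, r in ratios.items() if r < 0.80]
```

A ratio below 0.80 does not prove a violation; it is the trigger for the deeper statistical and job-relatedness analysis described below.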
3. Job-Relatedness and Business Necessity
If adverse impact is established, you must demonstrate that your AI tool is:
- Job-related: The characteristics the tool measures are actually relevant to job performance
- Consistent with business necessity: There is a compelling business reason for using the tool
This requires validation studies showing the tool actually predicts job performance, not just that it produces scores or rankings.
4. Less Discriminatory Alternatives
Even if you can demonstrate job-relatedness, you may still be liable if a less discriminatory alternative exists that serves the same business purpose. This means you should evaluate multiple tools and approaches, documenting why you chose the one you did.
5. Record-Keeping Requirements
The EEOC recommends (and in some cases requires) that employers maintain records of:
- All AI tools used in employment decisions
- Selection rates by protected group at each stage
- Validation studies and bias audit results
- Any adverse impact findings and remediation actions
- Vendor due diligence documentation
EEOC Enforcement Trends
Recent enforcement actions reveal the EEOC's priorities:
- Systemic investigations: The EEOC is conducting industry-wide investigations of AI hiring tool use, not just responding to individual complaints
- Commissioner charges: EEOC commissioners have initiated charges against specific AI hiring tool vendors and employers
- Conciliation agreements: Several employers have entered into conciliation agreements requiring ongoing monitoring and bias auditing of AI tools
- Litigation: The EEOC has filed lawsuits challenging specific AI hiring practices, seeking to establish legal precedent in this area
Compliance Action Plan
Immediate Steps
- Identify all AI tools used in hiring, promotion, and termination decisions
- Obtain documentation from each vendor about the tool's methodology, training data, and any bias testing performed
- Calculate impact ratios for each tool across race, sex, age, and disability status
- If any ratio is below 0.80, conduct a deeper analysis with statistical significance testing
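For the deeper analysis, a standard starting point is a chi-squared test on the 2x2 selected/rejected table for each group pair. A stdlib-only sketch with hypothetical counts follows; in practice you would likely use a statistics library (for example, SciPy's `chi2_contingency` or, for small samples, `fisher_exact`):

```python
def chi_squared_2x2(a: int, b: int, c: int, d: int) -> float:
    """Chi-squared statistic (1 df, no continuity correction) for a 2x2
    table: (a, b) = group 1 selected/rejected, (c, d) = group 2."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: group A selected 60 of 200, group B selected 42 of 200.
stat = chi_squared_2x2(60, 140, 42, 158)

CRITICAL_05 = 3.841  # chi-squared critical value, df = 1, alpha = 0.05
significant = stat > CRITICAL_05
```

Here the statistic (about 4.26) exceeds the 0.05 critical value, so the disparity behind the 0.70 impact ratio is unlikely to be random noise at this sample size, and the finding should be documented along with any remediation steps.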
Ongoing Practices
- Conduct annual bias audits of all AI hiring tools
- Maintain records of selection rates by protected group at every decision stage
- Document the job-relatedness and business necessity of each AI tool
- Review vendor contracts to ensure they include indemnification for discriminatory outcomes
- Train HR staff and hiring managers on AI compliance responsibilities
- Establish a process for candidates to report concerns about AI-influenced decisions
How OnHirely Aligns with EEOC Requirements
OnHirely's audit methodology is built on the EEOC's adverse impact framework. The platform calculates impact ratios using the four-fifths rule, runs statistical significance tests (chi-squared and Fisher's exact), performs intersectional analysis, and generates documentation that demonstrates proactive compliance efforts. This approach directly addresses the EEOC's expectations for employer accountability in AI hiring.
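Intersectional analysis matters because a disparity can disappear under single-axis aggregation. The following is a sketch of the general technique, not OnHirely's actual implementation; the counts are hypothetical and deliberately constructed so that each axis alone looks balanced while two intersections fall below the four-fifths threshold:

```python
# Hypothetical counts keyed by (race_group, sex): (selected, applicants).
counts = {
    ("group_a", "men"):   (20, 100),
    ("group_a", "women"): (40, 100),
    ("group_b", "men"):   (40, 100),
    ("group_b", "women"): (20, 100),
}

rates = {k: s / n for k, (s, n) in counts.items()}
top = max(rates.values())  # highest subgroup rate: 0.40
flagged = {k for k, r in rates.items() if r / top < 0.80}

# Single-axis aggregation hides the problem: every race group and every sex
# selects at exactly 30%, so all one-dimensional impact ratios equal 1.0.
men_rate = sum(s for (race, sex), (s, n) in counts.items() if sex == "men") / 200
```

In this constructed example, `("group_a", "men")` and `("group_b", "women")` are flagged at a 0.50 ratio even though neither race nor sex shows any disparity on its own, which is precisely why auditing only single protected attributes can miss adverse impact.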