Navigate the complex landscape of AI in hiring, performance management, and workforce analytics, with a focus on anti-discrimination compliance and emerging regulations like NYC Local Law 144.
AI is transforming human resources across the employment lifecycle, from recruitment to performance management and workforce planning.
Employment AI must comply with anti-discrimination laws prohibiting bias based on protected characteristics.
| Jurisdiction | Key Laws | Protected Characteristics |
|---|---|---|
| US Federal | Title VII, ADA, ADEA | Race, color, religion, sex, national origin, disability, age (40+) |
| EU | Employment Equality Directive | Age, disability, religion, sexual orientation + national laws |
| UK | Equality Act 2010 | Nine protected characteristics including gender reassignment |
| India | Constitution Art. 14-16 | Religion, race, caste, sex, place of birth |
Even without intentional discrimination, AI hiring tools can produce disparate impact on protected groups. Under the EEOC's four-fifths (80%) rule, a selection rate for any protected group that is less than 80% of the rate for the highest-selected group is generally regarded as evidence of adverse impact, exposing the employer to potential liability.
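The four-fifths rule can be checked with simple arithmetic. The sketch below is illustrative only, with hypothetical applicant counts, and is not legal advice or a complete adverse-impact analysis:

```python
# Four-fifths (80%) rule check -- a minimal illustrative sketch.
# Selection rate = selected / total for each group; each group's rate is
# compared to the highest group's rate, and groups below 80% are flagged.

def selection_rates(counts):
    """counts: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def four_fifths_flags(counts, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's selection rate."""
    rates = selection_rates(counts)
    top = max(rates.values())
    return {g: (r / top) < threshold for g, r in rates.items()}

# Hypothetical data: group_b's rate (0.30) is 62.5% of group_a's (0.48),
# which is below 80%, so group_b is flagged.
counts = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_flags(counts))  # {'group_a': False, 'group_b': True}
```

A flag here is a starting point for investigation, not a legal conclusion; courts and the EEOC also consider statistical significance and sample size.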
New York City's Local Law 144 is a landmark regulation requiring annual, independent bias audits of automated employment decision tools (AEDTs), along with notice to candidates and public posting of audit results.
| Metric | Description |
|---|---|
| Impact Ratio | Selection rate for category / selection rate for most selected category |
| Categories | Sex (male, female), Race/ethnicity (as per EEOC categories) |
| Intersectionality | Impact ratios for sex categories within each race/ethnicity category |
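The impact-ratio metric in the table can be computed mechanically once selection rates per category are known. The sketch below uses hypothetical selection rates and intersectional (race/ethnicity × sex) categories; it illustrates the ratio calculation only and is not an audit methodology endorsed by any regulator:

```python
# Impact ratios in the style of a Local Law 144 bias audit -- a minimal
# sketch with hypothetical data. Each ratio is a category's selection rate
# divided by the selection rate of the most selected category.

def impact_ratios(rates):
    """rates: {category: selection_rate} -> {category: impact ratio}"""
    top = max(rates.values())
    return {c: r / top for c, r in rates.items()}

# Hypothetical selection rates by intersectional category
# (race/ethnicity, sex); real audits use EEOC race/ethnicity categories.
rates = {
    ("hispanic", "female"): 0.40,
    ("hispanic", "male"): 0.50,
    ("white", "female"): 0.45,
    ("white", "male"): 0.60,
}
ratios = impact_ratios(rates)
# ("white", "male") is the most selected category, so its ratio is 1.0;
# ("hispanic", "female") gets 0.40 / 0.60, roughly 0.67.
for category, ratio in sorted(ratios.items()):
    print(category, round(ratio, 3))
```

Note that the published Local Law 144 rules also cover scoring-based tools (using median-score rates rather than selection rates) and small-category exclusions, which this sketch omits.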
The EU AI Act classifies AI systems used for employment and worker-management purposes as high-risk, triggering stringent requirements such as risk management, data governance, human oversight, and transparency.
When using third-party HR AI tools, employers remain legally responsible for discriminatory outcomes. Conduct thorough vendor due diligence, including review of bias testing and audit reports and contractual allocation of compliance responsibilities.