Fairness Metrics

Quantitative measures used to evaluate whether an AI system produces equitable outcomes across different demographic groups.

What Are Fairness Metrics?

Fairness metrics are mathematical definitions of what it means for an AI system to be "fair." In the context of AI hiring, the most commonly used fairness metrics include:

- Demographic parity: equal selection rates across groups.
- Equalized odds: equal true positive rates and equal false positive rates across groups.
- Predictive parity: equal positive predictive values across groups.
- Calibration: equally accurate predicted probabilities across groups.
- The four-fifths rule: every group's selection rate within 80% of the highest group's rate.

A critical insight in fairness research is that these metrics can conflict with each other: it is mathematically impossible to satisfy all fairness criteria simultaneously except in trivial cases. For example, when base rates differ across groups, a model cannot achieve both demographic parity and predictive parity. Choosing which fairness metrics to prioritize is therefore a values decision, not just a technical one. In regulatory compliance, the four-fifths rule and statistical significance testing are the primary metrics. In best-practice auditing, multiple metrics should be examined to understand the full fairness profile of an AI hiring tool.
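To make these definitions concrete, here is a minimal sketch in NumPy of three of the metrics above: per-group selection rates, the four-fifths rule, and equalized-odds gaps. The group labels, sample arrays, and function names are illustrative assumptions, not the API of any particular auditing library.

```python
import numpy as np

def selection_rates(y_pred, groups):
    """Selection rate (fraction predicted positive) per group."""
    return {g: y_pred[groups == g].mean() for g in np.unique(groups)}

def passes_four_fifths(rates):
    """True if every group's selection rate is at least 80% of the highest."""
    top = max(rates.values())
    return all(r / top >= 0.8 for r in rates.values())

def equalized_odds_gaps(y_true, y_pred, groups):
    """Largest cross-group gap in TPR and in FPR (0, 0) = equalized odds."""
    tprs, fprs = [], []
    for g in np.unique(groups):
        m = groups == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # true positive rate
        fprs.append(y_pred[m & (y_true == 0)].mean())  # false positive rate
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Hypothetical screening outcomes for two applicant groups
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # qualified (1) or not (0)
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])   # model selects (1) or rejects (0)
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = selection_rates(y_pred, groups)       # {'A': 0.25, 'B': 0.75}
```

On this toy data, group A is selected at 25% versus 75% for group B, so 0.25 / 0.75 ≈ 0.33 falls well below the 0.8 threshold and `passes_four_fifths(rates)` is False; the TPR and FPR gaps from `equalized_odds_gaps` show the same disparity from the equalized-odds angle.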

Category: Bias & Fairness
