How do I evaluate demographic parity when checking for bias in classification models?
Asked on Oct 03, 2025
Answer
Evaluating demographic parity in a classification model means checking whether its predictions are statistically independent of a sensitive attribute, such as race or gender. Demographic parity holds when every demographic group receives positive predictions at (approximately) the same rate, which makes it one of the core group-fairness metrics in AI ethics.
<!-- BEGIN COPY / PASTE -->
def check_demographic_parity(predictions, sensitive_attribute, threshold=0.1):
    # predictions: pandas Series of 0/1 predictions;
    # sensitive_attribute: index-aligned pandas Series of group labels
    # Positive prediction rate for each group (mean of binary predictions)
    group_rates = predictions.groupby(sensitive_attribute).mean()
    # Largest gap in positive rates across groups
    parity_gap = group_rates.max() - group_rates.min()
    # threshold defines acceptable disparity for your context (0.1 is only an example)
    return parity_gap < threshold
<!-- END COPY / PASTE -->
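As a usage sketch, with illustrative data and an illustrative 0.1 threshold (not a recommended value):
<!-- BEGIN COPY / PASTE -->
import pandas as pd

predictions = pd.Series([1, 0, 1, 1, 0, 0])
sensitive_attribute = pd.Series(["A", "A", "A", "B", "B", "B"])
# Group A positive rate = 2/3, group B = 1/3; the gap (~0.33) exceeds 0.1
print(check_demographic_parity(predictions, sensitive_attribute, threshold=0.1))  # False
<!-- END COPY / PASTE -->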
Additional Comments:
- Demographic parity is also known as statistical parity or group fairness.
- It is important to define a threshold that reflects acceptable levels of disparity based on context.
- Consider using fairness dashboards or tools like Fairness Indicators to visualize and track demographic parity.
- Demographic parity may not always be suitable; depending on the use case, consider other fairness metrics such as equal opportunity or predictive parity (see the sketch after this list).
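If equal opportunity fits your use case better, here is a minimal sketch in the same style. It assumes you also have ground-truth labels (`y_true` below), since equal opportunity compares true positive rates across groups rather than raw positive-prediction rates:
<!-- BEGIN COPY / PASTE -->
def check_equal_opportunity(y_true, predictions, sensitive_attribute, threshold=0.1):
    # y_true, predictions, sensitive_attribute: index-aligned pandas Series
    # Among individuals whose true label is positive, each group should be
    # predicted positive at a similar rate (similar true positive rates)
    positives = y_true == 1
    tpr_by_group = predictions[positives].groupby(sensitive_attribute[positives]).mean()
    tpr_gap = tpr_by_group.max() - tpr_by_group.min()
    # As above, the threshold is context-dependent; 0.1 is only an example
    return tpr_gap < threshold
<!-- END COPY / PASTE -->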