Judiciously Reducing Sub-group Comparisons for Learning Intersectional Fair Representations
Clara Rus, Andrew Yates, Maarten de Rijke
Published in ECIR 2026, 2025
Ensuring fairness in ranking systems is critical to avoid discriminatory outcomes towards minority groups in high-stakes domains such as recruitment. Most fairness interventions address fairness only for one or more binary groups, without accounting for intersectional fairness. We study the problem of achieving intersectional fairness in ranking systems, where individuals may face compounded disadvantages. We adapt and extend pre-processing fairness intervention methods to optimize for intersectional group fairness. As the number of intersectional sub-groups grows exponentially with the number of attributes, optimization becomes computationally expensive. We propose reducing the number of sub-group comparisons when optimizing for intersectional fairness by focusing on the pairs with the highest disparities between sub-groups. We show that limiting sub-group comparisons in this way achieves comparable or better intersectional fairness. We validate this on three real-world datasets and a simulated setup designed to test robustness to intersectional fairness challenges.
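The idea of pruning sub-group comparisons can be illustrated with a small sketch. This is not the paper's implementation; it only shows, under the assumption that each sub-group has a single aggregate score (e.g. average exposure), how one might rank all pairs of intersectional sub-groups by disparity and keep only the top-k pairs for the fairness objective. The function name `top_disparity_pairs` and the score values are hypothetical.

```python
from itertools import combinations

def top_disparity_pairs(group_scores, k):
    """Keep only the k sub-group pairs with the largest score disparity.

    group_scores: dict mapping an intersectional sub-group (here a tuple of
    attribute values, e.g. (gender, ethnicity)) to an aggregate score such
    as average exposure in the ranking. Illustrative only.
    """
    # All pairwise comparisons: grows quadratically in the number of
    # sub-groups, which itself grows exponentially in the attributes.
    pairs = combinations(sorted(group_scores), 2)
    # Rank pairs by absolute disparity and keep the k largest.
    ranked = sorted(
        pairs,
        key=lambda p: abs(group_scores[p[0]] - group_scores[p[1]]),
        reverse=True,
    )
    return ranked[:k]

# Hypothetical exposure scores for 2x2 intersectional sub-groups.
scores = {
    ("f", "b"): 0.2,
    ("f", "w"): 0.5,
    ("m", "b"): 0.4,
    ("m", "w"): 0.9,
}
# With k=1, only the most disparate pair enters the fairness penalty.
print(top_disparity_pairs(scores, 1))
```

With n sub-groups there are n(n-1)/2 pairwise comparisons; optimizing a fairness penalty over only the k most disparate pairs keeps the cost bounded while still targeting the sub-groups that are worst off.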
