Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2 
Clara Rus, Jeffrey Luppes, Harrie Oosterhuis, Gido H. Schoenmacker
Published in RecSys@HR, 2022
The goal of this work is to help mitigate the existing gender wage gap by supplying unbiased job recommendations based on the resumes of job seekers. We employ a generative adversarial network to remove gender bias from word2vec representations.
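As a rough illustration of this kind of adversarial debiasing, the sketch below pairs a generator that rewrites word2vec vectors with a discriminator that tries to recover gender from them. It is a minimal PyTorch sketch under my own assumptions (network sizes, losses, and training loop are illustrative), not the exact setup used in the paper.

```python
# Minimal sketch of adversarial debiasing of word2vec vectors (illustrative only).
import torch
import torch.nn as nn

EMB_DIM = 300  # typical word2vec dimensionality (assumption)

generator = nn.Sequential(          # maps biased vectors to debiased vectors
    nn.Linear(EMB_DIM, EMB_DIM), nn.ReLU(), nn.Linear(EMB_DIM, EMB_DIM)
)
discriminator = nn.Sequential(      # tries to predict gender from the debiased vectors
    nn.Linear(EMB_DIM, 64), nn.ReLU(), nn.Linear(64, 1)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def train_step(emb, gender):
    """emb: (B, EMB_DIM) word/resume vectors; gender: (B, 1) binary labels as floats."""
    # 1) update the discriminator to predict gender from the current debiased vectors
    debiased = generator(emb).detach()
    d_loss = bce(discriminator(debiased), gender)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) update the generator: stay close to the input while fooling the discriminator
    debiased = generator(emb)
    g_loss = mse(debiased, emb) - bce(discriminator(debiased), gender)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```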
Clara Rus, Maarten de Rijke, Andrew Yates
Published in RecSys@HR, 2023
Fairness interventions require access to sensitive attributes of candidates applying for a job, which might not be available due to limitations imposed by data protection laws. In this work we propose using a pre-processing technique to create counterfactual representations of the candidates that lead to a more diverse ranking with respect to intersectional groups.
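To give a concrete flavour of such a pre-processing step, the sketch below removes the component of each candidate feature that is explained by group membership, producing "counterfactual" representations. This is a simplified illustration under my own assumptions (a linear group-effect model with one-hot group indicators); the paper's causal model and its handling of intersectional groups are more involved.

```python
# Sketch: strip the group-specific component from candidate features (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression

def counterfactual_features(X, groups):
    """X: (n, d) candidate features; groups: (n,) integer group ids (e.g. intersectional groups)."""
    G = np.eye(groups.max() + 1)[groups]          # one-hot group indicators
    model = LinearRegression().fit(G, X)          # estimate per-group feature effects
    X_cf = X - model.predict(G) + X.mean(axis=0)  # remove group effects, keep the overall scale
    return X_cf
```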
Clara Rus, Andrew Yates, Maarten de Rijke
Published in ECIR, 2024
Fairness interventions are hard to use in practice when ranking people, due to legal constraints that limit access to sensitive information. Pre-processing fairness interventions, however, can be used in practice to create fairer training data that encourages the model to generate fair predictions without access to sensitive information during inference. On two real-world datasets, the pre-processing methods are found to improve the diversity of rankings with respect to gender, while individual fairness is not affected. Moreover, we discuss the advantages and disadvantages of using pre-processing fairness interventions in practice for ranking people.
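One simple way to quantify "diversity of rankings with respect to gender" is an exposure-based measure under a position-discount browsing model. The sketch below is an illustrative metric choice, not necessarily the measure used in the paper.

```python
# Sketch: share of exposure per gender group in a ranked list, with 1/log2(rank+1) discounts.
import numpy as np

def group_exposure(genders):
    """genders: list of group labels ordered by rank (top position first)."""
    discounts = 1.0 / np.log2(np.arange(2, len(genders) + 2))  # 1/log2(rank+1)
    exposure = {}
    for g, w in zip(genders, discounts):
        exposure[g] = exposure.get(g, 0.0) + w
    total = sum(exposure.values())
    return {g: v / total for g, v in exposure.items()}  # normalized exposure share per group

print(group_exposure(["F", "M", "M", "F", "M"]))
```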
Clara Rus, Gabrielle Poerwawinata, Andrew Yates, Maarten de Rijke
Published in CIKM, 2024
We present AnnoRank, a web-based user interface (UI) framework designed to facilitate the collection of crowdsourced annotations in the context of information retrieval. AnnoRank enables the collection of explicit and implicit annotations for a specified query and one or more documents, allowing for the observation of user-selected items and the assignment of relevance judgments. Furthermore, AnnoRank supports ranking comparisons, enabling the visualization and evaluation of ranked lists generated by different fairness interventions, along with their utility and fairness metrics.
Chen Xu, Zhirui Deng, Clara Rus, Xiaopeng Ye, Yuanna Liu, Jun Xu, Zhicheng Dou, Ji-Rong Wen, Maarten de Rijke
Published in SIGIR, 2025
We introduce FairDiverse, an open-source standardized toolkit to enable comprehensive, reproducible evaluation of fairness- and diversity-aware algorithms across both search and recommendation tasks in IR.
Chen Xu, Clara Rus, Yuanna Liu, Marleen de Jonge, Jun Xu, Maarten de Rijke
Published in SIGIR, 2025
Recently, fairness-aware information retrieval (IR) systems have been receiving much attention, and numerous fairness metrics and algorithms have been proposed. The complexity of fairness and of IR systems makes it challenging to provide a systematic summary of the progress that has been made, and calls for a more structured framework to navigate future fairness-aware IR research directions. The field of economics has long explored fairness, offering a strong theoretical and empirical foundation. Its system-oriented perspective enables the integration of IR fairness into a broader framework that considers societal and intertemporal trade-offs. In this tutorial, we first highlight that IR systems can be understood as a specialized economic market. Then, we re-organize fairness algorithms along three key economic dimensions: macro vs. micro, demand vs. supply, and short-term vs. long-term. This allows us to view most fairness categories in IR from an economic perspective. Finally, we illustrate how this economic framework can be applied to various real-world IR applications and demonstrate its benefits in industrial scenarios. Unlike other fairness-aware tutorials, our tutorial not only provides a new and clear perspective to re-frame fairness-aware IR, but also inspires the use of economic tools to solve fairness problems in IR. We hope this tutorial provides a fresh, broad perspective on fairness in IR, highlighting open problems and future research directions.
Alessandro Fabris, Clara Rus, Jorge Saldivar, Anna Gatzioura, Asia Biega, Carlos Castillo
Published in IPM Journal, 2025
Personnel recruitment is increasingly mediated by Applicant Tracking Systems (ATS), which rank candidates for job positions, making them a central decision-support tool in modern HR processes. Often framed as an information retrieval (IR) problem, the ranking of candidates in ATS is typically driven by relevance to the job position, with algorithms sorting applicants according to a set of predefined criteria. In recent years, fairness-aware ranking methods have emerged to mitigate the risk of indirect discrimination, where the ordering of candidates may inadvertently favor one demographic group over another. These approaches are inspired by browsing models developed for web search and aim to balance candidate exposure based on protected characteristics. However, ATS in recruitment introduce unique challenges due to their high-stakes nature and the decision-making context in which they operate. In this paper, we present a series of user studies that explore the disconnect between fair exposure and fair outcomes in candidate shortlisting. We focus on how factors such as task design (e.g., how recruiters interact with candidate lists), individual representations of candidates (e.g., national origin cues), and ranking order influence both position bias and demographic balance. Our findings show that while demographic balance may be achieved in terms of ranking visibility, this does not necessarily translate to fair outcomes in terms of who gets shortlisted. Through a crowdsourced experiment and in-depth interviews with recruiters, we identify key task-level, individual, and ranking factors that mediate these effects. We conclude that fairness in ATS rankings is contingent not only on algorithmic design but also on the shortlisting tasks they support, as well as the interfaces, strategies, and assumptions that recruiters use when interacting with candidate lists. Based on these insights, we provide implications for the design of algorithms, interfaces, and recruitment processes that support fairer and more equitable recruitment outcomes.
Clara Rus, Masoud Mansoury, Andrew Yates, Maarten de Rijke
Under review, 2025
Clara Rus, Andrew Yates, Maarten de Rijke
Under review, 2025
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.