Many real-life applications deal with preference-based assignments. In such multi-agent problems, agents have preferences over elements (activities, resources, or even other agents), and these preferences must be aggregated into a collective decision: an assignment of agents to these elements. Nowadays, with the increasing use of algorithms and AI tools in systems governing our life choices (job recruitment, insurance coverage, university admissions), preference-based assignments can carry important consequences for the agents. Therefore, to ensure confidence and participation in the system, it is crucial to guarantee that the algorithms computing these assignments are fair to the agents. However, fairness highly depends on the decision context, e.g., its temporality, the type of reported preferences, the type of assignment, or the level of agents' knowledge. Hence, to realistically guarantee fairness in preference-based assignments, it is important to adapt fairness notions to the decision context while justifying that the proposed solutions are indeed fair. Together, these two properties of fairness concepts, adaptability and explainability, will contribute to agents' trust in, and adoption of, systems that use assignment algorithms.
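As a minimal illustration of what a preference-based assignment looks like (this sketch is not part of the project itself), the round-robin procedure from fair division lets agents take turns picking their most-preferred remaining item; it is known to guarantee envy-freeness up to one good (EF1). The agent names, items, and utility values below are hypothetical.

```python
def round_robin(agents, items, utility):
    """Agents pick, in turn, their most-preferred remaining item.

    Illustrative sketch of a fair-division procedure (round-robin,
    which guarantees EF1 for additive utilities); not an algorithm
    taken from the project description.
    """
    remaining = list(items)
    bundles = {a: [] for a in agents}
    turn = 0
    while remaining:
        agent = agents[turn % len(agents)]
        # The current agent takes the remaining item it values most.
        best = max(remaining, key=lambda item: utility[agent][item])
        bundles[agent].append(best)
        remaining.remove(best)
        turn += 1
    return bundles

# Hypothetical utilities over three items for two agents.
utility = {
    "alice": {"x": 3, "y": 2, "z": 1},
    "bob":   {"x": 1, "y": 3, "z": 2},
}
print(round_robin(["alice", "bob"], ["x", "y", "z"], utility))
# → {'alice': ['x', 'z'], 'bob': ['y']}
```

Even this tiny example shows how the decision context matters: the resulting assignment depends on the picking order, which is one of the parameters a flexible fairness concept would need to account for.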
The project proposes to investigate guarantees of fairness along two axes: the design of flexible fairness concepts able to adapt to various decision contexts so as to address real-life configurations, and the explainability of fairness in the proposed solutions. The project also has a practical dimension: providing an explanation-oriented tool for computing fair assignments.
Keywords: Artificial Intelligence, Computational Social Choice, Fairness, Matching, Resource Allocation, Fair Division, Explainability
This project is funded by the French National Research Agency (ANR grant ANR-22-CE23-0008-01).