One of the main concerns in recent AI research is that most data-driven approaches preserve the bias or unfairness present in the collected (offline) data in the resulting models, which can lead to harmful social and ethical effects. Fairness-aware machine learning has emerged to alleviate these effects by learning fair models that avoid discriminating against certain individuals and/or groups. However, these methods typically measure the immediate effect of fairness in static settings, while in many scenarios decisions made by AI systems have time-varying consequences, and whether a decision was fair is revealed only in the long term. Recent work demonstrates that modeling the immediate effect of decisions for single-step prevention of bias guarantees fairness neither in downstream tasks one step later [1,2] nor in longer-term Markov decision processes (MDPs) [3,4,5]. These works only simulate the dynamics of particular applications with known parameters to highlight the need to model the long-term implications of bias; what we require instead are methods that learn both the dynamics of populations and optimal policies that remain unbiased in the long run.
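To fix ideas, one illustrative formalization of "unbiased in the long run" (a sketch in our own notation; the benefit signal U, group label g, and tolerance epsilon are introduced here and are not taken from [1-5]) is a constrained MDP that maximizes return while bounding the discounted benefit gap between two groups a and b:

```latex
\max_{\pi} \; \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t)\Big]
\quad \text{s.t.} \quad
\Big|\, \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t} U(s_t, a_t) \,\Big|\, g = a\Big]
      - \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t} U(s_t, a_t) \,\Big|\, g = b\Big] \Big| \le \epsilon
```

Here U is a per-step benefit signal (e.g., whether a group member's application is approved) and epsilon bounds the tolerated long-term disparity; single-step notions of fairness roughly correspond to enforcing the constraint at t = 0 only.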
In this project, we aim to model the long-term effects of bias and fairness in an offline reinforcement learning [6] framework. Reinforcement learning (RL) is a learning approach for sequential decision-making problems in which an agent learns by trial and error while interacting with an environment, without an explicit supervising teacher. Offline RL, in turn, allows learning optimal/safe policies from offline data when interaction, simulation, or online learning is impractical and/or dangerous, which makes it a natural match for the problem introduced above.
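As a deliberately minimal sketch of what such a pipeline could look like (every name, the toy dataset, and the penalty mechanism below are ours for illustration; a serious approach would also handle distribution shift and estimate the gap with proper off-policy evaluation), the snippet runs tabular fitted Q-iteration on a fixed log of transitions while penalizing the reward by the estimated benefit gap between two groups:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, gamma, lam = 6, 2, 0.95, 0.5
group_of_state = np.array([0, 0, 0, 1, 1, 1])  # hypothetical: states 0-2 = group A, 3-5 = group B

# Synthetic stand-in for a logged offline dataset of transitions (s, a, r, s');
# in practice these would be historical decision records.
n = 5000
s = rng.integers(0, n_states, n)
a = rng.integers(0, n_actions, n)  # toy actions: 1 = "approve", 0 = "deny"
r = rng.normal((a == 1) * (1.0 + 0.3 * (group_of_state[s] == 0)), 0.1)  # group A benefits more
s2 = rng.integers(0, n_states, n)

q = np.zeros((n_states, n_actions))
for _ in range(100):
    # Estimate the benefit gap under the current greedy policy, using only the
    # logged steps whose action agrees with that policy (a crude filter).
    match = a == q.argmax(axis=1)[s]
    ben = [r[match & (group_of_state[s] == gi)].mean() for gi in (0, 1)]
    gap, adv = abs(ben[0] - ben[1]), int(ben[1] > ben[0])

    # Penalize approvals granted in states of the currently advantaged group,
    # so the next Q-iterate trades some return for long-run parity.
    r_fair = r - lam * gap * ((group_of_state[s] == adv) & (a == 1))

    # One sweep of tabular fitted Q-iteration over the fixed dataset:
    # average the Bellman targets observed for each (state, action) pair.
    targets = r_fair + gamma * q[s2].max(axis=1)
    sums, counts = np.zeros_like(q), np.zeros_like(q)
    np.add.at(sums, (s, a), targets)
    np.add.at(counts, (s, a), 1.0)
    q = np.divide(sums, counts, out=q.copy(), where=counts > 0)

print("greedy action per state:", q.argmax(axis=1))
```

Folding the penalty into the reward is only the simplest mechanism; a constrained or Lagrangian treatment of the long-term objective, paired with off-policy evaluation of the disparity, is the more principled direction this project should explore.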
This project requires familiarity with the basic concepts of reinforcement learning and a general understanding of fairness-aware learning. You are expected to conduct a comprehensive literature review covering the work that has been done in both domains, fairness-aware learning and offline RL, and then devise an approach that both models the long-term effects of fairness via reinforcement learning (in a sequential manner) from offline data and learns policies that prevent/mitigate bias in future decisions.