
Project: Long-term Fairness with Offline Reinforcement Learning

Description

One of the main concerns in recent AI research is that data-driven approaches tend to preserve the bias or unfairness present in the collected (offline) data, which can lead to harmful social and ethical effects on society. Fairness-aware machine learning has emerged to alleviate these effects by learning fair models that avoid discrimination against certain individuals and/or groups. However, these methods typically measure the immediate effect of fairness in static settings, while in many scenarios decisions made by AI systems have time-varying consequences, and whether a decision was fair is revealed only in the long term. Recent work demonstrates that modeling the immediate effect of decisions for single-step bias prevention does not guarantee fairness in downstream tasks one step later [1,2] or in longer-term Markov decision processes (MDPs) [3,4,5]. These works, however, only simulate the dynamics of certain applications with known parameters to highlight the need for modeling the long-term implications of bias, whereas we require methods that learn both the dynamics of populations and optimal policies that are unbiased in the long run.
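To make the feedback loop concrete, the minimal sketch below (in Python, with entirely hypothetical dynamics and numbers, in the spirit of the lending examples in [2,5]) shows how a static decision threshold that ignores population dynamics can widen a qualification gap over time:

```python
import numpy as np

# Stylized two-group dynamic (all numbers are hypothetical): each group's
# mean qualification drifts depending on the decisions applied to it, so a
# fixed threshold that looks fair per step still has long-term effects.
n_steps = 50
qual = np.array([0.6, 0.4])   # initial mean qualification per group
threshold = 0.5               # single static decision threshold

for t in range(n_steps):
    approve = qual >= threshold
    # Approvals slowly raise a group's qualification; rejections erode it,
    # a feedback loop that one-step fairness criteria do not see.
    qual = np.clip(qual + np.where(approve, 0.01, -0.02), 0.0, 1.0)

print("mean qualification after", n_steps, "steps:", qual)
# The disadvantaged group's qualification decays to zero: decisions that
# pass a static check at every step can still widen the gap in the long run.
```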

In this project, we aim to model the long-term effects of bias and fairness in an Offline Reinforcement Learning [6] framework. Reinforcement Learning (RL) is a learning approach for sequential decision-making problems in which an agent learns via trial and error while interacting with an environment, without an explicit supervising teacher. Offline RL, in turn, allows learning optimal/safe policies from previously collected data when interaction, simulation, or online learning is impractical and/or dangerous, which makes it a perfect match for the problem introduced above.
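As a rough illustration of the kind of pipeline we have in mind, the sketch below runs fitted Q-iteration over a fixed, logged dataset with a fairness penalty folded into the reward. The tabular MDP, the penalty function, and all parameters are hypothetical placeholders, not a prescribed method:

```python
import numpy as np

# Minimal tabular sketch of fairness-regularized offline RL (illustrative
# only; states, rewards, and the penalty are hypothetical placeholders).
n_states, n_actions, gamma, lam = 5, 2, 0.95, 0.5

# Logged dataset of (state, action, reward, next_state) transitions,
# standing in for historical decisions recorded by a deployed system.
rng = np.random.default_rng(0)
dataset = [(rng.integers(n_states), rng.integers(n_actions),
            rng.normal(), rng.integers(n_states)) for _ in range(1000)]

def fairness_penalty(s, a):
    # Placeholder: penalize the action presumed to harm the long-term
    # qualification of the group encoded in state s.
    return 1.0 if (s % 2 == 0 and a == 1) else 0.0

# Fitted Q-iteration on the fixed dataset: no environment interaction.
Q = np.zeros((n_states, n_actions))
for _ in range(100):
    targets = np.zeros_like(Q)
    counts = np.zeros_like(Q)
    for s, a, r, s2 in dataset:
        # Bellman target with the fairness penalty subtracted from reward.
        targets[s, a] += (r - lam * fairness_penalty(s, a)) + gamma * Q[s2].max()
        counts[s, a] += 1
    Q = np.divide(targets, np.maximum(counts, 1))  # average per (s, a) pair

policy = Q.argmax(axis=1)  # greedy policy learned purely from logged data
print("greedy action per state:", policy)
```

The key point is that the policy is learned purely from the logged transitions; how to define and estimate such a long-term fairness signal from offline data is precisely the open question this project addresses.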

Requirements & Activities

This project requires familiarity with the basic concepts of reinforcement learning and a general understanding of fairness-aware learning. You are expected to conduct a comprehensive literature review covering the work that has been done in both domains, fairness-aware learning and offline RL, and then devise an approach that both models the long-term effects of fairness via reinforcement learning (in a sequential manner) from offline data and learns policies that prevent or mitigate bias in future decisions.

References

  1. S. Kannan, A. Roth and J. Ziani, "Downstream effects of affirmative action," in Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019.
  2. L. T. Liu, S. Dean, E. Rolf, M. Simchowitz and M. Hardt, "Delayed impact of fair machine learning," in International Conference on Machine Learning, 2018.
  3. A. D'Amour, H. Srinivasan, J. Atwood, P. Baljekar, D. Sculley and Y. Halpern, "Fairness is not static: Deeper understanding of long term fairness via simulation studies," in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020.
  4. M. Wen, O. Bastani and U. Topcu, "Algorithms for fairness in sequential decision making," in International Conference on Artificial Intelligence and Statistics, 2021.
  5. X. Zhang, R. Tu, Y. Liu, M. Liu, H. Kjellström, K. Zhang and C. Zhang, "How do fair decisions fare in long-term qualification?," in Thirty-fourth Conference on Neural Information Processing Systems, 2020.
  6. R. Prudencio, M. Maximo and E. Colombini, "A survey on offline reinforcement learning: Taxonomy, review, and open problems," IEEE Transactions on Neural Networks and Learning Systems, 2023.
Details
Supervisor
Maryam Tavakol