Research Project: Fairness-aware AI
Description
FairML, or fairness-aware AI, has grown into a large interdisciplinary research area. In DAI, we approach the challenges of fairness and non-discrimination from several perspectives: the technical ML aspects, the intersection of FairML and XAI, and interdisciplinary aspects spanning philosophy, law, and computer science. Our subprojects include:
- Fairness and unwanted biases in recommender systems (see the research output of Masoud Mansoury, co-supervised with Bamshad Mobasher and Robin Burke, and a recent collaboration with Bol.com)
- Fairness in Social Network Analytics
- FairML in banking and insurance (in collaboration with Rabobank, DLL, ING, Floryn)
- Moral and legal justification of FairML
- VACE: Value alignment for counterfactual explanations in AI (co-led with Emily Sullivan)
- Fairness in AutoML
- Fairness in Reinforcement Learning scenarios
- Fairness as predictive modeling with independency constraints (pioneered in 2008 by Toon Calders and his then PhD candidate Faisal Kamiran)
We contribute to Fairlearn, an open-source, community-driven project that helps data scientists improve the fairness of machine learning models.
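As a concrete illustration of the independency-constraint view of fairness listed above, the sketch below uses Fairlearn's reductions API to train a classifier whose positive-prediction rate is roughly independent of a sensitive attribute (demographic parity). The synthetic data, feature construction, and the choice of logistic regression as the base estimator are our own assumptions for the example, not part of any specific subproject.

```python
# Minimal sketch: training a classifier under a demographic-parity
# (independency) constraint with Fairlearn's reductions API.
# The synthetic data and base estimator are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, size=n)             # hypothetical binary group attribute
X = rng.normal(size=(n, 3)) + sensitive[:, None]   # features correlated with the group
y = (X.sum(axis=1) + rng.normal(size=n) > 1).astype(int)

# Unconstrained baseline: its positive-prediction rate differs across groups.
baseline = LogisticRegression().fit(X, y)
print("baseline DP difference:",
      demographic_parity_difference(y, baseline.predict(X),
                                    sensitive_features=sensitive))

# Exponentiated-gradient reduction: wraps the base estimator and searches for
# a (randomized) classifier that satisfies the demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
print("mitigated DP difference:",
      demographic_parity_difference(y, mitigator.predict(X),
                                    sensitive_features=sensitive))
```

A demographic-parity difference close to zero means the positive-prediction rate is nearly independent of the sensitive attribute; the reduction typically trades some accuracy for this, which is exactly the tension the subprojects above study.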
We have co-organized a series of events, including the BIAS workshop at ECML PKDD, the Fair ADM Lorentz Workshop (Fairness in Algorithmic Decision Making: A Domain-Concrete Approach), and special issues of SIGKDD and DAMI on fairness and bias in AI.
Details
- Principal Investigator: Mykola Pechenizkiy
- Involved members:
  - George Fletcher
  - Akrati Saxena
  - Pratik Gajane
  - Hilde Weerts
  - Cassio de Campos
  - Maryam Tavakol