Fairness in Dynamic Settings
Supervisors
Naman Goel
Suitable for
Abstract
The goal of fair machine learning is to ensure that decisions made by machine learning systems do not discriminate on the basis of sensitive attributes such as race and gender. Most of the fair machine learning literature focuses on settings in which individuals cannot change the features that a model takes as input. In many situations, however, individuals can modify their features to obtain a positive decision, based either on a priori information or on feedback/explanations provided by the model or decision-maker. How should fairness be defined in these settings, so that certain individuals or groups do not face disproportionate obstacles in achieving better outcomes? In this project, the student will start by empirically evaluating existing approaches to fairness. In collaboration with the supervisor, the student will then define appropriate notion(s) of fairness for the above settings and develop techniques to satisfy those notions. The student will implement the technique and evaluate its effectiveness using both simulations and real-world datasets. Further, depending on interest and time, there will also be the opportunity to prove formal guarantees for the algorithm and/or to run a human subject experiment.

Prerequisites
Familiarity with machine learning, proficiency in Python, interest in ML fairness.

Students are encouraged to reach out to Naman Goel (naman.goel@cs.ox.ac.uk) to discuss more project ideas related to the above topics.
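To make the dynamic setting concrete, the following is a minimal illustrative sketch, not part of the project specification: individuals best-respond to a known threshold rule, and a group facing a higher per-unit cost of changing its feature ends up with a lower acceptance rate. All quantities here (the threshold, the group-specific costs, and the feature distribution) are hypothetical assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: one feature per individual, binary sensitive attribute.
n = 1000
group = rng.integers(0, 2, size=n)          # sensitive attribute (0 or 1)
x = rng.normal(loc=0.5 * group, scale=1.0)  # feature; group 1 starts slightly higher

threshold = 1.0                        # decision rule, known to individuals: accept if x >= threshold
cost = np.where(group == 0, 2.0, 1.0)  # assumed per-unit cost of improving the feature
benefit = 1.0                          # assumed utility of a positive decision

# Best response: move exactly to the threshold iff the gain outweighs the effort cost.
gap = np.maximum(threshold - x, 0.0)
moves = (gap > 0) & (gap * cost <= benefit)
x_new = np.where(moves, threshold, x)

accepted = x_new >= threshold
for g in (0, 1):
    rate = accepted[group == g].mean()
    print(f"group {g}: acceptance rate after strategic response = {rate:.2f}")
```

Under these assumptions, the group with the higher manipulation cost obtains fewer positive decisions even when both groups respond rationally to the same published rule, which is exactly the kind of disproportionate obstacle the project's fairness notions would need to capture.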