De-biasing Datasets: Investigating Bias Detection, Mitigation, and Accuracy/Bias Tradeoffs

Education
2021

Today’s datasets are far larger and more complex than in the past, requiring new techniques to draw reliable inferences. They also reflect existing human biases and social inequalities, which algorithms and machine learning models can compound, leading to unfair outcomes and discriminatory practices in high-stakes decision-making settings, such as assessing a person’s risk to society or approving a loan. This project will develop methods for evaluating fairness and correcting discovered biases using data on insurance rates, with the eventual goal of extending these methods to broader settings, thereby building a systematic way of identifying bias and appropriate fairness notions across a variety of contexts.
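As a toy illustration of the kind of fairness notion such methods might check, the sketch below computes the demographic parity difference, the gap in positive-outcome rates between two groups, on hypothetical loan-approval data. All names and data here are illustrative assumptions, not the project's actual method or dataset.

```python
# Minimal sketch of one common fairness check: demographic parity
# difference, i.e. the gap in positive-outcome rates between groups.
# The data below are made up for illustration only.

def demographic_parity_difference(outcomes, groups):
    """Absolute gap in positive-outcome rate between two groups.

    outcomes: iterable of 0/1 decisions (1 = favorable outcome)
    groups:   iterable of group labels, same length as outcomes
    """
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical loan decisions (1 = approved) for two groups.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A is approved at rate 0.75, group B at 0.25.
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A value of 0 would mean both groups receive the favorable outcome at the same rate; larger values flag a disparity worth investigating, though which fairness notion is appropriate depends on the context, which is exactly the question the project aims to systematize.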