In our Capstone project, we provided a pipeline for our clients to evaluate and reduce bias in financial classification models using the AI Fairness 360 package developed by IBM. Our client, 2nd Order Solutions (2OS), is a financial services firm that provides banks with machine learning algorithms for loan and credit card approval decisions. Under financial regulations, banks must prove that their financial products do not discriminate on protected variables such as an applicant’s race, sex, marital status, or age. 2OS therefore seeks to enhance its products by evaluating and ensuring fairness in its algorithms before delivering them to its clients.
To solve this problem, we evaluated open-source ML fairness packages on datasets with known biases and compared them for interpretability and efficacy. We identified disparate impact as a key metric: it measures how much less likely underrepresented groups are to receive a positive outcome. Using this metric, we tested how effectively each package’s methods mitigated bias. AIF360 stood out for the variety and efficacy of its mitigation methods. Additionally, to control for confounding variables in the data, we proposed matching as a preprocessing step before training. Using AIF360 and matching, 2OS now has a framework to improve its products by evaluating and reducing bias in its machine learning algorithms.
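To make the metric concrete, the following is a minimal sketch of the disparate impact computation described above: the ratio of favorable-outcome rates between the unprivileged and privileged groups. The function name and the toy data are illustrative, not part of the AIF360 API (which exposes this metric through its dataset metric classes).

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: P(Y=1 | unprivileged) / P(Y=1 | privileged).

    outcomes: iterable of 0/1 labels (1 = favorable, e.g. loan approved)
    groups:   iterable of group labels aligned with outcomes
    A value near 1.0 indicates parity; the common "80% rule" flags values below 0.8.
    """
    def favorable_rate(group):
        labels = [y for y, g in zip(outcomes, groups) if g == group]
        return sum(labels) / len(labels)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical toy data: group "B" is approved 40% of the time, group "A" 80%.
outcomes = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
groups   = ["B", "B", "B", "B", "B", "A", "A", "A", "A", "A"]
print(disparate_impact(outcomes, groups, unprivileged="B", privileged="A"))  # 0.5
```

A value of 0.5 here would indicate substantial bias against group "B", well below the 0.8 threshold often used as a rule of thumb.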