Make any classifier fairer with a simple post-processing step
What if you could train any classifier and fix its bias afterward? This paper shows you often can. The mathematically best fair rule can be achieved by applying group-specific score thresholds to your model's predictions, with a tiny bit of randomization at the boundary if needed.
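Here is a minimal sketch of that thresholding rule (my illustration, not the authors' code; the per-group thresholds and boundary probabilities are assumed to be given):

import numpy as np

def post_process(scores, groups, thresholds, boundary_probs, rng=None):
    """Turn raw scores into decisions via a group-specific, possibly randomized, threshold.

    scores         - model scores in [0, 1]
    groups         - group label per example
    thresholds     - dict: group -> threshold t_g
    boundary_probs - dict: group -> probability of predicting 1 when score == t_g
    """
    scores, groups = np.asarray(scores), np.asarray(groups)
    rng = np.random.default_rng() if rng is None else rng
    decisions = np.zeros(len(scores), dtype=int)
    for g in np.unique(groups):
        mask = groups == g
        t, p = thresholds[g], boundary_probs[g]
        above = scores[mask] > t            # deterministic above the threshold
        tied = np.isclose(scores[mask], t)  # randomize only at the boundary
        decisions[mask] = above | (tied & (rng.random(mask.sum()) < p))
    return decisions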
That insight powers a simple two-step pipeline: first learn a predictor, then post-process its scores to satisfy fairness goals such as equalized odds or statistical parity. Crucially, the post-processing parameters are learned by solving a single unconstrained optimization problem, making the step fast and model-agnostic: it works with deep nets, random forests, SVMs, and anything else that outputs scores.
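A toy end-to-end version of that pipeline could look like the sketch below. The grid search that picks per-group thresholds to roughly match selection rates (a crude statistical-parity heuristic) is my stand-in, not the paper's unconstrained optimization, and the synthetic data is purely illustrative:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Illustrative synthetic data: features X, labels y, binary group attribute a.
n = 2000
a = rng.integers(0, 2, n)
X = rng.normal(size=(n, 5)) + 0.5 * a[:, None]
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Step 1: learn any score-producing predictor.
model = GradientBoostingClassifier().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Step 2: post-process with one threshold per group, chosen here so each
# group's selection rate roughly matches the overall rate at the 0.5 cutoff.
target = (scores > 0.5).mean()
grid = np.linspace(0, 1, 101)
thresholds = {}
for g in (0, 1):
    rates = np.array([(scores[a == g] > t).mean() for t in grid])
    thresholds[g] = grid[np.argmin(np.abs(rates - target))]

group_thresholds = np.where(a == 0, thresholds[0], thresholds[1])
fair_decisions = (scores > group_thresholds).astype(int)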
The method is provably consistent (it approaches the best possible accuracy under the chosen fairness constraint) and comes with an impossibility result that quantifies the accuracy-fairness trade-offs across multiple demographic groups. The authors validate the approach on the Adult dataset.
Paper: http://arxiv.org/abs/2005.14621v1
Register: https://www.AiFeta.com
#AI #Fairness #MachineLearning #DataScience #Ethics #EqualizedOdds #StatisticalParity #Algorithms #Research