Abstract:
Machine learning algorithms are increasingly integrated into automated decision-making processes. Despite their wide practical success, these systems have exhibited biases against certain demographic groups. Such instances have motivated researchers to study fairness in machine learning. In this paper, we focus on fairness in clustering, a well-studied unsupervised learning task. We propose a new fairness measure, FM (Fairness Under Minorities), inspired by the Rényi correlation, which yields better fairness results whenever biases are present in minority groups. We derive relations between our proposed notion and other fairness measures. Our experimental study illustrates the effectiveness of FM and shows that, unlike other fairness measures, it better captures unfairness in minority groups. This paper also aims to demonstrate which fairness measures best fit particular datasets.