This article looks at the fairness of AI models in cases where demographic groups are unevenly distributed. How fair, for example, would a prediction model be for women if women made up only five percent of the sample used to train it? To test the models, the authors first ran them on an unbalanced distribution, then compared the results with those obtained by running them on a balanced data sample. They found that "none of the predictive models was consistently fair in all 3 courses." But more surprisingly, "attributing the unfairness to demographic group imbalance may cause the unfairness to persist even when the data becomes balanced." Assuming that the model is unfair to women may obscure the fact, say, that the model is also unfair to people without pockets. It's a small study, and before jumping to any conclusions we need to see the results replicated at a larger scale.
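To make the comparison concrete, here is a toy sketch (my own illustration, not the authors' models or data) of the kind of experiment described: train a trivial "classifier" on a 95/5 imbalanced sample, then on a 50/50 balanced sample, and measure a fairness gap (difference in accuracy between groups) each time. The group names, base rates, and majority-label "model" are all hypothetical stand-ins.

```python
# Illustrative sketch only: compare a model's fairness gap when trained on
# an imbalanced vs. a balanced sample. All numbers here are invented.
import random

random.seed(0)

def make_data(n, frac_group_a):
    """Toy data: each row is (group, label); base rates differ by group."""
    data = []
    for _ in range(n):
        group = 'A' if random.random() < frac_group_a else 'B'
        p_pos = 0.7 if group == 'A' else 0.4  # hypothetical base rates
        label = 1 if random.random() < p_pos else 0
        data.append((group, label))
    return data

def train_majority(data):
    """A stand-in 'model': always predict the overall majority label."""
    ones = sum(label for _, label in data)
    return 1 if ones >= len(data) / 2 else 0

def accuracy_gap(pred, data):
    """Fairness gap: absolute difference in accuracy between the groups."""
    def acc(group):
        labels = [label for g, label in data if g == group]
        return sum(1 for y in labels if y == pred) / len(labels)
    return abs(acc('A') - acc('B'))

test = make_data(4000, 0.5)  # balanced evaluation set
for frac in (0.95, 0.5):     # 95/5 imbalance, then 50/50 balance
    model = train_majority(make_data(4000, frac))
    print(f"train frac A={frac:.2f}  fairness gap={accuracy_gap(model, test):.2f}")
```

In this toy setup the gap barely moves when the training sample is balanced, because the unfairness comes from differing base rates, not from group imbalance, which mirrors the paper's point that blaming imbalance can let the unfairness persist after balancing.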