The key to representing minorities fairly is to recognize that there are minorities. But what if information about minorities isn't in the data? A lot of machine learning datasets have this problem, which is why systems trained on them perform poorly for minority groups. A speech-recognition system might perform poorly with minority accents, for example. The solution is to assume there are minorities and account for them using distributionally robust optimization (DRO). This post describes a paper that explains the process. Interestingly, it's a machine learning system based on John Rawls's philosophy of 'justice as fairness'.
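To make the idea concrete, here is a minimal sketch of one common DRO variant (CVaR-style), not necessarily the exact formulation in the paper: instead of minimizing the average loss, which lets a small hidden group be ignored, you minimize the average over the worst-performing fraction of examples, so poorly served groups get upweighted even though no group labels exist. The names `model`, `alpha`, and `train_step` are illustrative, and PyTorch is assumed.

```python
# Minimal sketch of CVaR-style distributionally robust optimization (DRO).
# Assumption: PyTorch, standard classification; `alpha` is the fraction of
# worst-off examples the optimizer focuses on (hypothetical default of 0.2).
import torch
import torch.nn.functional as F

def cvar_dro_loss(per_example_losses: torch.Tensor, alpha: float = 0.2) -> torch.Tensor:
    """Average the worst alpha-fraction of per-example losses.

    This upweights examples the model currently serves worst -- a stand-in
    for unobserved minority groups -- without needing group labels.
    """
    k = max(1, int(alpha * per_example_losses.numel()))
    worst, _ = torch.topk(per_example_losses, k)
    return worst.mean()

def train_step(model, optimizer, x, y, alpha=0.2):
    optimizer.zero_grad()
    logits = model(x)
    losses = F.cross_entropy(logits, y, reduction="none")  # one loss per example
    loss = cvar_dro_loss(losses, alpha)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice mirrors the Rawlsian intuition: judge the model by how it treats its worst-off cases, not by its average performance.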