"According to new research," says this article, "ChatGPT shows bias against resumes with credentials that imply a disability." For example, "it noted that a candidate with depression had "additional focus on DEI and personal challenges," which "detract from the core technical and research-oriented aspects of the role." This is a problem, obviously. But in assessing issues of this type, two additional questions need to be asked: first, how does the AI performance compare with human performance? After all, it is very likely the AI is drawing on actual human discrimination when it learns how to assess applications. And second, how much easier is it to correct the AI behaviour as compared to the human behaviour? This article doesn't really consider the comparison with humans. But it does show the AI can be corrected. How about the human counterparts?