As the subtitle says, "this is the latest example of how bias creeps into artificial intelligence." A sentiment analyzer looks at text-based content (like tweets or blog posts or emails) and determines whether the sender is happy or sad, angry, bitter, sorrowful, or whatever. It does this based on the association between the words in the message and words associated with those sentiments. And that's where the bias creeps in. It takes various sentences that should be neutral (like "I'm a Christian", "I'm a Sikh" or "I'm a Jew") and assigns positive or negative sentiments to them. "A chief obstacle to programming a non-biased language AI is that the data itself is rarely purified of human prejudice."
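The effect is easy to probe with an off-the-shelf lexicon-based analyzer. Here is a minimal sketch using NLTK's VADER analyzer (an assumption for illustration, not the specific system discussed in the article) that scores a few identity statements we would expect to be neutral; the exact numbers depend on the analyzer's lexicon, and a score far from zero for statements like these is the kind of bias being described.

```python
# A minimal sketch using NLTK's VADER lexicon-based analyzer (an assumption;
# not the system discussed in the article) to score sentences that ought to
# be sentiment-neutral.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # fetch the lexicon on first run

analyzer = SentimentIntensityAnalyzer()

sentences = ["I'm a Christian", "I'm a Sikh", "I'm a Jew"]

for sentence in sentences:
    scores = analyzer.polarity_scores(sentence)
    # 'compound' ranges from -1 (most negative) to +1 (most positive);
    # a truly neutral statement should score close to 0.
    print(f"{sentence!r}: compound={scores['compound']:+.3f}")
```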