I've been thinking recently about how we validate AI algorithms, especially in light of over-enthusiastic press reports and questionable outcomes. This article addresses the question directly. If we consider the four major functions of algorithms - prioritization, classification, association and filtering - we can see how misleading data inputs can produce undesired outcomes. This is what Microsoft discovered with Tay, the racist chatbot (the Chinese fared no better). The report looks at the causes and types of discrimination that can result and examines the possibility of algorithmic accountability, elucidating five principles for accountable algorithms: fairness, explainability, auditability, responsibility, and accuracy. Given the way these algorithms work, adherence to some of these principles may be a challenge.