We've seen how an AI can become a racist xenophobe after one day of training. We've also seen how propaganda can create the same effect across an entire nation. So it stands to reason that algorithms can embody the prejudice and hate present in the data sets used to train them, and can even magnify those effects in their decision-making. It is therefore reasonable to require that the decisions made by these AIs be vetted in some way. This is the purpose of European Union regulations related to profiling, non-discrimination and the right to an explanation in algorithmic decision-making.