How to prevent AI from taking over the world

Stephen Downes

I think this column demonstrates a need for a deeper understanding of both ethics and AI if we are to address problems in a complex domain (and I recognize that I'm referencing the Chair and Professor of Jurisprudence at the University of Oxford when I call for this deeper understanding). According to Ruth Chang, "the so-called 'value alignment problem' - how to get AI to respect and conform to human values - is arguably the most important, if vexing, problem faced by AI developers today." But I ask, what are those human values? There's no clear answer. And AI doesn't simply assume "that in a decision, there are only two possibilities", as Chang suggests. Deep learning systems weigh multiple complex possibilities, just as humans do. And that's the problem - the values an AI learns aren't what we tell it, they're what we teach it through our examples, the same examples that create the mountains of data it consumes. An AI learns our actual values, not some idealized abstraction of them, and that makes the AI's responses difficult to predict, conflicted... and dangerous. Just like humans.
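To make that point concrete, here is a minimal sketch in Python. The scenario, the data, and the feature names (harm_avoided, profit) are my own invention for illustration, not anything from Chang's column: a simple learner trained on example decisions picks up the value actually expressed in those decisions, not the value we claim to hold.

import math
import random

random.seed(0)

# Each training example is (harm_avoided, profit, chosen). The *stated*
# value is "always avoid harm", but the recorded decisions actually
# track profit - the bias lives in the examples, not in the stated rule.
examples = []
for _ in range(500):
    harm_avoided = random.random()
    profit = random.random()
    chosen = 1.0 if profit + random.gauss(0, 0.1) > 0.5 else 0.0
    examples.append((harm_avoided, profit, chosen))

# Hand-rolled logistic regression so the sketch needs no libraries.
w_harm, w_profit, bias = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(200):
    for harm_avoided, profit, chosen in examples:
        z = w_harm * harm_avoided + w_profit * profit + bias
        pred = 1.0 / (1.0 + math.exp(-z))
        err = chosen - pred
        w_harm += lr * err * harm_avoided
        w_profit += lr * err * profit
        bias += lr * err

# The learned weights expose the value actually in the data:
print(f"weight on harm avoided: {w_harm:+.2f}")   # near zero
print(f"weight on profit:       {w_profit:+.2f}")  # large and positive

# A harmful but profitable option is still endorsed, despite the
# stated value - the model learned what we did, not what we said.
z = w_harm * 0.0 + w_profit * 0.9 + bias
print(f"P(choose harmful, profitable act): {1.0 / (1.0 + math.exp(-z)):.2f}")

Scale those two toy weights up to a deep network trained on the web's mountains of examples and you get the same effect, only spread across billions of parameters and far harder to inspect.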



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: May 06, 2024 06:53 a.m.

Creative Commons License.
