The Use and Misuse of Counterfactuals in Ethical Machine Learning

Stephen Downes

Knowledge, Learning, Community

This paper (9 page PDF) dives into the weeds a bit, but it's interesting and potentially very useful. When we use AI in learning, one of the criteria people ask for is an explanation of decisions or recommendations. But for various reasons, which I mentioned here and will discuss in detail in the future, explanations will either be very difficult to get or not very useful. Instead, a lot of writers are recommending the use of counterfactuals; sometimes what people need, rather than an explanation per se, is a statement of what could have been done differently to produce a different outcome. But counterfactuals introduce their own issues: how do you know that a counterfactual is true? This article looks at the semantics of counterfactuals and offers a table of the decisions we need to make in order to use them. And this gives us an interesting way to talk about the ethics of using AI in learning.
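To make the idea concrete, here is a minimal sketch of a counterfactual explanation, using a hypothetical toy "admissions" decision rule invented for illustration (not from the paper): rather than explaining *why* an applicant was refused, the system reports the smallest change to the input that would have flipped the outcome.

```python
def admit(gpa: float, test: float) -> bool:
    """Toy decision rule standing in for a trained classifier.
    (Hypothetical weights and threshold, for illustration only.)"""
    return 0.6 * gpa + 0.4 * test >= 0.7

def counterfactual_gpa(gpa: float, test: float, step: float = 0.01):
    """Search for the smallest GPA increase that flips the decision.
    Returns the counterfactual GPA, or None if no feasible change exists."""
    candidate = gpa
    while not admit(candidate, test) and candidate <= 1.0:
        candidate += step
    return round(candidate, 2) if admit(candidate, test) else None

# A rejected applicant (both features scaled to [0, 1]):
gpa, test = 0.5, 0.5
print(admit(gpa, test))              # False: 0.6*0.5 + 0.4*0.5 = 0.5 < 0.7
print(counterfactual_gpa(gpa, test)) # "had your GPA been this, you'd be admitted"
```

The ethical questions the paper raises show up even in this sketch: the counterfactual is only "true" relative to the model and to which features we treat as mutable (here we allow changing GPA but hold the test score fixed), and different choices yield different, equally defensible counterfactuals.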



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Nov 04, 2024 3:17 p.m.

Creative Commons License.
