The Explanatory Force of Network Models

Stephen Downes

Knowledge, Learning, Community

Interesting paper that looks at network analyses of things (including brains) and asks about their explanatory power. A quick case in point (my own example): if we say a person knows P because she has network configuration C, does C explain why she knows P? Or does P explain why she has configuration C? This may seem trivial, but if we want to produce P in a person, the direction of explanation matters, as it (maybe) tells us what causes what. The author's thesis (given in the abstract and again in the third paragraph) is awkwardly worded. But the conclusion is pretty clear: network analyses do not redefine the norms of explanation, and they face the same methodological puzzles as other forms of explanation. Worth reading for the lucid discussion of graph theory as it relates to neural networks. Preprint available on Carl F. Craver's website, found via Philosophical Progress. Image: Wikipedia.
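To make the subject concrete, here is a toy sketch of my own (not from the paper): a "network configuration C" rendered as a small graph, along with a few of the standard graph-theoretic measures such analyses report. It assumes Python with the networkx library; the configuration and the measures chosen are illustrative only.

import networkx as nx

# A hypothetical configuration C: nodes stand for units, edges for connections.
C = nx.Graph()
C.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)])

# Descriptive graph-theoretic properties of C. Whether citing these
# explains a capacity P, or merely re-describes the system that has it,
# is the question at issue.
print("density:", nx.density(C))
print("clustering:", nx.average_clustering(C))
print("mean path length:", nx.average_shortest_path_length(C))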




Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Dec 25, 2024 08:43 a.m.

Creative Commons License.
