The Explanatory Force of Network Models
Carl F. Craver, Preprint, Jan 10, 2017
Interesting paper that looks at network analyses of things (including brains) and asks about their explanatory power. A quick case in point (my own example): if we say a person knows P because she has network configuration C, does C explain why she knows P? Or does P explain why she has configuration C? This may seem trivial, but if we want to produce P in a person, the direction of explanation matters, as it (maybe) tells us what causes what. The author's thesis (stated in the abstract and again in the third paragraph) is awkwardly worded. But the conclusion is pretty clear: network analyses do not redefine the norms of explanation, and they suffer from the same methodological puzzles as other forms of explanation. Worth reading for the lucid discussion of graph theory as it relates to neural networks. Preprint on Carl F. Craver's website; found via Philosophical Progress. Image: Wikipedia.