"In recent years," begins this article (28 page PDF), "the idea that neuroscientists give mechanistic explanations has become increasingly popular." These explanations contrast with the sort of taxonomies and systems analyses offered by educators and psychologists. "Contrary to mechanistic explanation, network analysis seems to abstract away from concrete parts and activities to instead focus on the topological properties of connectivity patterns, with the aim of explaining how they determine the behavioral dynamics of the systems exhibiting those patterns." Obviously both have a role to play in 'explaining' what exactly a neural network or AI is doing (I put 'explaining' in quotes because it's the definition of this term that's in question here). The argument considered in this paper considers "under which conditions topological explanations are indeed mechanistic explanations." This becomes the case in multilayer networks when a pattern in one layer becomes an activation in the next, a phenomenon I have classified under the heading 'recognition'. Anyhow, tons of detailed argument here, for purists who want to know exactly what we mean when we ask for, say, 'explainability' (and are not satisfied with counterfactual or dispositional accounts).