I have long compared human learning with machine learning, in the sense that both use data (aka experience) to develop networks of connections. This paper considers criticisms of that perspective, specifically, "that processing in deep neural networks is unlike human cognition for four reasons: they are i) data-hungry, ii) brittle, and iii) inscrutable black boxes that merely iv) reward-hack rather than learn real solutions to problems." These four arguments amount, in effect, to the charge that this view is some form of behaviourism (a view expressed explicitly by Audrey Watters, among others). The responses examine some recent challenges to deep learning that are also challenges in human learning: one-shot learning, perturbed images, explanatory rationality and interpretability, and game exploits. Even if you're not convinced by the argument, there is a wealth of conceptual insight here. Image: DARPA.