This is an interesting article from a couple of months ago describing work Geoffrey Hinton has been doing to reconcile the successful methods used by artificial neural networks with how the brain actually learns. Longtime readers will be familiar with the two mechanisms discussed: Hebbian networks ("neurons that fire together, wire together") and back-propagation. The problem with back-propagation, according to the article, is that "in a biological network, neurons see only the outputs of other neurons, not the synaptic weights or internal processes that shape that output." One major response to this problem comes in the form of recurrent neural networks ("that is, if neuron A activates neuron B, then neuron B in turn activates neuron A"). Along these lines, the article also discusses predictive networks and how different neural cells (pyramidal neurons in particular) work to recognize and manage error.
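To make the contrast concrete: the Hebbian rule mentioned above needs only locally available activity, while back-propagation requires knowledge of downstream weights. Here is a minimal sketch of a Hebbian weight update (my own illustration, not code from the article) showing that each connection strengthens using nothing but the activity of the two neurons it joins:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: each weight grows in proportion to the
    product of its pre- and post-synaptic activity
    ("neurons that fire together, wire together").
    Note the rule is purely local: no downstream weights are needed."""
    return w + lr * np.outer(post, pre)

# Two presynaptic neurons driving one postsynaptic neuron.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.0])   # only the first input fires
post = np.array([1.0])       # the output fires too
w = hebbian_update(w, pre, post)
# Only the co-active connection (first input -> output) strengthens;
# the silent input's weight is unchanged.
```

This locality is exactly what back-propagation lacks: computing a back-propagated error for a given neuron requires the synaptic weights of the neurons downstream of it, which, per the article, a biological neuron cannot see.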