What's Already Been Proven

Stephen Downes

Knowledge, Learning, Community

Apr 20, 2010

Originally posted on Half an Hour, April 20, 2010.

Responding to David Wiley, who cites John Anderson and Lael Schooler’s 1991 paper ‘Reflections of the Environment in Memory.’

This is the same John R. Anderson who wrote ‘Human Associative Memory’ with Gordon Bower, which describes the associative structures fundamental to my own work and also to associationist reasoning generally. http://act-r.psy.cmu.edu/people/ja/ja_bio.html (another Canadian, too).

In other words, this sort of work *is* the “empirical work done to shore up the nascent theoretical framework called connectivism.” I suppose more of it can be done; I cite it when I come across it. I can’t speak for George, but it’s not like I just made some stuff up and called it a theory.

Related to this, when you ask questions like, “what are the nodes that are connected in connectivism?” I refer you, not to hand-waving generalities, but to things like Boltzmann machines, which draw upon the thermodynamics inherent in the gradual build-up and release of electrical charges in neurons.
http://en.wikipedia.org/wiki/Boltzmann_machine There’s plenty of solid empirical research here, some solid mathematics, and even a spiritual dimension if you’re so inclined (my various references to ‘harmony through diversity’ are directly grounded in the Boltzmann machine). The average human is more complex than the average neuron, of course, and different mechanics apply. But within some bounds, the same sort of descriptions that apply to neurons also apply to humans – the phenomenon of a ‘propensity to respond after repeated stimuli’, for example, can be observed in both.
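
The core of a Boltzmann machine is small enough to sketch directly: an energy function over binary units and a stochastic update rule, such that repeated updates settle the network into a low-energy, ‘harmonious’ joint state. The following is only an illustration of that idea; the weights, biases and temperature are made-up values, not taken from any particular study.

    import math, random

    # Three binary units with symmetric connections; values are illustrative.
    weights = {(0, 1): 1.5, (0, 2): -1.0, (1, 2): 0.5}
    bias = [0.2, -0.1, 0.0]
    state = [random.choice([0, 1]) for _ in range(3)]
    T = 1.0  # temperature

    def net_input(i):
        # Summed influence on unit i from its bias and its neighbours.
        total = bias[i]
        for (a, b), w in weights.items():
            if a == i:
                total += w * state[b]
            elif b == i:
                total += w * state[a]
        return total

    def energy():
        # Lower energy corresponds to greater 'harmony' among connected units.
        e = -sum(b * s for b, s in zip(bias, state))
        e -= sum(w * state[a] * state[b] for (a, b), w in weights.items())
        return e

    for step in range(200):  # repeated stochastic updates (settling)
        i = random.randrange(3)
        p_on = 1.0 / (1.0 + math.exp(-net_input(i) / T))
        state[i] = 1 if random.random() < p_on else 0

    print(state, energy())

Nothing in the sketch depends on the units being neurons; the same update rule can be run over any entities whose states causally influence one another.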

That said, what seems to be important is the set of connections, rather more than the particular physical make-up of the nodes being connected. There is no evidence stipulating that only certain kinds or essences of nodes can be connected (Thomas Nagel notwithstanding). At the same time, there is a requirement that the entities be in some sense *physical*, because the nature of a connection (as I’ve often stated) is that a change of state in one results, via the connection, in a change of state in the other (that’s why graph theory, nodes and edges, constitutes only a virtualization, and not an instantiation, of network learning).

For while I realize that good old stimulus-response (S-R) learning looks like paired-associate learning, you can’t substitute words, like ‘Paris’ or ‘France’, for two nodes. A word, in and of itself, has no causal property; only a tokening of the word has causal properties. This is important because a word has no discrete token inside a human mind, and therefore, while we can *represent* an association between ‘Paris’ and ‘France’, we cannot *instantiate* it. *That* is why we prefer complex networks (and what accounts for the generally anti-cognitivist stance of my own work).

Now I am perfectly happy to talk about simple networks. One node, a connection (not merely an ‘edge’), and another. We can represent nodes as simply as possible – on/off (though in reality many more states are possible).

We can represent different networks of this sort. A connection as simply as possible (on/off) such that if node A is on and the connection is on, node B turns on (that’s an excitatory (or Hebbian) connection). A connection as simply as possible (on/off) such that if node A is on and the connection is on, node B turns off (that’s an inhibitory connection). Etc. What are the mechanisms for these? Could be electric switches, could be chemical reactions, could be dominos. If you look at Rumelhart and McClelland’s ‘Jets and Sharks’ experiment, you see we can create pooling and differentiation with these kinds of connections.
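
To make the on/off case concrete, here is a minimal sketch of a single connection between two nodes; the function name and the representation are simply illustrative choices, not anyone’s canonical formulation.

    def propagate(a_on, connection_on, kind, b_state=0):
        # New state of node B, given node A's state, one on/off connection,
        # and B's prior state. 'kind' is "excitatory" or "inhibitory".
        if not (a_on and connection_on):
            return b_state                 # nothing arrives; B keeps its state
        return 1 if kind == "excitatory" else 0

    print(propagate(1, 1, "excitatory"))   # A on, connection on -> B turns on (1)
    print(propagate(1, 1, "inhibitory"))   # A on, connection on -> B turns off (0)

The same behaviour could be realized with switches, chemical reactions, or dominos; the code only describes the pattern of state changes, not the mechanism.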

If the nodes aren’t simply on/off, or if the connection is represented with a probability function, and so on, you get different types of networks based on these different properties.
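
For instance, a node that takes graded values and sums weighted inputs through a squashing function gives the familiar connectionist unit. Again, this is only a sketch, and the weights and inputs below are made-up values.

    import math

    def unit(inputs, weights, bias=0.0):
        # Weighted sum of inputs pushed through a sigmoid: a graded,
        # probability-like response in (0, 1) rather than a bare on/off.
        net = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-net))

    print(unit([1, 0, 1], [0.8, -0.5, 0.3]))   # illustrative values only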

All of this is known, old, well-proven. It doesn’t need to be proven all over again, just for education. Quite the opposite. Education should, for once and at long last, learn from what has already been proven.


Stephen Downes, Casselman, Canada
stephen@downes.ca
