What's The Number for Tech Support?

Stephen Downes

Knowledge, Learning, Community

Oct 07, 2006

The article in Science Daily is pretty misleading when it says "Human Brain Region Functions Like Digital Computer". To most people, to function like a computer is to function based on a series of instructions encoded into some (binary) language. This in turn leads to the idea that the brain is like a computer program.

This, of course, is precisely not what O'Reilly is saying in the article (unfortunately not available online, though you can find numerous others of his articles). The recent online article that corresponds most closely to the Science article is probably Modeling Integration and Dissociation in Brain and Cognitive Development.

In this article he is pretty specific about how the brain represents conceptual structures. "Instead of viewing brain areas as being specialized for specific representational content (e.g., color, shape, location, etc), areas are specialized for specific computational functions by virtue of having different neural parameters... This 'functionalist' perspective has been instantiated in a number of neural network models of different brain areas, including posterior (perceptual) neocortex, hippocampus, and the prefrontal cortex/basal ganglia system... many aspects of these areas work in the same way (and on the same representational content), and in many respects the system can be considered to function as one big undifferentiated whole. For example, any given memory is encoded in synapses distributed throughout the entire system, and all areas participate in some way in representing most memories."

This is tricky, but can be broken down. Basically, what he is proposing is a functionalist architecture over distributed representation.

"Functionalism in the philosophy of mind is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part."

For example, when I say, "What makes something a learning object is how we use the learning object," I am asserting a functionalist approach to the definition of learning objects (people are so habituated to essentialist definitions that my definition does not even appear on lists of definitions of learning objects).

It's like asking, what makes a person a 'bus driver'? Is it the colour of his blood? The nature of his muscles? A particular mental state? No - according to functionalism, what makes him a 'bus driver' is the fact that he drives buses. He performs that *function*.
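In software terms, functionalism is essentially duck typing. Here is a minimal Python sketch (the BusDriver protocol and the classes are my own invented illustration, not anything from O'Reilly): what makes something a bus driver is that it plays the role, not what it is made of.

```python
from typing import Protocol

class BusDriver(Protocol):
    """Functionalist definition: a bus driver is anything that drives buses."""
    def drive_bus(self, route: str) -> None: ...

class Human:
    def __init__(self) -> None:
        self.blood_colour = "red"   # internal constitution - irrelevant to the role
    def drive_bus(self, route: str) -> None:
        print(f"Human driving route {route}")

class Robot:
    def drive_bus(self, route: str) -> None:
        print(f"Robot driving route {route}")

def dispatch(driver: BusDriver, route: str) -> None:
    # dispatch never inspects what the driver is made of;
    # performing the function is the whole definition
    driver.drive_bus(route)

dispatch(Human(), "42A")
dispatch(Robot(), "42A")
```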

"A distributed representation is one in which meaning is not captured by a single symbolic unit, but rather arises from the interaction of a set of units, normally in a network of some sort."

As noted in the same article, "The concept of distributed representation is a product of joint developments in the neurosciences and in connectionist work on recognition tasks (Churchland and Sejnowski 1992)."

To illustrate this concept, I have been asking people to think of the concept 'Paris'. If 'Paris' were represented by a simple symbol set, we would all mean the same thing when we say 'Paris'. But in fact, we each mean a collection of different things, and none of our collections is the same. Therefore, in our own minds, the concept 'Paris' is a loose association of a whole bunch of different things, and hence the concept 'Paris' exists in no particular place in our minds, but rather, is scattered throughout our minds.
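A toy sketch of that idea (the feature sets below are invented for illustration): each person's 'Paris' is a different pattern over many associated units, with enough overlap that communication still works, but with no single shared symbol anywhere.

```python
# Two people's distributed representations of 'Paris', as toy feature sets.
alice_paris = {"eiffel tower", "croissants", "the seine", "my 2003 trip", "louvre"}
bob_paris   = {"eiffel tower", "louvre", "french class", "the metro", "mona lisa"}

def overlap(a: set, b: set) -> float:
    """Jaccard similarity: how much two representations have in common."""
    return len(a & b) / len(a | b)

# The collections overlap but are not the same - 'Paris' exists in
# no single unit, only in the pattern of associations.
print(overlap(alice_paris, bob_paris))   # 0.25 with these toy sets
```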

Now what the article is saying is that human brains are like computers - but not like computers as described above, with symbols and programs and all that, but like computers when they are connected together in a network.

"The brain as a whole operates more like a social network than a digital computer... the computer-like features of the prefrontal cortex broaden the social networks, helping the brain become more flexible in processing novel and symbolic information." Understanding 'where the car is parked' is like understanding how one kind of function applies on the brain's distributed representation, while understanding 'the best place to park the car' is like how a different fuunction applies to the same distributed representation.

The analogy with the network of computers is a good one (and people who develop social network software are sometimes operating with these concepts of neural mechanisms specifically in mind). The actual social network itself - a set of distributed and interlinked entities, usually people, as represented by websites or pages - constitutes a type of distributed representation. A 'meme' - like, say, the Friday Five - is distributed across that network; it exists in no particular place.

Specific mental operations, therefore, can be thought of as functions applied to this social network. For example, if I wanted to find 'the most popular bloggers' I would need to apply a set of functions to that network. I would need to represent each entity as a 'linking' entity. I would need to cluster types of links (to eliminate self-referential links and spam). I would then need to apply my function. (My own view here, and possibly O'Reilly's, though I don't read it specifically in his article, is that to apply a function is to create additional neural layers that act as specialized filters - this would contrast with, say, Technorati, which polls each individual entity and then applies an algorithm to it.)
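A minimal sketch of that pipeline, assuming a toy link graph with invented blogger names (a real system would work over millions of entities with far richer link typing):

```python
from collections import Counter

# A hypothetical link graph: each blogger -> the bloggers they link to.
links = {
    "alice": {"bob", "carol", "alice"},   # note the self-referential link
    "bob":   {"carol"},
    "carol": {"alice", "bob"},
    "dave":  {"carol", "bob"},
}

# Step 1: each entity is represented as a 'linking' entity (done above).
# Step 2: filter link types - here, just dropping self-referential links
# (real spam filtering would need much more structure).
clean = {src: {dst for dst in dsts if dst != src} for src, dsts in links.items()}

# Step 3: apply the function - count inbound links as a crude popularity measure.
inbound = Counter(dst for dsts in clean.values() for dst in dsts)
print(inbound.most_common())   # carol and bob each get 3 inbound links here
```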

The last bit refers to how research is conducted in such environments. "Modeling the brain is not like a lot of science where you can go from one step to the next in a chain of reasoning, because you need to take into account so many levels of analysis... O'Reilly likens the process to weather modeling."

This is a very important point, because it shows that traditional research methodology, as employed widely in the field of e-learning, will not be successful. This becomes even more relevant with the recent emphasis on 'evidence-based' methodology, such as the Campbell Collaboration. This methodology, like others of its type, recommends double-blind tests measuring the impact of individual variables in controlled environments. The PISA samples are an example of this process in action.

The problem with this methodology is that if the brain (and hence learning) operates as described by O'Reilly (and there is ample evidence that it does) then concepts such as 'learning' are best understood as functions applied to a distributed representation, and hence, will operate in environments of numerous mutually dependent variables (the value of one variable impacts the value of a second, which impacts the value of a third, which in turn impacts the value of the first, and so on).

As I argue in papers like Public Policy, Research and Online Learning and Understanding PISA, the traditional methodology fails in such environments. Holding one variable constant, for example, impacts the variable you are trying to measure. This is because you are not merely screening out the impact of the second variable; you are also screening out the impact of the first variable on itself (as transferred through the second variable). This means you are incorrectly measuring the first variable.
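A toy simulation makes this concrete. The three-variable loop and its coefficients below are invented purely for illustration; the point is structural: when variables feed back on one another, clamping one variable changes the measured effect of an intervention, because it cuts the first variable's impact on itself.

```python
from typing import Optional

def steady_state(base: float, clamp_b: Optional[float] = None):
    """Iterate a toy feedback loop a -> b -> c -> a to (near) steady state."""
    a = b = c = 0.0
    for _ in range(200):
        a = base + 0.5 * c                       # c feeds back into a
        b = 0.8 * a if clamp_b is None else clamp_b
        c = 0.8 * b
    return a, b, c

# Effect on 'a' of raising 'base' from 1.0 to 1.5, with the loop intact:
a0, b0, _ = steady_state(1.0)
a1, _, _ = steady_state(1.5)
print("effect, loop intact:", round(a1 - a0, 3))   # ~0.735

# The same intervention, holding b constant at its baseline value:
a2, _, _ = steady_state(1.0, clamp_b=b0)
a3, _, _ = steady_state(1.5, clamp_b=b0)
print("effect, b clamped:  ", round(a3 - a2, 3))   # 0.5 - a different answer
```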

Environments with numerous mutually dependent variables are known collectively as chaotic systems. Virtually all networks are chaotic systems. Classic examples of chaotic systems are the weather system and the ecology. In both cases, it is not possible to determine the long-term impact of a single variable. In both cases, trivial differences in initial conditions can result in significant long-term differences (the butterfly effect).
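The logistic map is the standard minimal illustration of this sensitivity; a short sketch:

```python
def logistic(x: float) -> float:
    """One step of the logistic map at r = 4, a textbook chaotic system."""
    return 4.0 * x * (1.0 - x)

x, y = 0.300000, 0.300001   # trivially different initial conditions
for step in range(1, 31):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step}: gap = {abs(x - y):.6f}")
# The gap grows from one millionth to order one within a few dozen steps -
# the butterfly effect in four lines of dynamics.
```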

This is a significant difference between computation and neural networks. In computation (and computational methodology, including traditional causal science) we look for specific and predictable results. Make intervention X and get result Y. Neural network (and social network) theory does not offer this. Make intervention X today and get result Y. Make intervention X tomorrow (even on the same subject) and get result Z.
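The same chaotic map shows this too: apply an identical intervention X to the system in two different states (today versus tomorrow) and the downstream results differ. The nudge size and timing here are arbitrary choices of mine:

```python
def f(x: float) -> float:
    return 4.0 * x * (1.0 - x)   # the same chaotic update as above

def intervene(x0: float, wait: int, nudge: float = 0.05, run: int = 10) -> float:
    """Run the system, apply the same intervention after `wait` steps,
    then report the state `run` steps later."""
    x = x0
    for _ in range(wait):
        x = f(x)
    x = min(max(x + nudge, 0.0), 1.0)   # intervention X
    for _ in range(run):
        x = f(x)
    return x

print("intervene today:   ", round(intervene(0.3, wait=0), 4))   # result Y
print("intervene tomorrow:", round(intervene(0.3, wait=1), 4))   # result Z
```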

This does not mean that a 'science' of learning is impossible. Rather, it means that the science will be more like meteorology than like (classical) physics. It will be a science based on modelling and simulation, pattern recognition and interpretation, projection and uncertainty. One would think at first blush that this is nothing like computer science. But as the article takes pains to explain, it is like computer science - so long as we are studying networks of computers, like social networks.

Thanks to Guy Levert for the email question that prompted the title and the remainder of this post.



Stephen Downes, Casselman, Canada
stephen@downes.ca
