Knowledge and Recognition

Stephen Downes

Knowledge, Learning, Community

Jan 11, 2012

Responding to x28, 'Lower Levels of Connectivism'

First, it is probably more accurate to speak of 'domains' of connectivity rather than layers. The use of 'layers' suggests some sort of ordering (from, e.g., small to large) that isn't really a defining characteristic. Using 'domains' allows us to recognize that *any* network, appropriately constituted, can be a learning and knowing system.

Second, this usage, "knowledge is found in the connections between people with each other," was a bit loose. I should have said 'entities' instead of 'people', where 'entities' refers to *any* set of entities in a connective network, not just people in a social network. I used 'people' because it's more concrete, but it was a loose usage.

That said, there are two major issues raised in this post. First, how is the sense of 'knowledge' equivalent from one domain to another? And second, how does knowledge cross between domains?

The first raises a really interesting question: does knowledge have a phenomenal quality? And is the nature of this quality based in the physical properties of the network in which it is instantiated? I can easily imagine someone like Thomas Nagel ('What is it like to be a bat?') saying yes, that there is something that it 'feels like' for a neural network to 'know' something that (say) a computer network or a social network does not.

Related to this is the question of whether such a phenomenal 'feel' would be epiphenomenal or whether it would have causal efficacy. Does what it feels like to 'know' have any influence on our (other) knowledge states? Or is the 'feel' of knowing something merely incidental to knowing?

What I want to say is that there is something in common in the 'knowing' experienced by a neural network and the 'knowing' experienced by a social network, that this something is described by the configuration of connections between entities, so that we can say that 'knowing' for each of these systems is the same 'kind' of thing in important respects, without also having to say that they are the 'same' thing.

Different mechanisms connect people with each other and neurons with each other (and crows with each other in a crow network, etc.). People use artifacts - words, images, gestures, etc. - to communicate with each other, while neurons use electro-chemical signals to communicate with each other. Though the patterns of connectivity between the two systems may be the same, the physical constitution of that pattern is different. It's like a contrail in the sky and a ski trail in the snow - we can observe the sameness of the parallel lines, and make inferences about them (that they never meet, say), while at the same time observe that they have different causes, and that it 'feels' different to create a contrail than it does to create a ski trail.

The same is true of knowledge. We can make observations about the set of connections that constitutes 'knowing' (that it is a mesh, that it embodies a long tail, that a concept is distributed across nodes, etc.) independently of reference to the physical nature of that network. And yet, 'knowing' will 'feel' different to a bunch of neurons than to a bunch of people (indeed, we can hardly say we know how a society 'feels' at all, except by analogy with how a human feels, which may not be a very accurate metaphor).

The second comment concerns how knowledge is transferred between networks (to put the point *very* loosely). There are different senses of this point - how someone comes to know what society knows, how someone comes to know what someone else knows, how somebody comes to know what nobody knows.

In the first instance - and I think this is really key to the whole theory of connectivism - there is no sense in which knowledge is *transferred* between any of these entities.

This is most obvious in the latter case. Learning something nobody knows *cannot* be a case of knowledge transfer. The knowledge must therefore develop spontaneously as a result of input phenomena (i.e., experience) and the self-organizing nature of appropriately designed networks.

The organization that results from these conditions *is* the knowledge. The process of self-organizing *is* the process of learning. There are three major factors involved: the input phenomena, the learning mechanism, and the prior state of the network. There is a huge literature describing how such processes can occur.
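To make this concrete, here is a minimal sketch of such a process, assuming a simple Hebbian update rule (my choice for illustration; nothing in the argument commits to this particular mechanism). The three factors each appear explicitly: the input phenomena are the patterns presented, the learning mechanism is the update rule, and the prior state is the initial set of connection weights.

```python
import numpy as np

# Illustrative sketch only: self-organization in a tiny network.
# The three factors named above appear explicitly below.

rng = np.random.default_rng(0)

n_units = 8
weights = rng.normal(scale=0.1, size=(n_units, n_units))  # prior state of the network
np.fill_diagonal(weights, 0.0)                            # no self-connections

def hebbian_step(weights, pattern, rate=0.05):
    """Learning mechanism: units that fire together wire together."""
    return weights + rate * np.outer(pattern, pattern)

# Input phenomena: repeated experience of two activity patterns.
inputs = [rng.choice([-1.0, 1.0], size=n_units) for _ in range(2)]

for _ in range(100):                # repeated experience over time
    for pattern in inputs:
        weights = hebbian_step(weights, pattern)
        np.fill_diagonal(weights, 0.0)

# The organization that results - the weight structure - *is* the knowledge.
print(np.round(weights, 2))
```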

In the case of one person learning from another, the major difference is that the phenomena being experienced consist not just of objects and events in nature, but of the deliberate actions of another person. These actions are typically designed in such a way as to induce an appropriate form of self-organization (and there is a supposition that this encourages a certain amount of self-organization that one could not obtain by experience alone - the 'zone of proximal development').

What's important to recognize is that the learning is still taking place in the individual, that the other person is merely presenting a set of phenomena (typically a stream of artifacts) to be experienced, and that one's own learning mechanisms and prior state are crucial to any description of how that person learns.

One of the key elements I'd like to point to here is 'recognition'. This is a phenomenon whereby a partial pattern is presented as part of the phenomena, and where, through prior experience, the network behaves as though the full pattern were present. When we see half the letter 'E', for example, we read it as though the full letter 'E' were present.
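As an illustration (and only an illustration; the argument does not name a particular model), recognition-as-pattern-completion can be sketched with a tiny Hopfield-style associative network: store a full pattern in the connection weights, present half of it, and the network settles as though the whole pattern were present.

```python
import numpy as np

# Illustrative sketch: recognition as pattern completion in a
# Hopfield-style associative network (an assumption for illustration).

full_E = np.array([1, 1, 1,      # a toy 3x3 'glyph', flattened
                   1, -1, -1,
                   1, 1, 1], dtype=float)

# Prior experience: store the pattern in the connection weights.
W = np.outer(full_E, full_E)
np.fill_diagonal(W, 0.0)

# Present only 'half the letter': the bottom rows are unknown (zeroed).
partial = full_E.copy()
partial[3:] = 0.0

# Let the network settle: repeatedly update units from their weighted inputs.
state = partial
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, full_E))  # True: behaves as if the full pattern were present
```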

To 'know' that 'A is B' is to 'recognize' that 'A is B', that is, when presented with 'A', one reacts as though being presented with a 'B'. Recognition lies at the core of communication, as it allows (for example) a symbol 'tiger' to suggest a phenomenon (a tiger).

What is important to understand here is that the recognition is something the *recipient* brings to the table. It is not inherent in the presentation of the phenomenon, and may not even be intended by the presenter (indeed, as likely as not, the presenter had something different in mind).

This also tells us how a piece of knowledge (so-called; there probably aren't really 'pieces' of knowledge) travels from one network to another network. Observe, for example, a murmuration of blackbirds. We humans (the neural networks) observe a flowing dynamic shape in the sky, like a big blob of liquid. We perceive the other network as a whole, and perceive it *as* something. We *recognize* a pattern in the other network.

When a human observes the behaviour of a social network, the human (ie, the neural network) can recognize and respond to patterns in that social network. The patterns are not actually 'created by' or even 'intended' by the social network; they are what we would call 'emergent properties' of the network, supervenient on the network.

So: a person watches 14 other people use the word 'grue' in such and such a context; when the person sees artifacts corresponding to 'grue' he *recognizes* it as an instance of that context. That is to say, on presentation of the artifact representing 'grue', he assumes an active set of connections similar to what he would assume if presented with that particular context.
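A hedged sketch of this 'grue' scenario, treating recognition as a learned association between an artifact and the contexts in which it was observed (the context names and counts here are illustrative assumptions):

```python
from collections import Counter, defaultdict

# Illustrative sketch: learning 'grue' from observed usage, then
# recognizing the artifact as an instance of its usual context.

observations = [("grue", "sorting emeralds")] * 14 + [("grue", "mixing paint")]

# Learning: count how often each word co-occurs with each context.
associations = defaultdict(Counter)
for word, context in observations:
    associations[word][context] += 1

def recognize(word):
    """On presentation of the artifact, activate the most strongly
    associated context - react as though that context were present."""
    contexts = associations.get(word)
    return contexts.most_common(1)[0][0] if contexts else None

print(recognize("grue"))  # 'sorting emeralds'
```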

As a postscript, it's worth mentioning that there's no sense of 'collaboration' or 'shared goal' inherent in any of this. Indeed, I would argue that the use of such terminology makes assumptions that cannot really be justified.

When we say that 'society knows P', what do we mean? *Not* that a certain number of individuals in society know P. There is no a priori reason to assume that social knowledge is the same as individual knowledge, and indeed, it is arguable, and in some senses demonstrable, that what society knows is *different* from what an individual knows. Why? Because the prior state is different, because the learning mechanisms are different, and most importantly, the input phenomena are different.

A society does not, for example, perceive a forest in the same way a human does. A society cannot perceive a forest directly. A human perceives a forest by looking at it, smelling it, walking through it. A society has no such sensations.

A human does not, for example, perceive a neural activation in the same way a neuron does. A neuron receives a series of tiny electro-chemical signals. A human has no such sensations.

A human can only recognize a neural activation *as* something - a forest, say. A society can only recognize a perception *as* something - an economic unit, say, a tract, or something we don't even have a word for.

A human can experience neural activations only in the aggregate - only as a network - in which it may recognize various emergent properties. This set of network activations (this 'sensation') is associated with 'that' set of network activations (that 'knowledge'). The same with a society. It can never experience the forest through the perspective of only one individual - it can only experience the forest through the aggregate of individual perspectives.

The whole dialogue of 'collaboration' presumes that a set of humans can create a fictitious entity, and by each human obtaining the same knowledge (neural state, opinions and beliefs, etc.), can imbue this fictitious entity with that state. And by virtue of this action, the fictitious entity can then be assigned some semblance of agency analogous to (but magnified beyond) human agency.

Assuming that it makes sense to imagine such a creation (and there are many difficulties with it), such a construct does not have independent cognitive properties; it cannot 'learn' on its own, and it cannot 'know' more (or anything different) than any of its constituent human members.


