Non-Web Connectivism

Stephen Downes

Feb 20, 2007

Responding to this thread...

(One of the things I really dislike about Moodle is that I have to use the website to reply to a post - I get it in my email, I'd rather just reply in my email.) Anyhow...

It occurs to me on reading this that the assembly line can and should be considered a primitive form of connectivism. It embodies the knowledge required to build a complex piece of machinery, like a car. No individual member of the assembly line knows everything about the product. And it is based on a mechanism of communication, partially symbolic (through instructions and messages) and partially mechanical (as the cars move through the line).

The assembly line, of course, does not have some very important properties of connectivist networks, which means that it cannot adapt and learn. In particular, its constituent members are not autonomous, so they cannot choose to improve their component parts. Assembly line members must therefore rely on direction, increasing the risk that they will be given bad instructions (hence the repeated failures of Chrysler). Also, they are not open (though Japanese processes did increase the openness of suppliers a bit).

It is important to keep in mind, in general, that not just any network, and not just any distributed knowledge, qualifies as connectivist knowledge. The radio station example in particular troubles me. It is far too centralized and controlled. In a similar manner, your hard drive doesn't create an instance of connective knowledge. Yes, you store some information there. But your hard drive is not autonomous, it cannot opt to connect with other sources of knowledge, it cannot work without direction. It doesn't add value - and this is key in connectivist networks.

Response: Jeffrey Keefer

Stephen, when you said "But your hard drive is not autonomous, it cannot opt to connect with other sources of knowledge, it cannot work without direction. It doesn't add value - and this is key in connectivist networks," you seem to be speaking about people who have the freedom to act independently toward a goal, which is something that those on the assembly line in your earlier example are not necessarily free or encouraged to do. If they are directed and not free, it seems that they are more like independent pieces of knowledge or skills that, strategically placed together, make something else. If that can be considered connectivism, then what social human endeavor (from assembling food at a fast food restaurant to preparing a team-based class project to conducting a complex surgical procedure) would not be connectivistic?

Yeah, I was thinking that as I ended the post but didn't want to go back and rewrite the first paragraph.

Insofar as connectivism can be defined as a set of features of successful networks (as I would assert), it seems clear that things can be more or less connectivist; it's not an on-off proposition.

An assembly line, a fast-food restaurant -- these may be connectivist, but just barely. Hardly at all. Because not having the autonomy really weakens them; the people may as well be drones, like your hard drive. Not much to learn in a fast food restaurant.

One of the things to always keep in mind is that connectivism shows that there is a point to things like diversity, autonomy, and the other elements of democracy. These are values because networks that embody them are more reliable, more stable, can be trusted - more likely to lead, if you will, to truth.

Karyn Romeis writes:

What I really am struggling with is this: "The radio station example in particular troubles me. It is far too centralized and controlled." Please, please tell me that you did not just say "Let them eat cake".

I presume that the people who make those calls to the radio station do so because they have no means of connecting directly to the electronic resources themselves. Perhaps they do not even have access to electricity. In the light of this, they might be expected to remain ignorant of the resources available to them. However, they have made use of such technology as is available to them (the telephone) to plug into the network indirectly. They might not be very sophisticated nodes within the network, but they are there, surely? It might be clunky, but under the circumstances, it's what they have: connection to people who have connection to technology. Otherwise we're saying that only first world people with direct access to a network and/or the internet can aspire to connectivism. Surely there is space for a variety of networks?

What concerns me about the use of radio stations is the element of control. It is no doubt a simple fact that there are things listeners cannot ask about via the radio method. And because radio is subject to centralized control, it can be misused. What is described here is not a misuse of radio - it actually sounds like a very enlightened use of radio. But we have seen radio very badly misused, in Rwanda, for example.

You write, "Otherwise we're saying that only first world people with direct access to a network and/or the internet can aspire to connectivism. Surely there is space for a variety of networks?" I draw the connection between connectivism and democracy very deliberately, as in my mind the properties of the one are the properties of the other. So my response to the question is that connectivism is available to everyone, but in the way that democracy is available to everyone. And what that means is that, in practice, some people do not have access to connectivist networks. My observation of this fact is not an endorsement.

Yes, there is a space for a variety of networks. In fact, this discussion raises an interesting possibility. Thus far, the networks we have been talking about, such as the human neural network in the brain, or the electronic network that forms the internet, are physical networks. The structure of the network is embodied in the physical medium. But the radio network, as described above, may be depicted as a virtual network. The physical media - telephone calls and a radio station - are not inherently a network, but they are being used as a network.

Virtual networks allow us to emulate the functioning of, and hence get the benefit of, a network. But because the continued functioning of the network depends on some very non-network conditions (the benevolence of the radio station owner, for example) it should be understood that such structures can very rapidly become non-networks.

I would also like, in this context, to raise another consideration, related to the size of the network. In the radio station example described, at best only a few hundred people participate directly. This is, in the nature of things, a very small network. The size of the network does matter, as various properties - diversity, for example - increase as the network grows. As we can easily see, a network consisting of two people cannot embody as much knowledge as a network consisting of two thousand people, much less two million people.

In light of this, I would want to say that the radio station example, at best, is not the creation of a network, but rather, the creation of an extension of the network. If the people at the radio station could not look up the answers on Google, the effectiveness of the call-in service would be very different. So it seems clear here that physical networks can be extended using virtual networks.

This is somewhat like what George means when he says that he stores some of his knowledge in other people (though it is less clear to me that he intends it this way). His knowledge is stored in a physical network, his neural net, aka his brain. By accessing things like the internet, he is able to expand the capacity of his brain - the internet becomes a virtual extension of the physical neural network.

Note that this is not the same as saying that the social network, composed of interconnected people, is the same as the neural network. They are two very different networks. But because they have the same structure, a part of one may act as a virtual extension of the other.

This, actually, resembles what McLuhan has to say about communications media. That these media are extensions of our capacities, extensions of our voices and extensions of our senses. We use a telescope to see what we could not see, we use a radio to hear what we could not hear. Thought of collectively, we can use these media to extend our thought processes themselves. By functioning as though it were a brain, part of the wider world, virtually, becomes part of our brain.

Responding to Glen Gatin, who wrote a longish post:

Jurgen Habermas talks about communicative action in the public sphere as an essential component of democracy. I see the process that we are using (and discussing) as a form of communicative action, and discussion groups such as this are exemplars of the activity that Habermas championed. I hope someone more versed in sociological theory can clarify, because it seems that some of the conditions that got Jurgen thinking are coming around again. (excellent Habermas interview video on YouTube)

The other point picks up on Stephen's comment about George's comment regarding storing knowledge or data in other people. Societies have always done that, from the guys that memorize entire holy texts to elders/hunters/warriors in various societies as repositories of specialized wisdom.

Society relies on implicit skills and knowledge, the kind that can't be written down. Julian Orr's fabulous thesis "Talking About Machines: An Ethnography of a Modern Job" describes the types of knowledge that can't be documented, must be stored in other people. He points out that you can read the company manual, but knowledge doesn't come until coffee time (or the bar after work) when one of the old timers tells you what it really means. Narrative processes are key. Developing the appropriate, context-based skill sets for listening to the stories, to extract the wheat from the chaff, is a critical operation in informal learning.

Storing knowledge is what academia was partly about: storing the wisdom of western civilization in the minds of society's intellectuals and paying considerable amounts of public monies to have them process and extend our collective knowledge.

All through, there are examples of the mechanisms necessary to access and participate in collective wisdom. You have to know the code, speak the language, use the proper forms of address, make the proper sacrifices, say the proper prayers, use APA format, enter the proper username and password. The internet expands the possibilities of this function as humans evolve toward a collective consciousness a la Teilhard de Chardin's noosphere. Welcome to Gaia.

First, a lot of people have talked about the importance of discourse in democracy. We can think of Tocqueville, for example, discussing democracy in America. The protections of freedom of speech and freedom of assembly emphasize its importance.

And so, Habermas and I agree in the sense that we both support the sorts of conditions that would enable an enlightened discourse: openness and the ability to say whatever you want, for example. But from there we part company.

For Habermas, the discourse is what produces the knowledge, the process of arguing back and forth. Knowledge-production (and Habermas intended this process to produce moral universals) is therefore a product of our use of language. It is intentional. We build or construct (or, at least, find) these truths.

I don't believe anything like this (maybe George does, in which case we could argue over whether it constitutes a part of connectivism ;) ). It is the mere process of communication, whether codified intentionally in a language of discourse or not, that creates knowledge. And the knowledge isn't somehow codified in the discourse, rather, it is emergent, it is, if you will, above the discourse.

Also, for Habermas, there must be some commonality of purpose, some sense of sharing or group identity. There are specific 'discourse ethics'. We need to free ourselves from our particular points of view. We need to evaluate propositions from a common perspective. All this to arrive at some sort of shared understanding.

Again, all this forms no part of what I think of as connectivism. What makes the network work is diversity. We need to keep our individual prejudices and interests. We should certainly not subsume ourselves to the interests of the group. If there are rules of arguing, they are arrived at only by mutual consent, and are in any case arbitrary and capricious, as likely as not to be jettisoned at any time. And if there is an emergent 'moral truth' that arises out of these interactions, it is in no way embodied in these interactions, and is indeed seen from a different perspective by each of the participants.

Now, also: "The other point picks up on Stephen's comment about George's comment regarding storing knowledge or data in other people. Societies have always done that, from the guys that memorize entire holy texts to elders/hunters/warriors in various societies as repositories of specialized wisdom."

This sort of discourse suggests that there is an (autonomous?) entity, 'society', that uses something (distinct from itself?), an elder, say, to store part of its memory. As though this elder is in some sense what I characterized as a virtual extension of a society.

But of course, the elder in question is a physical part of the society. The physical constituents of society just are people ("Society green.... It's made of people!!") in the same way that the physical constituents of a brain network are individual neurons. So an elder who memorizes texts is not an extension of society, he or she is a part of society. He or she isn't 'used' by society to think, he or she is 'society thinking'. (It's like the difference between saying "I use my neurons to think" and "my neurons think".)

Again, "Society relies on implicit skills and knowledge, the kind that can't be written down. Julian Orr's fabulous thesis "Talking About Machines, An EthnograpGranovetterhy of a Modern Job" describes the types of knowledge that can't be documented, must be stored in other people."

This seems to imply that there is some entity, 'society', that is distinct from the people who make up that entity. But there is not. We are society. Society doesn't 'store knowledge in people', it stores knowledge in itself (and where else would it store knowledge?).

That's why this is just wrong: "the mechanisms necessary to access and participate in collective wisdom. You have to know the code, speak the language, use the proper forms of address, make the proper sacrifices, say the proper prayers, use APA format, enter the proper username and password. The internet expands the possibilities of this function as humans evolve toward a collective consciousness a la Teilhard de Chardin's noosphere. Welcome to Gaia."

There are no mechanisms 'necessary' in order to access and participate in the collective wisdom. You connect how you connect. Some people (such as myself) access via writing posts. Other people (such as George) access via writing books. Other people (such as Clifford Olson) access via mass murder. Now, George and I (and the rest of us) don't like what Clifford Olson did. But the very fact that we can refer to him proves that you can break every standard of civilized society and still be a part of the communicative network. Because networks are open.

A network isn't like some kind of club. No girls allowed. There's no code, language, proper form of address, format, username or password. These are things that characterize groups. The pervasive use of these things actually breaks the network. How, for example, can we think outside the domains of groupthink if we're restricted by vocabulary or format?

The network (or, as I would say, a well-functioning network) is exactly the rejection of codes and language, proper forms of address, formats, usernames and passwords. I have a tenuous connection (as Granovetter would say, 'weak ties') with other members of the network, formed on the flimsiest of pretexts, which may be based on some voluntary protocols. That's it.

From the perspective of the network, at least, nothing more is wanted or desired (from our perspective as humans, there is an emotional need for strong ties and a sense of belonging as well, but this need is distinct and not a part of the knowledge-generating process).

To the extent that there is or will be a collective consciousness (and we may well be billions of entities short of a brain), there is no reason to suspect that it will resemble human consciousness, and no reason to believe that such a collective consciousness will have any say (or interest) in the functioning of its entities. Do you stay awake at night wondering about the moral turpitude of each of your ten billion neurons? Do you even care (beyond massive numbers) whether they live or die?

Insofar as a morality can be derived from the functioning of the network, it is not that the network as a whole will deliver unto us some universal moral code. We're still stuck each fending for ourselves; no such code will be forthcoming.

At best, what the functioning of the network tells us about morality is that it defines the set of characteristics that help or hinder its functioning as a network. But you're still free to opt out; there's no moral imperative that forces you to help Gaia (there's no meaning of life otherwise, though, so you may as well - just go into it with your eyes open; this is a choice, not a condition). Be, it might be said, the best neuron you can be, even though the brain won't tell you how and doesn't care whether or not you are.

This is what characterizes the real divide between myself and many (if not most) of these other theorists. They all seem to want to place the burden of learning, of meaning, of morality, of whatever, onto society. As though society cares. As though society has an interest. As though society could express itself. The 'general will', as Rousseau characterized it - as though there could be some sort of human representation or instantiation of that will. We don't even know what society thinks (if anything) about what it is (again - ask yourself - how much does a single neuron know about Descartes?). Our very best guesses are just that -- and they are ineliminably representations, in human terms, of very non-human phenomena.

Recall Nietzsche. The first thing the superman would do would be to eschew the so-called morality of society. Because he, after all, would have a much better view of what is essentially unknowable. The ease with which we can switch from saying society requires one thing to saying society requires a different thing demonstrates the extent to which our interpretations of what society has to say depend much more on what we are looking for than on what is actually there.



Stephen Downes, Casselman, Canada
stephen@downes.ca
