Are the Basics of Instructional Design Changing?

Stephen Downes

Knowledge, Learning, Community

Jul 01, 2005

Joseph Beckmann wrote: "Philosophy is a much larger, much deeper and much more complex activity than 'constructivism' could ever encompass. It involves a worldview that is so much more a challenge than neurology's current state that Paul Allen's billion dollar investment in pure research on brain activity suggests we hold off on any of these labels for, oh, a century or so."

This comment is well taken, in my opinion, and a few words in this regard would be appropriate at this juncture. Philosophy - and in particular the philosophy of mind - has had a great deal to say about the issues currently under debate here.

Let me begin, for example, with behaviourism. In philosophy, there is typically a division of behaviourism into two separate schools of thought:

- methodological behaviourism - this approach allows that there are mental events, such as beliefs, but holds that, since they are inaccessible to observers, we must treat them as though they were physical (and hence observable) events

- philosophical behaviourism - there are quite literally no mental events per se, and when we talk about the mental we are in fact talking about the observable and physical

Probably the most important work in this latter school was Gilbert Ryle's 1949 'The Concept of Mind'. He argues that when we say there are things like 'beliefs' and even the 'self', we are committing what he calls a 'category error'.

For example, suppose someone was sent to a campus to find 'the university'. They get to the campus and report back: 'well, I found a science building and an education building, and a whole bunch of people, but I never found "the university"'. The presumption that there is some separate observable phenomenon, over and above the physical entities that comprise the university, constitutes a 'category error'.

The assertion that there are no mental events is in a certain sense implausible, if only by introspection. There is something that it is 'like' to have a belief, and this something seems clearly to be distinct from the mere assemblage of physical constituents. John Searle in 'Minds, Brains, and Programs' and Thomas Nagel in 'What Is It Like to Be a Bat?' offer the most compelling versions of this argument.

But if there are mental states, it may still be that our descriptions of them nonetheless commit the same sort of category error described by Ryle. This view crystallized under the heading of 'eliminative materialism' and, while first advanced by people like Paul Feyerabend and Richard Rorty, achieves its definitive statement in Paul Churchland's 'Matter and Consciousness'.

The upshot of Churchland's argument is that, strictly speaking, there is no such thing as a 'belief' or even a 'thought'. These concepts are relics of an age when we thought the mental came in neat little atomistic packages, just like the physical. They are an unfounded application of concepts like 'objects' and 'causation' to phenomena that defy such explanation; they are, in other words, relics of 'folk psychology'. Saying 'someone has a belief' is like saying that 'the Sun is rising' - it is literally untrue, and depends on a mistaken worldview.

Folk psychology has its staunch defenders and, if I may say, is well represented in contemporary educational theory, including the currently dominant paradigms of distance learning. To get at that, it is worth looking at the thought of some of its defenders, most notably Jerry A. Fodor and Zenon Pylyshyn.

Their argument is, in a nutshell, that the claims of folk psychology are literally true: that there is, for example, an entity in the mind corresponding to the belief that 'Paris is the capital of France', and that this belief is, in fact, what might loosely be called 'brain writing' - or, more precisely, that there is a one-to-one correspondence between a person's brain states and the sentence itself. Moreover, argues Fodor, the basic material for these brain sentences - not only the syntactic structures, as Chomsky asserts, but also the vocabulary itself - was present in the brain from birth.

This is not as absurd as it sounds, and there is a long and well-argued defense in works such as 'The Language of Thought', 'Representations' and 'Psychosemantics'. Intuitively, though, you can think of it this way: sculptors sometimes say 'the sculpture was already in the rock; I just found it'. And, quite literally, it makes no sense to say that the sculpture was not in the rock - where else would it be? The idea of 'shaping the mind' is the same sort of thing; it is a revealing of the potential that is latent in the mind, the pre-existing capacity to learn not only language but even sets of concepts and universal truths.

After all, as Chomsky argues - how could we come to know that some laws, such as the laws of mathematics, are universally true? Empirical evidence could never provide for us grounds to even conceive of such universality, much less come to know it. Ergo, these universals must in some way already be present in our mind. And the same point may also be said of basic linguistic constructions; nature does not provide us with, say, a distinction between noun and verb. Our mind does. Echoes of Kant's 'necessary preconditions for the possibility of perception'.

From my point of view, though, research in mental phenomena has been running in the other direction, away from folk psychology. From early work, such as David Marr's 'Vision' and Stephen Kosslyn's 'Image and Mind' through Patricia Smith Churchland's 'Neurophilosophy' to LeDoux's contemporary 'The Synaptic Self', it is becoming increasingly evident that what we call 'mental contents' do not resemble sentences, much less physical objects, at all.

The philosophical literature of the last 20 years or so is rich with this debate, though much of it is (in my mind) frivolous. As Fodor and his ilk defend a shrinking terrain (a 'mind of the gaps', if you will), others have tried to find a middle ground, such as Daniel Dennett in 'The Intentional Stance'.

Where the Fodor approach intersects with learning theory is, in my view, via communication theory. Though communication theory will admit that there is nonlinguistic communication (audio music and video images, for example) it is nonetheless based on the idea that there is something like mental content that is in some way transmitted from a sender to a receiver.

In learning theory, the concept of transactional distance, in my opinion, is based on an analysis of communication. What that means is that there exists a space (construed either physically or metaphorically) between two entities, spanned by a channel of communication. In one entity there exists a state, a mental state, which corresponds to a semantic state (in other words, a sentence), and in the process of communication, (aspects of) that state are transmitted from the first entity to the second.

This transmission is known as a signal, and as writers like Wilbur Schramm observe, the state transfer is made possible because it constitutes an experience (a mental state) shared between sender and receiver. This signal, in physical form (such as, say, a book) may constitute an artefact; alternatively, it may be viewed as a medium.

The physical analysis of learning, on this account, becomes possible because the physical state - the actual communicative entity - matches the mental state. Thus, the relative states in the sender and the receiver can be (putatively) observed and measured.

For example, this approach allows Fred Dretske, in 'Knowledge and the Flow of Information', to explain communication from the perspective of information theory. The transfer of information, suggests Dretske, occurs when, as the result of a signal from an external entity, one's assessment of the total possible states of affairs in the world is reduced.

For example, suppose we know there is a dress in the closet. Then it is possible that it is a red dress, or a blue dress, or a green dress. Thus there are three possible states of affairs in the world. Then someone says, "It is not a red dress." Now there are two possible states of affairs in the world: it could be a blue dress or a green dress. Thus, a bit of information has been transmitted, and (presumably) this bit of information can be found in the communications medium, as well as measured for its presence in the receiver through semantic behaviour (such as, say, answering questions on a test).
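The arithmetic behind this can be made explicit. Here is a minimal sketch in Python, on the added simplifying assumption (mine, not one Dretske relies on) that the possible states of affairs are equally likely; on that model the dress example transmits log2(3/2), or about 0.58, bits, and a full bit would require cutting the possibilities in half:

    import math

    def information_gained(states_before, states_after):
        # Bits transmitted when a signal narrows a space of equally
        # likely states of affairs (a toy rendering, not Dretske's
        # own formalism).
        return math.log2(states_before / states_after)

    # Three possible dresses (red, blue, green); "It is not a red
    # dress" leaves two possibilities.
    print(information_gained(3, 2))   # about 0.58 bits
    print(information_gained(2, 1))   # a further full bit: "It is blue"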

Moore's contribution to educational theory, placed firmly within this framework, is that the effectiveness of communication is improved through interaction. Instead of viewing communication as a one-time event, in which information is sent from a sender and received by a receiver, the transfer of information is enabled through a series of communications, such that the receiver sends messages back to the sender, or to third parties.

This is similar to the 'checksum' mechanism in computer communications, where the receiving computer sends back a string of bits to the sender in order to confirm that the message has been received correctly. Minimally, through this communication, a process of verification is enabled; one can easily infer more complex communications in which knowledge is actually generated, or constructed, by the receiver based on prompts and cues from the sender.
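For the curious, here is a minimal sketch of that verification loop, with a toy additive checksum standing in for the stronger functions (CRCs, cryptographic hashes) used by real protocols:

    def checksum(message: bytes) -> int:
        # Toy additive checksum: the sum of the bytes, modulo 256.
        return sum(message) % 256

    sent = b"Paris is the capital of France"
    received = sent                  # assume the channel was intact
    echoed = checksum(received)      # the receiver's reply to the sender

    # The sender compares the echoed value with its own computation;
    # a mismatch would trigger retransmission.
    assert echoed == checksum(sent)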

Again, though, notice the pattern here. What is happening is that information theorists, such as Dretske, along with educational theorists, such as Moore, are transferring the properties of a physical medium - in this case, the communication of content via electronic or other signals - to the realm of the mental. Transactional distance just is an application of a physical concept to a mental concept. And if you buy into this, you are bound to buy into the rest of it, and most especially the claim that there is something we'll call 'mental content': an isomorphism between the physical states of the brain and the semantical content transmitted to and received by students, who either in some way absorb or construct a mental state that is the same as the teacher's - a 'shared experience'.

This is all very personal for me. In my own work, my very first published paper, 'Why Equi Fails', was a response to Fodor, keying in on a particular claim made in 'The Language of Thought' regarding the equi noun-phrase deletion rule. My assertion was that the rule could not be asserted universally, as Fodor claimed, because substitution for noun-phrases is context-sensitive. For example, the use of the noun-phrase 'I' will be subject to a different substitution depending on who utters the phrase, and the context in which it is uttered.

Constructivism, we have read recently, is all about the context-sensitivity of learning. But if learning is indeed context-sensitive (as I would aver) then the 'language of thought' hypothesis fails, and the rest of folk psychology along with it. For the presumption of these theories is that, when you believe that 'Paris is the capital of France' and when I believe that 'Paris is the capital of France', that we believe the same thing, and that, importantly, we share the same mental state, and hence can be reasonably relied upon to demonstrate the same semantic information when prompted.

And the 'language of thought' hypothesis could not possibly succeed, nor folk psychology either. Because it turns out that not only language but the whole range of phenomena associated with knowledge and learning are context-sensitive. Or so the philosophers say.

- Ludwig Wittgenstein, in 'Philosophical Investigations' and elsewhere, argues that meaning is context-sensitive, that what we mean by a word depends on a community of speakers; there is no such thing as a 'private language', and hence, the meaning of a word cannot stand alone, fully self-contained, in the mind.

- Willard Quine, in 'Two Dogmas of Empiricism' and in 'Word and Object', shows that observation itself is context-sensitive, that there is no knowable one-to-one matching between sense-phenomena and the words used to describe them; in 'On the Indeterminacy of Translation' he illustrates this with the famous 'gavagai' example: when a native speaker uses the word 'gavagai' there is no empirical way to know whether he means 'rabbit' or 'the physically incarnate manifestation of my ancestor'.

- Norwood Russell Hanson, in 'Patterns of Discovery', argues, in my view successfully, that causal explanations are context-sensitive. 'What was the cause of the accident?' It depends on who you ask - the police officer will point to the speed, the urban planner will point to the road design, the driver will point to the visibility.

- George Lakoff, in 'Women, Fire and Dangerous Things', shows that categories are context-sensitive (contra Saul Kripke); that what makes two things 'the same' varies from culture to culture, and indeed (as evidenced from some of his more recent political writings) from 'frame' to 'frame'.

- Bas C. van Fraassen shows that explanations are context-sensitive. 'Why are the roses growing here?' may be answered in a number of ways, depending on what alternative explanations are anticipated. 'Because someone planted them.' 'Because they were well fertilized.' 'Because the chlorophyll in the leaves converts the energy of the Sun into glucose.' All are acceptable answers; which one is correct depends on the presuppositions inherent in the question.

- David K. Lewis and Robert C. Stalnaker argue that counterfactuals and modalities are context-sensitive (though Lewis, if asked, would probably deny it). The truth of a sentence like 'brakeless trains are dangerous' depends, not on observation, but rather on the construction of a 'possible world' that is relevantly similar (Stalnaker uses the word 'salience') to our own; but what counts as 'relevant' depends on the context in which the hypothetical is being considered.

As I said, this is all very personal to me. My PhD dissertation, 'The Network Phenomenon: Empiricism and the New Connectionism' - which I wrote in three days (and retyped later, adding references) on the top of the hill at the Edmonton Folk Festival in 1991, back in the days before the tent city sprang up there - reached the only possible conclusion: there are no such things as 'mental contents', no 'sentences in the brain', no 'beliefs' that can be understood as such, no 'memories' that can be pulled out and studied on their own as purely semantical entities.

My work ran smack into a committee of cognitivists, followers of Chomsky, Fodor and Dretske, who insisted that I was wrong and instead of what I proposed, demanded an essay on 'mental contents'. I wrote a half-hearted proposal, returned to my original work (a logic of similarity, following Amos Tversky, with the addition of salience, and a bitter denunciation of 'Psychosemantics' called 'Trash Fodor' - both of which I kept to myself) and after a year or so decided it would be better if I and my committee conducted our dispute on grounds other than my future. So much for my PhD.

Because, the question is, what does happen when we (think we) 'know', 'believe' and even 'think'? It was always evident to me - despite a robust inner life of 'self-talk' - that I am not manipulating sentences in the brain. The very misunderstandings through which people can utterly misinterpret what I say in writing - or even in diagrams - convince me of that.

To keep a long story short, my own work since then has followed yet another group of philosophers and theorists. This is what 'The Network Phenomenon' was about. I followed people like Anderson and Bower in 'Human Associative Memory', Marvin Minsky in 'The Society of Mind', Minsky and Seymour Papert in 'Perceptrons' and McClelland and Rumelhart in 'Parallel Distributed Processing', embracing aspects of a theory that has come to be called 'connectionism'.

The theory, stated simply, is that human thought amounts to patterns of interactions in neural networks. More precisely, patterns of input phenomena - such as sensory perceptions - cause or create patterns of connections between neurons in the brain. These connections are associative - that is, connections between two neurons form when the two neurons are active at the same time, and weaken when they are inactive or active at different times. See, for example, Donald Hebb's 'The Organization of Behavior', which outlines what has come to be called 'Hebbian associationism'.
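Stated as code, the rule is almost trivial. The sketch below is schematic only - the learning rate, the decay term and the activation sequence are illustrative choices, not Hebb's own formalism:

    def hebbian_update(weight, pre_active, post_active, rate=0.1, decay=0.05):
        # One step of a toy Hebbian rule: neurons that fire together
        # wire together; otherwise the connection weakens.
        if pre_active and post_active:
            return weight + rate
        return weight - decay

    w = 0.0
    for pre, post in [(1, 1), (1, 1), (1, 0), (1, 1)]:
        w = hebbian_update(w, pre, post)
    print(w)   # 0.25: strengthened by co-activation, dented once by decay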

There are five major implications of this approach for educational theorists:

- first, knowledge is subsymbolic. Mere possession of the words does not mean that there is knowledge; the possession of knowledge does not necessarily result in the possession of the words (and for much more on this, see Michael Polanyi's discussion of 'tacit knowledge' in 'Personal Knowledge').

- second, knowledge is distributed. There is no specific 'mental entity' that corresponds to the belief that 'Paris is the capital of France'. What we call that 'knowledge' is (an indistinguishable) pattern of connections between neurons. See, for example, Geoffrey Hinton, 'Learning Distributed Representations of Concepts' (a toy sketch of this idea follows this list).

- third, knowledge is interconnected. The same neuron that is a part of 'Paris is the capital of France' might also be a part of 'My dog is named Fred'. It is important to note that this is a non-symbolic interconnection - this is the basis for non-rational associations, such as are described in the recent Guardian article, 'Where Belief is Born'.

- fourth, knowledge is personal. Your 'belief' that 'Paris is the capital of France' is quite literally different from my belief that 'Paris is the capital of France'. If you think about it, this must be the case - otherwise Gestalt tests would be useless; we would all utter the same word when shown the same picture.

- fifth, what we call 'knowledge' (or 'belief', or 'memory') is an emergent phenomenon. Specifically, it is not 'in' the brain itself, or even 'in' the connections themselves, because there is no 'canonical' set of connections that corresponds with 'Paris is the capital of France'. It is, rather (and carefully stated), a recognition of a pattern in a set of neural events (if we are introspecting) or behavioural events (if we are observing). We infer to mental contents the same way we watch Donald Duck on TV - we think we see something, but that something is not actually there - it's just an organization of pixels.
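To make the second and third points concrete, here is a toy Hopfield-style network in Python - an illustrative construction, not a model drawn from Hinton's paper. Two 'beliefs' are stored as patterns of activity over the same eight units: no single unit is the belief, the two patterns share units, and a degraded cue is nonetheless completed to the stored pattern.

    def store(patterns, n):
        # Hebbian outer-product learning over all pairs of units.
        w = [[0.0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        w[i][j] += p[i] * p[j] / len(patterns)
        return w

    def recall(w, cue, steps=5):
        # Each unit repeatedly takes the sign of its weighted input.
        s = list(cue)
        for _ in range(steps):
            s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
                 for i in range(len(s))]
        return s

    paris = [1, 1, 1, 1, -1, -1, -1, -1]   # 'Paris is the capital...'
    fred  = [1, 1, -1, -1, 1, 1, -1, -1]   # 'My dog is named Fred'
    w = store([paris, fred], 8)            # both held in ONE set of weights

    noisy = list(paris)
    noisy[0] = -1                          # degrade the cue
    print(recall(w, noisy) == paris)       # True: the pattern completes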

There has been a lot of discussion about the fifth point recently because of observations of similar phenomena in the physical world, in Duncan J. Watts's discussion of social networks in 'Six Degrees', Albert-Laszlo Barabasi's in 'Linked' and Mark Buchanan's in 'Nexus'. Steven Johnson has popularized the concept in his book 'Emergence'. What is significant here (though none of the writers have said it in just this way) is that semantic content can 'emerge' from networks, including especially dynamic networks, of interconnected entities, whether they are neurons, brains or crickets.

I won't go into detail on this, as I have done so elsewhere, but it is necessary to highlight a critical point: emergent phenomena are not causal phenomena. That is, the picture of Richard Nixon does not (say) 'cause' you to think of the disgraced former president. Emergent phenomena require a perceiver, someone to 'recognize' the pattern being displayed in the medium. And this recognition depends on a relevant (but not strictly defined) similarity between one's own mental state and the pattern being perceived. That's *why* perception (and language, etc.), unlike strict causation, is context-sensitive.

And it is worth noting that the same works the other way. There is no means for a student to 'cause' (strictly speaking) a recognition on the part of, say, an examiner, that he or she 'knows that Paris is the capital of France'. What is essential (and, I might add, ineliminable) is that the complex of this person's behaviours be recognized as displaying that knowledge. As Wittgenstein says, we recognize that a person believes the ice is safe by the manner in which he walks on the ice. And because this demonstration is non-causal, it depends on the mental state of the examiner - and worse, because (quite literally) we see what we want to see, on the prior disposition of the examiner.

To return to constructivism. When I see people use a phrase like 'construct meaning', it is, to me, quite literally meaningless. Meaningless not merely because no process of construction occurs (the acquisition of memory, etc., is an associative process, a natural function of the brain, rather than an intentional process), but because meaning is quite literally not something that can be constructed at all, no more than (say) we can construct 'distance' or 'school spirit'. The meaning, quite literally, does not exist in the mind; saying that we are 'constructing' meaning is to (illicitly) appeal to a folk psychological theory of mind, whether we intend to or not.

In an email message not so long ago, Farhad Saba argued that online learning constitutes no new paradigm because it introduces nothing new to distance education. Saba's point, and it is actually a good one, is that the replacement of one communications medium by another does not change the theory. The nature of communication, of information, or of interaction does not change merely because a wire is used instead of some other physical medium.

What is essential to the theory is the process of communication, the transfer of information. Whether we use a book, an instructor, or a wire, learning is the result of a process of communication, and this communication is improved - observably and measurably so - through a process of interaction. Hence, argues Saba, online learning, insofar as it postulates only a change of medium, introduces no new element to the theory, and hence does not constitute, as he says, a new paradigm.

And it is with this assertion that I disagree. But I need to be clear about how I am disagreeing. The mere use of a computer network to facilitate communication adds nothing new over and above what we have seen before. There is abundant evidence that the processes and mechanisms that inform traditional distance learning theory may be accomplished online.

Indeed, this approach has characterized most of what we call 'online learning' to date; in my own work I have characterized this as 'traditional online learning' and in other work have described it as 'using new technology to replicate old processes'. It's like replacing a piece of wood with a shovel; a shovel may represent no advance over a piece of wood if it is used merely to, say, act as a fence rail or support an awning, but a shovel adds a whole new set of affordances previously unseen in pieces of wood, affordances that suggest a new practice, in this case, digging.

Two major sets of affordances offered in online learning are not found in traditional learning. First, online, communication occurs not through a channel, but through a network. And second, communication flows not merely through a passive medium but through a computational environment. Online learning embodies these affordances, and they constitute a part of what is meant by online learning; this is what I demonstrated with reference to my 1998 paper.

In the earlier theory, there is a direct causal connection between states of affairs in the communicating entities; it is, therefore, a causal theory. But in the latter theory, there is no direct causal connection; it is what would be called (in the parlance of the new theory) an emergentist theory (that is, it is based on emergence, not causality).

That is part of what makes Saba's demands, seductive as they are, so misleading. He calls for "evidence that show this claim is true" and "studies to substantiate this claim". Such demands are typical of causal theories; following the methodology outlined by theorists like Carl Hempel, they require an assessment of initial conditions, an intervention, and a measurement of observed difference, as predicted by a (causal) generalization.

They are, like most Positivist and Positivist-inspired theories, reductive in nature; that is why, for example, we expect to find something like a reductive entity, 'the message', 'the information', 'the learning', and the like. They are also aggregationist; the presumption, for example, is that knowledge is cumulative, that it can be assembled through a series of transactions, or in more advanced theories, 'constructed' following a series of cues and prompts.

The allure of a causal theory is also that there appears to be no alternative. If there is no causal connection between teacher and learner, then how can any learning take place, except through some sort of divine intervention? Once we have established and begun to describe the causal process through which information is transacted from teacher to learner, we have pretty much claimed the field; any further account along these lines is an enhancement, an embellishment, but certainly not something new.

And indeed, the proposal that learning is supported using 'computers in a network' does not take us out of the causal realm; it does not involve a mystical hand of God. There is a physical, causal connection between entities in the network, and this is what makes communication possible. But the transfer of information does not reduce to this physical substrate; contrary to the communications-theoretical account, the new theory is a non-reductive theory.

Philosophically, there is substantial support for emergentist theories of knowledge. Philosophers have come up with the concept of 'supervenience' to describe something (sometimes called a 'mental state', though I resist that terminology) that is not the same as (i.e., not reducible to) physical phenomena, but which is nonetheless dependent on them. See Donald Davidson, 'Mental Events'. Also, as mentioned above, Paul Churchland.

Thus, collections of physical states may share the same non-physical state; this non-physical state may be described as a 'pattern', or variously, 'a mental state', 'information', a 'belief', or whatever. Knowledge (and other mental states, concepts, and the like) when represented in this way are 'distributed' - that is, there is no discrete entity that is (or could be) an 'instance' of that knowledge.

Computationally, the theory also enjoys support. It is based in one of the two major approaches to artificial intelligence. When we think of AI, we usually think of programs and algorithms - the usual stuff of information, signals and channels in computer theory, closely tied to associated concepts in communication theory. The 'General Problem Solver' of Newell and Simon, for example, takes a 'symbol processing' approach to computation in AI.

As discussed above, we see a similar theory expressed in the theories advanced by people like Fodor and Chomsky: the idea that cognition is (essentially) reducible to a physical symbol set, and that instances of cognition and transaction are therefore governed by the same mechanism. Against this, however, and arguably superior, is the 'connectionist' approach to AI, as described above in the work of Minsky and Papert or Rumelhart and McClelland.

Mathematically, there is additional support. The properties of networks, as distinct from (typical) causal systems, are expressed in a branch of graph theory, the study of which has recently come into prominence because of the work of Watts and Buchanan. These studies show not only how networks come to be structured as they are but also illustrate how something like, say, a concept can become a 'network phenomenon'.
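The core result is easy to reproduce. The sketch below is a toy - a ring lattice with a few random shortcuts, much simpler than the models Watts actually studies - but it shows the essential point: a handful of long-range connections collapses the average distance between nodes, the 'small world' effect.

    import random
    from collections import deque

    def avg_path_length(adj):
        # Mean shortest-path length over all ordered pairs, by BFS.
        n, total = len(adj), 0
        for start in range(n):
            dist = {start: 0}
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            total += sum(dist.values())
        return total / (n * (n - 1))

    n = 60
    ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    print(avg_path_length(ring))    # about 15 steps on average

    random.seed(1)
    for _ in range(6):              # add a few random shortcuts
        a, b = random.sample(range(n), 2)
        ring[a].add(b)
        ring[b].add(a)
    print(avg_path_length(ring))    # sharply smaller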

Overall, what emerges from these separate strains of thought is a theory of learning that:

(a) is distinct from the communications-theoretic approach postulated by Moore and others, and demonstrably so,

(b) is enabled uniquely through learning using 'computers in a network',

(c) is created as an artifact of learning using 'computers in a network', that is, is a predictable consequence of the use of this technology, even if not explicitly designed,

(d) has enumerable pedagogical consequences, and

(e) is such that these consequences, and their relation to learning, can be observed and empirically measured (though not in a straightforward causal analysis, but rather through a process of recognition).

How is the new theory distinct? It seems to me that its essence is captured in the old expression, "You can teach people facts, but you can't teach them common sense." The type of learning enabled by, and made more likely by, the use of computer networks is more like the learning of common sense than it is like the learning of facts.

This suggests not only a difference in methodology - something other than the transfer of information through communication - but a difference in result - a deeper, more nuanced form of learning, learning which is at once more useful and more appropriate than that obtained using traditional practices.

This difference is best understood by looking at some differing properties of the new, networked, form of learning.

First, as mentioned above, and key to much of what follows, is the idea of 'distributed representation'. What this means is that there is no single 'place' where knowledge resides. Rather, as George Siemens outlines in 'Connectivism', knowledge resides in the network as a whole - the physical network composed of learners, teachers and practitioners - and not in any given part of the network.

The communications approach presupposes (at least in part) that there is some entity, a 'teacher', in whom the knowledge resides; the process of learning is therefore a facilitation of channels of communication between teacher and learner (at a minimum). But if knowledge resides in the network as a whole, it is much less clear that there is sense to be made of the role of the 'teacher', much less of creating a channel through which the knowledge provided by a teacher can flow.

This is a hard concept and one I have tried in the past to explain through the use of metaphor. For example, I have argued, where might we find the knowledge of 'how to fly to Europe'? Such knowledge involves not only the piloting and navigation of an aircraft, but also the knowledge of how to build an aircraft, how to fabricate the materials out of which the aircraft is built, how to program the aircraft's computers, principles of aerodynamics, and more. The person who knows how to land a 747 is distinct from the person who knows how to make the tires on which the airplane lands. We sometimes represent the knowledge of 'how to fly to Europe' as though it were contained in a single individual, but in fact, the people we credit with this knowledge are (literally) riding on and depending on a great deal that is out of their ken.

Moreover, this concept - the distribution of knowledge - applies not only to obviously complex bits of knowledge, such as 'flying to Europe', but even to basic bits of knowledge, such as '2+2=4' or 'this is a rabbit'. As theorists such as Quine have shown, even our basic bits of factual knowledge rest on a set of assumptions, a theoretical framework, if you will (or, as Quine styles it, a 'web of belief'), not all of which may be contained in the mind of the individual making the assertion. In order to know that '2+2=4' one needs to know a great deal surrounding the concept of counting, the symbolic representation of physical quantities, the verbal signals associated with those symbols, and more. There is a clear distinction to be drawn between a child being able to utter the words '2+2=4' and knowing that '2+2=4', and that difference lies in the cultural or community surround in which the child is immersed.

What online learning does is not merely to communicate information but to create such a network. Prior to the advent of online learning, all such networks were local - they were, even in instances of distance learning, physically constrained. But with online learning comes not only a much wider, more diverse network, but also the idea that (a) the network may be based on non-physical (or emergent) properties, (b) that the individual may choose to belong to or not belong to a network, and (c) that an individual may assume multiple identities or memberships in multiple networks.

The theory of distributed representation has a profound implication for pedagogy, as it suggests that learning (and teaching, such as it is) is not a process of communication, but rather, a process of immersion. Put loosely, it suggests the idea of teaching not by telling or even demonstrating but rather through the creation (or identification) of an environment into which a learner may be immersed.

Indeed, pushed further (as the concept should be), it suggests that the traditional distinction between teaching and practice ought to be collapsed, that there is no distinction in kind between, say, being a 'practising physicist' and 'practising being a physicist'. Learning to be a physicist is to place oneself inside the community of practice (as Etienne Wenger would say), to learn as if through osmosis how to solve 'the problems at the end of the chapter' (as Thomas Kuhn would say). In contemporary learning theory, it involves the design of such environments as games or simulations (Prensky, Papert, Gee, Aldrich) or the explicit immersion of the workplace into the community (Cross).



Stephen Downes, Casselman, Canada
stephen@downes.ca
