Jun 19, 2007
I have just finished a presentation to the British Council consisting of a video and a short discussion. I'm not happy with the result - partially because the process of producing the video seemed to be cursed (including one crash that wiped out hours of work - Camtasia has no autosave! Who knew?) and partially because I didn't feel comfortable with the discourse.
The video production is one thing, and I can live with a more or less proficient video because it's part of the ongoing process of learning a new way to communicate. But I'm less sanguine about the discourse. I have a sense of what went wrong with it - I even talked about that a bit during the session - but still it nags at me with deeper issues still unresolved.
We weren't very far into the discussion when I made the comment that "if you're just presenting information, online is better than the traditional classroom." The point I was trying to make was that the unique advantage of the classroom is that it enables face-to-face interaction, and that it should be used for that, leaving other things to other people.
And so, of course, someone asked me, "How do you know?" Which stopped me - not because I don't know - but because of the utter impossibility of answering the question.
There are so many differences in community - the different vocabularies we use and the different assumptions we share, for example. For me to express point A in such a way that it will be understood the way I understand it, I need to work through a fair amount of background. But in a session like this - a 20 minute video and a few seconds of discussion - there was no way I was going to be able to accomplish that.
And this carries over to differences in epistemology. The question of 'How do you know' means different things to different people. In some cases, it's not even appropriate - if a football coach instructs a player, the player doesn't say "How do you know" because he knows that the coach isn't set up to answer questions of that sort (he'll say, "I depend on my experience" or some such thing, offering a statement that has no more credibility than the original assertion). In other cases, some sort of process or set of conditions is assumed - and this varies from discipline to discipline, community to community.
In this particular instance I was speaking at a conference on blended learning. So there's a certain perspective that has already been adopted, one that already says that the classroom should not be abandoned. Indeed, the classroom is like the baseline reference, and the role of ICT is to support by being what the classroom cannot be - being available at home, for example, or at midnight, or around the world. ICT is about enhancing learning, in the blended learning model. And this picture couldn't be further from my own model if it tried. For me, it felt like going to a prayer meeting and talking about the role atheism could play in the devotee's life.
You see, from where I sit, blended learning is a bit like intelligent design. It's a way for people to keep hold of their traditional beliefs, to maintain the primacy of the classroom, the primacy of authority in education, the primacy of the information-transfer model of learning, and at the same time (because it's blended, you see) to appear as advocates of new learning technologies, including (as was the subject of the conference) Web 2.0. It's faith pretending to be science. While in my world, there is basically no role for the classroom at all. It's irrelevant.
To their credit, they were willing to let me have that, giving me room to reinvent the face-to-face interaction (which I do believe in) to allow full and proper play for Web 2.0 and ICT in general. But I am still faced with the fundamental questions: how do I explain what I mean, and how do I know (or show I know) it is true?
To take a case in point: I said "if you're just presenting information, online is better than the traditional classroom." What I thought I was making was a straightforward assertion about the properties of the traditional classroom and the online presentation of information. I wanted to bring this out but found that I didn't have the words.
For example, information is transmitted online at much greater bandwidth than in a classroom. This is partially because a person standing at the front of the room can only speak at a certain speed. The words only come out so fast - and at a fraction of the speed they can be read (at least by most people). And in a classroom the instructor must attend to the needs of all students, which means there will be periods of 'dead air', where one student is being addressed at the expense of everyone else, who must sit and wait.
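As a rough back-of-the-envelope illustration (the rates here are assumed round numbers, not measurements): at a speaking rate of about 150 words per minute and a silent reading rate of about 300 words per minute, the same hour's worth of spoken words can be read in roughly half the time - before even counting the 'dead air'.

```python
# Rough comparison of delivery time for the same material.
# The rates are assumed round numbers for illustration, not measurements.
words = 9000                  # roughly an hour of continuous speech at 150 wpm
speaking_rate = 150           # words per minute (assumed)
reading_rate = 300            # words per minute (assumed)

print(words / speaking_rate)  # 60.0 minutes to hear it
print(words / reading_rate)   # 30.0 minutes to read it
```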
I wanted to say this, but I couldn't say this, because the audience must already know this - and yet, despite this knowledge, they will still favour classroom delivery. Which is why what I thought was a statement of fact - that "if you're just presenting information, online is better than the traditional classroom" - became a statement of opinion, one that needed some sort of evidence. From my perspective, it was as though I had said "the sky is blue" and someone (who apparently believed there was no sky) asked me how I knew. How do you explain? How do you argue?
What could 'better' even mean in such a context?
Because my own statement - that "if you're just presenting information, online is better than the traditional classroom" - doesn't even make sense in the context of my own theory, because I do not support an information-transfer theory of education. I'm in the position where I'm trying to discuss the relative advantages of online and in-class learning, and trying to place myself into the context of the existing discussion, which works to a certain point, but which vaporizes when pressed in certain ways.
How do I know it is better? Well in this world there are certain outcomes to be expected, and means of measuring those outcomes, so that the relative efficacy of classroom instruction and online instruction could be compared, by conducting pretests and post-tests against standardized evaluations, using standardized curricula. And the best I could say, under such conditions, is that there is no difference, based on 40 years of studies. Which they must know about, right?
All this is going through my mind as I seek to answer the question.
I consider the possibility that by 'better' he means 'more efficient'. Because here I could argue (with some caveats about production methods and delivery, the sort of things I outline in Learning Objects) that the use of online delivery methods is much cheaper than the very labour-intensive methodology of the classroom. That we are paying, for example, research professors (who don't even want to teach) very high salaries to accomplish something that could be done just as well using multimedia.
So I concluded that he was looking for evidence of the usual sort - studies that showed knowledge was more reliably transferred (or at the very least, implanted) using ICTs than in classroom instruction. Probably such studies exist (you can find a study to support almost anything these days). But I am again hitting the two-fold dilemma.
First, our conception of the task is different. I had just come from reading and writing about associative learning. "The result in the brain is strengthening or weakening of a set of neural connections, a relatively slow process." It's not about content transfer, it's about repeated exposure (preferably where it is highly salient, as this impacts the strength of the neural connection). The classroom plays almost no role in this; at best it focuses the student's attention, so that subsequent exposure to a phenomenon will be more salient.
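To make that concrete, here is a minimal toy sketch (my own illustration, with invented names and numbers, not a claim about actual neural physiology) of what 'repeated exposure, weighted by salience' looks like as a connection-strengthening rule: each exposure nudges a connection a little, a salient exposure nudges it harder, and no single exposure does much on its own.

```python
# Toy sketch of associative strengthening through repeated exposure.
# All names and numbers here are invented for illustration only.

connections = {}  # (item_a, item_b) -> connection strength, between 0 and 1

def expose(item_a, item_b, salience=1.0, rate=0.05):
    """One exposure to a pair of items; salience scales the adjustment."""
    key = (item_a, item_b)
    strength = connections.get(key, 0.0)
    # Nudge the strength toward 1.0; a single exposure changes very little.
    connections[key] = strength + rate * salience * (1.0 - strength)

# A single lecture: one pass, modest salience.
expose("ice", "danger", salience=0.5)
print(connections[("ice", "danger")])   # ~0.025 - barely any change

# Practice: many exposures, some of them highly salient.
for _ in range(50):
    expose("ice", "danger", salience=0.8)
print(connections[("ice", "danger")])   # much stronger, still short of 1.0 - a slow process
```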
This is (as so often happens) abutted directly against corporate or institutional objectives: the fact that trainers and teachers have certain things they need to teach their students, and that this is generally non-negotiable (to me, this is a lot like the Senate legislating that the value of Pi is 3, but I digress); and the claim that, evidently and by all evidence, these objectives can be accomplished using classroom instruction, and moreover might not be accomplished using ICTs.
The evidence, of course, is the set of successful exam results. One would think, with the experience of No Child Left Behind behind us, that we would be sensitive to the numerous and multifarious means of manipulating such results. I have written before about how such tests can't be trusted, about how the proposition that there can be (so-called) evidence-based policy should not be believed, and I've linked to the misconceptions people carry with them about this. But I can't shake in people the belief that there is, after all is said and done, some way to measure whether one or the other is better.
The thing is, there is no definition of 'better' for which we could set the parameters of such a measurement, and even if there were, the determinants of 'better' are multiple and complex. A person's score on a test, for example, is subject to multiple and mutually dependent factors, such that you cannot control for one variable while testing for the others. Any such measurement will build into its methodology the outcome it is looking for.
The problem is - according to everything we seem to know - unless there is some way of measuring the difference, there is no way to know the difference. Even if we don't believe that "if it can't be measured, it doesn't exist," it must be that measurements give us some sense of what is better and what is not - that they can at least approximate reality, if not nail it down precisely. I don't agree - the wrong measurement can suggest that you are succeeding, when you are failing. Sometimes these wrong measurements are deliberately constructed - the phenomenon of greenhouse gas intensity is a case in point.
At a minimum, this position takes a good deal of background and analysis to establish. At worst, attempting to maintain such a position leaves open the charge of 'charlatan'. Responses like this: "Each time I read a student's paper containing 'I think, I feel, I believe,' I am aggravated, acerbically critical, and given to outbursts of invective: 'Why do I care what you feel?' I write, roaring with claw-like red pen. 'This is not an emotional experience. Believe? Why would you think you can base an argument on unsubstantiated belief? You don't know enough to believe much of anything. Think? You don't think at all. This is mental masturbation. Without evidence you have said exactly nothing!'"
Am I a charlatan when I say things like "if you're just presenting information, online is better than the traditional classroom?" Even if I have nothing to personally gain from such statements, am I leading people down the garden path? It is very difficult, in the face of things like the British Council presentation, to suppose people are thinking anything else. "It's a nice line," they think to themselves as I stumble in front of them, attempting lamely to justify my lack of evidence, "but there's no reason I should believe it."
Which raises the question - why do I believe it?
I have made decisions in my own life. I have chosen this way of studying over that. I have chosen this way of communicating over that. I didn't conduct a study of which way to learn and which way to communicate. I operated by feel. There's no way of knowing whether I might not have been more successful if, say, I had stayed in the academic mainstream, published books and papers, assigned my copyrights to publishers, learned through classes and conferences and papers and lectures.
But, of course, that was never the decision I made. At no point did I sit down and say, I will eschew traditional academia, I will learn informally, through RSS and God-knows-what Web 2.0 technology, and (while I'm at it) I will embrace Creative Commons and lock publishers out of the loop. Indeed, I don't think I could have imagined all of that, were we to suppose some fateful day when such a decision would have been made. I made the decision one small step at a time, one small adjustment at a time, as though I were surfing a wave, cutting, chipping, driving forward, each decision a minute adjustment, each characterized not by measurement, not by adherence to principle, but by feel, by reaction, by recognition.
This is important. George Siemens says that knowledge is distributed across the network, and it is, but how we know is irreducibly personal.
What does that mean? Well, part of what it means is that when we are actually making decisions, we do not in fact consult principles, best practices, statistics or measurements. Indeed, it is even with some effort that we refrain from playing the hunch, in cases where we (cognitively) know that it's a bad bet (and we walk away (and I've had this feeling) saying, "I know the horse lost, but I still should have bet on the gray," as if that would have made the difference).
Malcolm Gladwell says, make snap decisions. Trust our instincts. What this means is very precisely an abandonment of principle, an abandonment of measurement, in the making of decisions. It's the same sort of thing. My 'knowing' is the culmination of a lifetime of such decisions. I have come to 'know' that "if you're just presenting information, online is better than the traditional classroom" in this way - even though the statement is, in the context of my own theories, counterfactual. I know it in the same way I know that 'brakeless trains are dangerous' - not by any principle, not by any evaluations of actual brakeless trains, but because I have come to know, to recognize, the nature (and danger) of brakeless trains.
We sometimes call this 'the weight of experience'. And this is why my 'knowledge' differs from yours. Not because one of us, or the other, has failed to take into account the evidence. But because the weight of our respective experiences differs.
This gets back to the question of why 'presenting information' will not be 'successful' (let alone 'better') in my view. Recall that I said that the wrong measurement can suggest that you are succeeding, when you are failing. We can present information, and then test students to see if they remember that information. If they are successful on the test, then we say that they 'know' that information.
My experience with my presentations is different. I can make a presentation - such as, say, today to the British Council - and walk away feeling that while the audience heard me, and while they could probably pass a test (I am a good presenter, after all, even on my bad days, and they are smart people, with exceptional memories), I would not say that they 'know' what I taught them. Wittgenstein says, "Somebody demonstrates that he knows that the ice is safe, by walking on it." These participants may leave the conference being able to repeat the words, but scarcely any of them will change their practice, eschew the classroom, embrace the world of Web 2.0.
How can I say that they know my position, if all they do (all they can do?) is repeat the words? If they 'knew' my position, they would change their practice - wouldn't they? If they had the same knowledge I had - knowledge carrying the same weight of experience - then they would naturally, without the need for convincing (or even training), make the same decisions I did. Without needing even to think about it. That's what Dreyfus and Dreyfus call 'expert knowledge'. "He does not solve problems. He does not even think. He just does what normally works and, of course, it normally works." And it can't be obtained by measurement, it can't be expressed in principles, it can't be taught as a body of knowledge, and it can't be measured by answers on a test.
A presentation such as the one I gave at British Council this morning (or at CADE a month ago) isn't a transfer of information. People may acquire some words and expressions from me, but they won't acquire knowledge, because even if my presentation were perfect, it could not perform the repetition of instances required in order to create a weight of experience on a certain subject. The best I could do is to repeat a word or phrase over and over, in different ways and slightly different contexts, the way advertising does, or the comedian that kept repeating 'Via' ("Veeeeeee.... ahhhhhhh").
A presentation is a performance. It is a demonstration of the presenter's expertise. The idea is that, through this modeling - through facility with the terminology, through demonstration of a methodology, through the definition of a domain of discourse (which will be reinforced by many other presentations on the same subject - if you hear Wittgenstein's name often enough, you come to believe he's a genius) - you learn what it is to be 'expert'.
A lecture won't impart new knowledge on older, more experienced listeners at all - it acquires the status of gossip, serving mainly to fill people in on who has been saying what recently, what are the latest 'in' theories or terms. The point of a talk on 'Web 2.0' is to allow people to talk about it, not to result in their 'knowing' it. With younger participants (interestingly the least represented at academic conferences, lest they be swayed by people other than their own professors) the inspiring demonstration of academic expertise serves as a point of departure for a lifetime of similar practices that will, in a generation, result in similar expertise (people did not become disciples of Wittgenstein because they believed him - it is very unlikely that they even understood him - but by the fact that he could (with a glance, it seemed) utterly demolish the giants in the field of mathematical philosophy).
I have spoken elsewhere about what sort of knowledge this is. It is - as I have characterized it elsewhere - emergent knowledge, which is known not by being perceived (i.e., it is not sensory, the way 'red' or 'salty' are sensory) and not by being measured, but by being recognized. It is a 'snapping to' of awareness, the way we see a duck (or a rabbit) or suddenly discover Waldo.
'Recognition', in turn, amounts to the exciting of a set of connections, one that is (relevantly) similar to the current content of perception. It is a network phenomenon - the activation of a 'concept' (and its related and attendant expectations) given a certain (set of) input condition(s). When we present certain phenomena to the network, in the form of a set of activations at an 'input layer' of neurons, then based on the set of existing connections in the network, some neurons (and corresponding connections) are activated, while others remain silent; this present experience (sometimes) produces a response, and (in every case) contributes to the set of future connections (one connection is subtly strengthened, another subtly weakened).
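As a rough sketch of what I mean (a toy model of my own, with assumed numbers, not a description of real neurons), recognition can be pictured as a stored pattern of connections that 'snaps to' when the input sufficiently excites it, while every presentation, recognized or not, subtly adjusts the connections themselves:

```python
import numpy as np

# Toy network: a layer of input units connected to a single 'concept' unit.
# All names and numbers are assumptions made for illustration.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=8)   # existing connections, shaped by past experience
threshold = 1.0

def present(pattern, rate=0.05, decay=0.01):
    """Present an input pattern; report recognition; adjust the connections."""
    activation = float(np.dot(weights, pattern))
    recognized = activation > threshold        # the 'snapping to' of awareness
    # Every presentation contributes to future connections: those carrying
    # active input are subtly strengthened, the rest subtly weakened.
    weights[:] += rate * pattern
    weights[:] -= decay * (1.0 - pattern) * weights
    return recognized

duck = np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=float)

# Early presentations may not trigger recognition; repeated presentation
# strengthens the relevant connections until the pattern snaps into place.
for i in range(1, 31):
    if present(duck):
        print("recognized on presentation", i)
        break
```

The point of the sketch is only the shape of the process: recognition is not transferred in a single presentation; it emerges from the accumulated adjustment of connections.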
When presented with a certain set of input phenomena, you can remember - to a certain degree. If given sufficient motivation, you can associate certain noises (or certain shapes) with each other. On being told, I can remember that 'Paris' is the 'capital' of 'France', and even repeat that information on a test (and moreover, remember who said it to me, and when, and under what circumstances), but I cannot be said to know it unless I demonstrate (a disposition?) that if I want to see the President of France, then I go to Paris. And this is not the sort of thing that is on a test - what is on a test is the sort of thing that allows a person to have 'learned' that Sydney is in Australia, and even how to book an airline ticket to Sydney, and still not notice that they are traveling to Canada.
How do I know? Because - by virtue of my experiences with traditional and online settings - if I were trying to support knowledge in a person, I would not turn to the classroom, but rather, some sort of practice, and even if I were (because of policy or the demands of corporate managers) trying to support remembering in a person, I would contrive to have it presented to them, over and over, by the most efficient and ubiquitous means possible, which today is via ICTs.
How do you know whether to believe me?
You don't. Or, more accurately, there is nothing I can provide you that will convince you to believe me if you are not already predisposed to believe me. The best I can do is to suggest a course of action (i.e., a set of experiences that you can give yourself) such that, after these experiences, you will come to see the world in the same way I do. That is why my talk to the British Council (and to many other audiences) described just that, a set of practices, and not a set of theorems, or experimental results, or the like.
The practices I presented constitute (one way of describing) the practices I undertake in my own learning and development. The evidence, then, of whether these practices work lies in whether you believe that I have demonstrated my expertise. This, in turn, depends on your own sense of recognition - some people will recognize that I have achieved a certain degree of expertise, while others will leave the room with the verdict of 'charlatan'.
And what follows is a subtle dance - the connectivism George Siemens talks about - where you demonstrate your expertise and I demonstrate mine, where each of us adopts some of the practices of the other (or rejects them, as the case may be), where the connections between people with similar practices are reinforced, and where knowledge is demonstrated in such a community not by what it says (hence the fate of critical theory) but by what it does. This is the process (and I have explained elsewhere the properties of the network that will grant the process some degree of reliability).