Downes.ca ~ Stephen's Web ~ Dr. Jonassen - Let Us Learn to Solve Problems

Stephen Downes

Knowledge, Learning, Community

Sept 21, 2005

Thad Crews, Jr. wrote:

Given the lack of response by anyone with empirical data to support their opinions, I am guessing this research area is primarily persuasion rather than science or even systematic. If the existing research community is satisfied with discussing thoughts and ideas apart from experimental findings, then I guess I am as well. However we have to realize why those outside the community may not be moved by our opinions. Still, it has been a great discussion! :-)

Well, this is an opinion that is shared not only by those outside the community but also by many who make administrative decisions within the community itself. There is indeed a movement, one that holds considerable sway within political and administrative circles (the Campbell Collaboration), which advocates this point of view. One sees the view replicated in external studies of learning effectiveness (for example, the OECD PISA studies).

The premise of such a view is that the effectiveness of interventions - in our field, technological or pedagogical interventions - can be measured using a scientific method based on experimentation. The methodology demanded is a rigorous set of blind studies designed, through the use of test and control groups, to isolate and measure the impact of independent variables on desired outcomes. For example, if you wanted to learn whether cue cards improved students' knowledge of mathematics, two otherwise similar groups of students would be tested, one using cue cards and the other not, both before and after the intervention; the difference in learning outcome, if any, could reliably be attributed to the cue cards. The design of such experiments is standardized, allowing for their replication, and the conclusion reached is the result of an analysis of an aggregation of such experiments.
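To make the arithmetic of this design concrete, here is a minimal sketch in Python of how the two groups would be compared. The scores are invented and the cue-card scenario is purely hypothetical; the point is only to show the form of the analysis, not any actual result.

```python
# A minimal sketch of the test/control design described above.
# All scores are invented; the cue-card intervention is hypothetical.
import numpy as np
from scipy import stats

# Pre- and post-test scores for two otherwise similar groups of students.
cue_pre  = np.array([52, 61, 48, 70, 55, 63, 58, 49])
cue_post = np.array([64, 72, 55, 80, 61, 74, 66, 57])
ctl_pre  = np.array([50, 60, 47, 71, 54, 65, 57, 51])
ctl_post = np.array([55, 66, 50, 75, 58, 69, 60, 54])

# Learning gain for each student: post-test minus pre-test.
cue_gain = cue_post - cue_pre
ctl_gain = ctl_post - ctl_pre

# Two-sample t-test on the gains: if the groups really are "otherwise
# similar", any reliable difference in mean gain is attributed to the cards.
t, p = stats.ttest_ind(cue_gain, ctl_gain)
print(f"mean gain (cue cards): {cue_gain.mean():.1f}")
print(f"mean gain (control):   {ctl_gain.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```

The entire inference rests on the phrase "otherwise similar"; what follows is largely about what happens when that assumption cannot be made to hold.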

The question is, is this method of research in education effective? Certainly, there is no shortage of empirical research in learning; the journals are filled with reports (complete with comprehensive statistical analysis) of various interventions tested on various student groups. True, the number of students tested in such research is typically very small (n=36 is not unusual). True, such research very rarely employs a control group. Nonetheless, there is a widespread belief on the part of those working in the field (or, at least, those reviewing submissions to academic journals) that such methodology provides useful and reliable research results.

Of course, such variables are rarely independent. For example, the effectiveness of cue cards (cards with written questions and answers on them) will depend on (say) students' literacy levels. Cue cards will (say) be more effective for students with high literacy levels, and less effective for students with low literacy levels. But it is not possible in the experimental design to isolate literacy levels from the use of cue cards; literacy levels will impact both cue-card and non-cue-card based learning. Therefore, in order to determine the effectiveness of cue cards, you must also measure literacy levels, measure the impact of higher literacy levels on learning, and then subtract this impact from the observed impact of the use of cue cards, leaving a residual impact that is the impact of cue cards independent of literacy. This sort of post-experimental analysis forms the foundation of, say, the Thomas Fuchs and Ludger Woessmann conclusion that computers have a negative impact on learning.
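As a rough sketch of the kind of post-experimental adjustment being described - with all numbers invented, and a plain least-squares regression standing in for whatever analysis Fuchs and Woessmann actually performed - the 'residual' impact of cue cards, once literacy has been subtracted out, might be estimated like this:

```python
# A sketch of covariate adjustment: estimate the effect of cue cards
# net of literacy. The data-generating process below is entirely invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical literacy scores; cue-card use is (by assumption) more
# common among the more literate students.
literacy = rng.normal(100, 15, n)
cue_cards = (literacy + rng.normal(0, 10, n) > 100).astype(float)

# Invented process: literacy drives most of the gain, cue cards add a little.
gain = 0.3 * literacy + 2.0 * cue_cards + rng.normal(0, 5, n)

# Naive estimate: compare mean gains of cue-card users and non-users.
naive = gain[cue_cards == 1].mean() - gain[cue_cards == 0].mean()

# Adjusted estimate: least squares with literacy as a covariate, so the
# cue-card coefficient is what remains after literacy's impact is removed.
X = np.column_stack([np.ones(n), cue_cards, literacy])
coef, *_ = np.linalg.lstsq(X, gain, rcond=None)

print(f"naive cue-card effect:    {naive:.2f}")    # inflated by literacy
print(f"adjusted cue-card effect: {coef[1]:.2f}")  # the 'residual' impact
```

Whether the naive or the adjusted number is the 'real' effect of cue cards is exactly the question raised next: it depends on which variables the analyst chooses to remove.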

But which variables do you eliminate? Cue cards require literacy in order to be effective; nobody supposes that cue cards will work if students cannot read them. But if the impact of literacy is removed from the experimental results, then the effectiveness of cue cards will be found to be minimal or even negative - after all, using cue cards with people who cannot read amounts to a net waste of time. This means that the design of the analysis is of crucial importance to an understanding of the experimental results, and it explains how people like Patrick Bussière and Tomasz Gluszynski can conclude that computers *do* improve learning, contradicting Fuchs and Woessmann, yet based on exactly the same experimental data.

Even more significantly, which variables do you *include* in the experimental design? One of the reasons the PISA study was significant was that it extended beyond the school walls to measure such things as access to a computer at home and parents' socio-economic standing. This was an explicit recognition that factors outside the school can influence the effectiveness of pedagogy and technology inside the school. But why stop with access to a computer at home and parents' socio-economic standing? Why not, for example, measure to determine whether Sesame Street was available to the students being tested? Why not measure for the number of newspapers available in the community? Why not measure for whether students worked after school? Why not measure for the number of video games played, the number of web pages visited, the number of instant messages sent, in the hours outside school?

It can be argued (and I would argue) that the design of the experiment can be (and is) adjusted in order to suit the conclusion you are trying to establish. The designers of the PISA study, which had as a major objective an analysis of public versus private education, were very interested in parental income, because the issue of parental income is often related to the issue of public versus private schooling. Hence they measured for parental income, and in subsequent analysis (such as in Fuchs and Woessmann) factored that out in comparisons of public and private education, and, incidentally, of computer and non-computer aided instruction. Varying this analysis even slightly (for example, measuring not for a child's parents' income, but for a child's classmates' parents' income) would yield a very different result (for example, that children of poor parents who nonetheless hang out with children of rich parents do better than children of poor parents who do not).

Other points could be adduced, but to put the point plainly and bluntly: experimental design in education is a shell game. The result of such scientific research depends more on the assumptions made by the researchers and analysts than on the actual empirical phenomena being studied. Indeed, the empirical results are themselves almost irrelevant; you may as well work with random data. And in practice, what we see is empirical research being used to obscure, rather than highlight, the theoretical or political presumptions informing the outcome.

To understand why this is the case is to understand how science itself works, because scientists, like educational theorists, encounter the same sort of dilemma. Nature, like human brains, can be characterized in three very broad strokes:

First, it consists of multiple mutually dependent variables. Nothing in nature exists in isolation; not only does each entity ineliminably interact with every other entity (which makes, say, the three-body problem so intractable), but entities constitute interactive composites of other bodies (the breakfast you ate this morning quite literally becomes part of the brain that's learning algebra this afternoon). It is impossible, in other words, to fully understand the one without understanding the whole.

Second, the empirical evidence available *underdetermines* the set of possible understandings of the whole. This is especially evident in cases of limited observational evidence (the very same empirical data supported both the Ptolemaic and the Copernican view of the universe), but it applies even in conditions of total observational evidence. Thus, for any set of observational data, multiple and mutually contradictory hypotheses are always equally plausible.

And third, the process of empirical investigation itself changes the empirical data. It changes both the set of data available to researchers (or, in slogan form, you measure what you're looking for; cf. Lakatos on the Millikan oil drop experiment) and, famously, the data being measured itself. The selection of variables problem, highlighted above, is but one instance of this wider set of phenomena.

The upshot is, the idea that you can obtain knowledge merely by observing is false. Ineliminable in any research methodology is what Lakoff would call a 'frame', Kuhn a 'paradigm' and others a 'world view' - a logical, linguistic and theoretical framework describing, among other things, what sort of things there are in the world, how they interact with each other, and how we can know about them. Empirical evidence rarely ever weighs in disputes between adherents of one frame or another - so rarely, in fact, that when it does it is a matter of historical importance: Galileo and the pendulum, or Einstein and the bending of light, for example. As often important (if not more so) is the development of a new logical or linguistic paradigm - Aristotle's Prior Analytics, Descartes's Discourse on Method, Newton's Philosophiae Naturalis Principia Mathematica, or Russell and Whitehead's Principia Mathematica, for example.

Now let me be clear - all this is *not* to say that experimentation and empirical data play no role in science. Rather, it is to say that what counts as empirical evidence and experimental methodology is almost always determined from *within* a paradigm, and seldom ever adjudicates between them.

Let's return to the PISA study, for example, to explain this. The PISA methodology was important for two major reasons: first, researchers measured, not for stated curricular outcomes, but for independently established reading and writing outcomes (which, in turn, according to the report, were selected based on their impact on a student's eventual economic success in the marketplace). And second, researchers measured, not students designated to have achieved a certain level of education (by, say, placement in a grade), but students at a certain age (thus measuring, for some students, achievement in areas they had not yet been taught in certain school systems).

Is this approach empirically wrong? Well, no - if you believe that the purpose of an education is to enable a student to achieve maximal economic performance in a market economy, and if you have previously established a correlation between certain educational outcomes and maximal economic performance, then this methodology is appropriate. The PISA researchers were looking for a very specific *type* of learning, based on a very specific *type* of educational theory (one in which a linear causal relation holds between educational inputs and performance outputs).

Now we look at David Jonassen's theory, which advocates the use of a problem-solving methodology in learning, and ask whether it even makes sense to evaluate it empirically, using a PISA-style methodology, against - well, what? cue cards? computers in learning? memorization and rote? And we ask, what would the empirical determinants of such an investigation be? Whether problem-solving students achieve higher economic performance in market-based societies? Whether problem-solving students achieve higher results on standard grade twelve math tests? And it becomes apparent that a demand for empirical evidence of that sort is not merely ridiculous, it is a demand that, on analysis, does not even make sense. It is, quite literally, an empty demand, because there is no set of conditions that could ever satisfy it.

So how do we decide between competing frames, between competing world views? To a certain degree it is, as Crews suggests, "primarily persuasion". But in a wider sense, it is through a methodology first characterized by Charles Sanders Peirce as 'abduction' and known more popularly today as 'inference to the best explanation'. To describe it in boldest outlines, the methodology is to (a) create, using whatever logical, linguistic and analytical tools are at your disposal, a theoretical framework, and (b) select, among competing frameworks, the best 'fit' for the world as we know it.

How do we determine the best 'fit'? This (of course) varies between theories - Hegel's triadic logic (thesis, antithesis, synthesis) fits the world very differently from the logical positivists' predicate calculus. But loose criteria have (more or less) been adopted generally:

- simplicity - a simple theory is, all other things being equal, preferred over a complex theory (this is sometimes loosely known as Ockham's razor)
- scope - a theory that explains more things is, all other things being equal, preferred over a theory that explains fewer things
- expressiveness - a theory that says more about what we do not observe is, all other things being equal, preferred over a theory that says less (hence, a theory that makes predictions, for example, which can later be tested, is better than a theory that cannot be used to make predictions)

Based on this, then (and keeping in mind that some disputants may employ other criteria), the determination of whether or not to accept problem-based learning becomes a question of whether problem-based learning is simpler, wider in scope, and more expressive than competing theories. And it obtains this standing, not merely on its own merit, but also based on the merit of more general theories (of mind, say) of which it is a part. Specifically: on my reading (which David Jonassen is of course in a better position to offer), this theory is one of a class of theories that may be characterized as 'representationalist' theories of mind. On such theories, human knowledge consists not merely in an accumulation of facts or skills, but rather in the (innate? or intentional?) construction of a set of one or more representations - or models - of the world. Such theories are typically (though not exclusively) based on cognitivist theories of mind, that is, theories that suggest that human mental processes consist in the manipulation and presentation of propositional entities (sentences, facts, states of affairs) - cf. Jerry Fodor, Representations. The idea is that the presentation of learning in the form of problems assists the student not only in creating these models or representations, but also in navigating through them; and the nature of representations is such that learners are able to negotiate not only problems they have learned to solve, but also novel problems, which may be subsumed under the same representation or frame.

In a sense, Jonassen's theory does for individual learning what abduction does for science; it explicitly incorporates the role of abstract entities into knowledge creation in such a way as to allow the knower to view the empirical data (the problem, the evidence) from a point of view (indeed, one could say the problem creates the point of view) rather than as undifferentiated data, thus allowing for the creation of mental representations that are simpler, have wider scope, and greater predictive power than, say, a mere accumulation of data.

Leaving aside questions of whether my characterization of problem-based learning is accurate, we now ask, how would we evaluate - through empirical science - this theory as opposed to, say, rote learning? Let us suppose, for example, that we are the PISA investigators, and that we have just received a large sum of money from the OECD to go out and run some studies. And let us suppose, say, that like the PISA investigators, we have decided that performance against a certain set of outcomes in literacy and mathematics will be the basis against which the effectiveness of problem-based learning and rote memorization will be measured.

I contend that the finding would be (at best): no significant difference.

Why? Because the PISA evaluators measured the accumulation of facts. Because they view learning as the accumulation of a certain set of outcomes, as indicated by performance on a well-known and widely understood set of examination metrics, which practitioners of rote learning can reasonably be predicted to include in the body of learning to be accumulated. The tests measure 'ABC', the rote practitioners have their students memorize 'ABC', and the output is therefore 'ABC'. In problem-based learning, meanwhile, the practitioners teach [A...Z] (in other words, a model of the alphabet, if you will), the students learn the mechanism for generating the sequence, and (assuming the questions are sufficiently clear as to allow the inference) when asked to generate an appropriate response, they generate 'ABC'.

Or, more colloquially: you can teach both a sheep and a human to go to water, and if that's the only learning outcome you measure, then the learning is the same for both humans and sheep. The outcome of empirical research depends on what you ask, what you're looking for.
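The 'ABC' example can be put in code form. The sketch below is entirely a toy - no real assessment works this way - but it shows why a test that asks only for 'ABC' cannot distinguish a memorized answer from a generated one, and why a novel question can:

```python
# Toy contrast between a rote "learner" and a model-based "learner".
# Purely illustrative; not a model of any actual pedagogy or test.
import string

class RoteLearner:
    """Memorizes answers to exactly the questions it was drilled on."""
    def __init__(self, drilled_answers):
        self.memory = dict(drilled_answers)   # question -> answer
    def answer(self, n):
        return self.memory.get(n, "???")      # unseen question: no answer

class ModelLearner:
    """Learns the generating rule: the first n letters of the alphabet."""
    def answer(self, n):
        return string.ascii_uppercase[:n]

rote = RoteLearner({3: "ABC"})   # drilled on the one item the test will ask
model = ModelLearner()

# The test asks only for the first three letters: no significant difference.
print(rote.answer(3), model.answer(3))   # ABC ABC

# A novel question the rote learner was never drilled on.
print(rote.answer(7), model.answer(7))   # ??? ABCDEFG
```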

So what would lead us to favour problem-based learning over rote learning?

If we could show (which I doubt) that the representational theory of mind is simpler than the (behaviourist, cause-effect) theory of mind. Or if we could show (as we probably can) that the representational theory of mind has wider scope (for example, representationalism explains, in a way that behaviourism doesn't, the roles of perspective, emotion and motivation in learning). Or if we could show (as we probably can) that representationalism is more expressive than behaviourism. And in a similar way, in order to support problem-based learning against alternatives within the representationalist framework, we would need to adduce a similar set of criteria.

None of this would look like empirical science, because it is not, at heart, an instance of (the traditional picture of) empirical science. It is the matching of one conceptual framework against another, where the mechanisms of mediation are logical, linguistic and analytical tools, as the nature, capacities and mechanisms of the theory are described and explained. The best argument Jonassen can advance for his theory is to describe the theory; the question of fit is determined, over time, by the cumulative experiences of a multitude of practitioners against nebulous and undefined criteria. And the determination of whether problem-based learning is widely accepted will depend as much on whether we decree that the desirable output of the educational system is improved economic performance in a market-based economy, or adroit adaptation in a rapidly changing environment, as on the merits of the theory itself.

For my own part, my theory of mind is not a cognitivist and representationalist theory of mind, and my theory of education is therefore very different in its fundamentals from Jonassen's. But the difference between us is equally untestable - it amounts (on my phrasing) to trying to determine whether it is better to see the world or to read the word - a nonsense sort of question that would lead to nonsense empirical results. For I - quite literally - see the world in a different way from a Dewey or a Piaget or a Jonassen. And that (of course) becomes my task to describe and explain as well as I can.



Stephen Downes, Casselman, Canada
stephen@downes.ca

