Research and Evidence

Stephen Downes


Half an Hour, May 02, 2015

I wrote the other day that the study released by George Siemens and others on the history and current state of distance, blended, and online learning was a bad study. I said, "the absence of a background in the field is glaring and obvious." In this I refer not only to the specific arguments advanced in the study, which to me seem empty and obvious, but also to the focus and methodology, which seem to me to be hopelessly naive.

Now let me be clear: I like George Siemens, I think he has done excellent work overall and will continue to be a vital and relevant contributor to the field. I think of him as a friend, he's one of the nicest people I know, and this is not intended to be an attack on his person, character or ideas. It is a criticism focused on a specific work, a specific study, which I believe well and truly deserves criticism.

And let me be clear that I totally respect this part of his response, where he says that "in my part of the world and where I am currently in my career/life, this is the most fruitful and potentially influential approach that I can adopt." His part of the world is the dual environments of Athabasca University and the University of Texas at Arlington, and he is attempting to put together major research efforts around MOOCs and learning analytics. He is a relatively recent PhD and now making a name for himself in the academic community.

Unfortunately, in the realm of education and education theory, that same academic community has some very misguided ideas of what constitutes evidence and research. It has in recent years been engaged in a sustained attack on the very idea of the MOOC and alternative forms of learning not dependent on the traditional model of the professor, the classroom, and the academic degree. It is resisting, for good reason, incursions from the commercial sector into its space, but as a consequence it is clinging to antiquated models and approaches to research.

Perhaps as a result, part of what Siemens has had to do in order to adapt to that world has been to recant his previous work. The Chronicle of Higher Education, which for years has advanced the anti-technology and anti-change argument on behalf of the professoriate, published (almost gleefully, it seemed to me) this abjuration as part and parcel of an article constituting part of the marketing campaign for the new study.

When MOOCs emerged a few years ago, many in the academic world were sent into a frenzy. Pundits made sweeping statements about the courses, saying that they were the future of education or that colleges would become obsolete, said George Siemens, an author of the report who is also credited with helping to create what we now know as a MOOC.

"It's almost like we went through this sort of shameful period where we forgot that we were researchers and we forgot that we were scientists and instead we were just making decisions and proclamations that weren't at all scientific," said Mr. Siemens, an academic-technology expert at the University of Texas at Arlington.

Hype and rhetoric, not research, were the driving forces behind MOOCs, he argued. When they came onto the scene, MOOCs were not analyzed in a scientific way, and if they had been, it would have been easy to see what might actually happen and to conclude that some of the early predictions were off-base, Mr. Siemens said.

This recantation saddens me for a variety of reasons. For one thing, we - Siemens and myself and others who were involved in the development of the MOOC - made no such statements. In the years between 2008, when the MOOC was created, and 2011, when the first MOOC emerged from a major U.S. university, the focus was on innovation and experimentation, approached with a cautious though typically exuberant attitude.

Yes, we had long argued that colleges and education had to change. But none of us ever asserted that the MOOC would accomplish this in one fell swoop. Those responsible for such rash assertions were established professors with respected academic credentials who came out of the traditional system, set up some overnight companies, and rashly declared that they had reinvented education.

It's true, Siemens has moved over to that camp, now working with EdX rather than the connectivist model we started with. But the people at EdX are equally rash and foolish:

(Anant) Agarwal (who launched EdX) is not a man prone to understatement. This, he says, is the revolution. "It's going to reinvent education. It's going to transform universities. It's going to democratise education on a global scale. It's the biggest innovation to happen in education for 200 years." The last major one, he says, was "probably the invention of the pencil". In a decade, he's hoping to reach a billion students across the globe. "We've got 400,000 in four months with no marketing, so I don't think it's unrealistic."

Again, these rash and foolish statements are coming from a respected university professor, a scion of the academy, part of this system Siemens is now attempting to join. As he recants, it is almost as though he recants for them, and not for us. But the Chronicle (of course) makes no such distinction. Why would it?

But the saddest part is that we never forgot that we were scientists and researchers. As I have often said in talks and interviews, there were things before MOOCs, there will be things after MOOCs, and this is only one stage in a wider scientific enterprise. And there was research, a lot of it, careful research involving hundreds and occasionally thousands of people, which was for the most part ignored by the wider academic community, even though peer reviewed and published in academic journals. Here's a set of papers by my colleagues at NRC, Rita Kop, Helene Fournier, Hanan Sitlia, Guillaume Durand. An additionally impressive body of papers has been authored and formally published by people like Frances Bell, Sui Fai John Mak, Jenny Mackness, and Roy Williams. This is only a sampling of the rich body of research surrounding MOOCs, research conducted by careful and credible scientists.

I would be remiss in not citing my own contributions, a body of literature in which I carefully and painstakingly assembled the facts and evidence leading toward connectivist theory and open learning technology. The Chronicle has never allowed the facts to get in the way of its opinions, but I have generally expected much better of Siemens, who is (I'm sure) aware of the contributions and work of the many colleagues that have worked with us over the years.

Here's what Siemens says about these colleagues in his recent blog post on the debate:

One approach is to emphasize loosely coupled networks organized by ideals through social media. This is certainly a growing area of societal impact on a number of fronts including racism, sexism, and inequality in general. In education, alt-ac and bloggers occupy this space. Another approach, and one that I see as complimentary and not competitive, is to emphasize research and evidence. (My emphasis)


In the previous case he could have been talking about the promulgators of entities like Coursera, Udacity and EdX, and the irresponsible posturing they have engaged in over the years. But in this case he is talking very specifically about the network of researchers around the ideas of the early MOOCs, connectivism, and related topics.

And what is key here is that he does not believe our work was based in research and evidence. Rather, we are members of what he characterizes as the 'Alt-Ac' space - for Bethany Nowviskie and Jason Rhody, 'alt-ac' was shorthand for 'alternative academic' careers. Or: "the term was, in Nowviskie's words, 'a pointed push-back against the predominant phrase, 'nonacademic careers.' 'Non-academic' was the label for anything off the straight and narrow path to tenure.'" (Inside Higher Ed). Here's Siemens again:

This community, certainly blogs and with folks like Bonnie Stewart, Jim Groom, D'Arcy Norman, Alan Levine, Stephen Downes, Kate Bowles, and many others, is the most vibrant knowledge space in educational technology. In many ways, it is five years ahead of mainstream edtech offerings. Before blogs were called web 2.0, there was Stephen, David Wiley, Brian Lamb, and Alan Levine. Before networks in education were cool enough to attract MacArthur Foundation, there were open online courses and people writing about connectivism and networked knowledge. Want to know what's going to happen in edtech in the next five years? This is the space where you'll find it, today.

He says nice things about us. But he does not believe we emphasize research and evidence.

With all due respect, that's a load of crap. We could not be "what's going to happen in edtech in the next five years" unless we were focused on evidence and research. Indeed, the reason why we are the future, and not (say) the respected academic professors in tenure track jobs is that we, unlike them, respect research and evidence. And that takes me to the second part of my argument, the part that states, in a nutshell, that what was presented in this report does not constitute "research and evidence." It's a shell game, a con game.

Let me explain. The first four chapters of this study are instances of what is called a 'tertiary study' (the term is repeated eight times in the body of the work). And just as "any tertiary study is limited by the quality of data reported in the secondary sources, this study is dependent on the methodological qualities of those secondary sources." (p. 41) So what are the 'secondary sources'? You can find them listed in the first four chapters (the putative 'histories') (for example, the list on pp. 25-31). They were selected by doing a literature search, then culling the results to those that met the study's standards. The secondary surveys, in turn, round up what they call 'primary' research: direct reports from empirical studies.

Here's a secondary study that's pretty typical: 'How does tele-learning compare with other forms of education delivery? A systematic review of tele-learning educational outcomes for health professionals'. The use of the archaic term 'tele-learning' may appear jarring; many of the studies are from the early 2000s, but I selected this one as an example because it's relatively recent, from 2013. This study (and again, remember, it's typical, because the methodology in the tertiary study specifically focuses on these types of studies):

The review included both synchronous (content delivered simultaneously to face-to-face and tele-learning cohorts) and asynchronous delivery models (content delivered to the cohorts at different times). Studies utilising desktop computers and the internet were included where the technologies were used for televised conferencing, including synchronous and asynchronous streamed lectures. The review excluded facilitated e-learning and online education models such as the use of social networking, blogs, wikis and Blackboard™ learning management system software.


Of the 47 studies turned up by the search methods, 13 were found to be useful for the purposes of this paper. It is worth looking at the nature of this 'primary literature':

[The table of the 13 included studies is omitted here; you can view the data in the original study, pp. 72-73.]

Here's what should be noticed from these studies:

  • They all have very small sample sizes, usually fewer than 50 people, and never as many as 200 people
  • The people studied are exclusively university students enrolled in traditional university courses
  • The method being studied is almost exclusively the lecture method
  • The outcomes are assessed almost exclusively in the form of test results
  • Although many are 'controlled' studies, most are not actually controlled for "potential confounders"

This is what is being counted as "evidence" for "tele-learning educational outcomes." No actual scientific study would accept such 'evidence' for any conclusion, however tentative. But this is typical and normal in the academic world Siemens is attempting to join, and this is by his own words what constitutes "research and evidence."

Why is this evidence bad? The sample sizes are too small for quantitative results (and the studies themselves are too inconsistent for the results simply to be summed). The sample is biased in favour of people who have already had success in traditional lecture-based courses, and covers only that one teaching method. A very narrow definition of 'outcomes' is employed. And other unknown factors may have contaminated the results. And all these criticisms apply even on the assumption that this is the appropriate sort of study to measure educational effectiveness, an assumption I do not share.
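To make the sample-size point concrete, here is a minimal sketch (my own illustration, not anything from the study) of a standard power analysis in Python, assuming a hypothetical two-group comparison with 25 subjects per group - a size typical of the studies tabulated above - and the conventional thresholds of alpha = 0.05 and 80% power:

# A standard two-sample power analysis using the statsmodels library.
# The group size of 25 is hypothetical, chosen to match the small
# samples described above; alpha and power are the usual conventions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# With effect_size left unspecified, solve_power returns the smallest
# standardized effect (Cohen's d) detectable under these conditions.
d = analysis.solve_power(nobs1=25, alpha=0.05, power=0.8,
                         alternative='two-sided')

print(f"Minimum detectable effect size: d = {d:.2f}")  # roughly 0.81

An effect of d = 0.8 is 'large' by Cohen's benchmarks; anything smaller simply cannot be distinguished from noise at this sample size, no matter how carefully the study is run.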

I said above it was a con game. It is. None of these studies is academically rigorous. They are conducted by individual professors running experiments on their own (or sometimes a colleague's) classes. The studies are conducted by people without a background in education, subject to no observational constraints, employing a theory of learning that has been outdated and obsolete for decades. These people have no business pretending that what they are doing is 'research'. They are playing at being researchers, because once you're in the system, you are rewarded for running these studies and publishing the results in journals specifically designed for this purpose.

What it reminds me of is the sub-prime mortgage crisis. What happened is that banks earned profits by advancing bad loans to people who could not afford to pay them. The value of these mortgages was sliced into what were called 'tranches' (which is French for 'slice', if you ever wondered) and sold as packages - so they went from primary sources to secondary sources. These then were formed into additional tranches and sold on the international market. From secondary to tertiary. By this time they were being offered by respectable financial institutions and the people buying them had no idea how poorly supported they were. (I'm not the first to make this comparison.)

Not surprisingly, the report produces trivial and misleading results, science that is roughly equal in value to the studies that went into it. Let's again focus on the first chapter. Here are some of the observations and discussions:

it seems likely that asynchronous delivery is superior to traditional classroom delivery, which in turn is more effective than synchronous distance education delivery. (p. 38)

both synchronous and asynchronous distance education have the potential to be as effective as traditional classroom instruction (or better). However, this might not be the case in the actual practice of distance education (p. 39)

all three forms of interaction produced positive effect sizes on academic performance... To foster quality interactions between students, an analysis of the role of instructional design and instructional interventions planning is essential.

In order to provide sufficient academic support, understanding stakeholder needs is a main prerequisite alongside the understanding of student attrition (p.40)


I'm not saying these are wrong so much as I am saying they are trivial. The field as a whole (or, at least, as I understand it) has advanced far beyond talking in such unspecific generalities as 'asynchronous', 'interaction' and 'support'. Because the studies themselves are scientifically empty, no useful conclusions can be drawn from the metastudy, and the tertiary study produces vague statements that are worse than useless (worse, because they are actually pretending to be new and valuable, to be counted as "research and evidence" against the real research being performed outside academia).

Here is the 'model' of the field produced by the first paper; the diagram itself can be viewed in the original study.

It's actually more detailed than the models provided in the other papers. But it is structurally and methodologically useless, and hopelessly biased in favour of the traditional model of education as practiced in the classrooms where the original studies took place. At best it could be a checklist of things to think about if you're (say) using PowerPoint slides in your classroom. But in reality, we don't know what the arrows actually mean, the 'interaction' arrows are drawn from Moore (1989), and the specific bits (e.g., "use of LMS") say nothing about whether we should or whether we shouldn't.

The fifth chapter of the book is constructed differently from the first four, being a summary of the results submitted to the MOOC Research Initiative (MRI). Here's how it is introduced:

Massive Open Online Courses (MOOCs) have captured the interest and attention of academics and the public since fall of 2011 (Pappano, 2012). The narrative driving interest in MOOCs, and more broadly calls for change in higher education, is focused on the promise of large systemic change.


The unfortunate grammar obscures the meaning, but aside from the citation of that noted academic, Laura Pappano of the New York Times, the statements are generally false. Remember, academics were studying MOOCs prior to 2011. And the interest of academics (as opposed to hucksters and journalists) was focused not on 'the promise of large systemic change' so much as on investigating the employment of connectivist theory in practice. But of course, this introduction is not talking about cMOOCs at all, but rather the xMOOCs that were almost exclusively the focus of the study.

Indeed, it is difficult for me to reconcile the nature and intent of the MRI with what Siemens writes in his article:

What I've been grappling with lately is "how do we take back education from edtech vendors?". The jubilant rhetoric and general nonsense causes me mild rashes. I recognize that higher education is moving from an integrated end-to-end system to more of an ecosystem with numerous providers and corporate partners. We have gotten to this state on auto-pilot, not intentional vision.


Let's look at who makes up the MOOC Research Initiative to examine this degree of separation:

MOOC Research Initiative (MRI) is funded by the Bill & Melinda Gates Foundation as part of a set of investments intended to explore the potential of MOOCs to extend access to postsecondary credentials through more personalized, more affordable pathways.
To support the MOOC Research Initiative Grants, the following Steering Committee has been established to provide guidance and direction:
Yvonne Belanger, Gates Foundation
Stacey Clawson, Gates Foundation
Marti Cleveland-Innes, Athabasca University
Jillianne Code, University of Victoria
Shane Dawson, University of South Australia
Keith Devlin, Stanford University
Tom (Chuong) Do, Coursera
Phil Hill, Co-founder of MindWires Consulting and co-publisher of e-Literate blog
Ellen Junn, San Jose State University
Zack Pardos, MIT
Barbara Means, SRI International
Steven Mintz, University of Texas
Rebecca Petersen, edX
Cathy Sandeen, American Council on Education
George Siemens, Athabasca University
With a couple of exceptions, these are exactly the people and the projects that are the "edtech vendors" Siemens says he is trying to distance himself from. He has not done this; instead he has taken their money and put them on the committee selecting the papers that will be 'representative' of academic research taking place in MOOCs.

Why was this work necessary? We are told:

Much of the early research into MOOCs has been in the form of institutional reports by early MOOC projects, which offered many useful insights, but did not have the rigor — methodological and/or theoretical — expected for peer-reviewed publication in online learning and education (Belanger & Thornton, 2013; McAuley, Stewart, Siemens, & Cormier, 2010).


We already know that this is false - and it is worth noting that this study criticizing the lack of academic rigour cites a paper titled 'Bioelectricity: A Quantitative Approach' (Belanger & Thornton, 2013) and an unpublished paper from 2010 titled 'The MOOC model for digital practice' (McAuley, Stewart, Siemens, & Cormier, 2010). A lot of this paper - and this book - is like that. Despite all its pretensions of academic rigour, it cites liberally and lavishly from non-academic sources in what appears mostly to be an effort to establish its own relevance and to disparage the work that came before.

I commented on this paper in my OLDaily post:

The most influential thinker in the field, according to one part of the study, is L. Pappano (see the chart, p. 181). Who is this, you ask? The author of the New York Times article in 2012, 'The Year of the MOOC'. Influential and important contributors like David Wiley, Rory McGreal, Jim Groom, Gilbert Paquette, Tony Bates (and many many more)? Almost nowhere to be found.


The chart of citations collated from the papers selected by the committee for the MOOC Research Initiative can be found on p. 181 of the study; the citation frequencies from the same papers are on p. 180.

What is interesting to note in these citations is that the people Siemens considers to be 'Alt-Ac' above - Mackness, Stewart, Williams, Cormier, Kop - all appear in this list. Some others - Garrison (I assume they mean Randy Garrison, not D.D.) and Terry Anderson, notably - are well known and respected writers in the field. The research we were told several times does not exist apparently does exist. The remainder come from the xMOOC community: for example, Pritchard from EdX, Chris Piech from Stanford, and Daniel Seaton (EdX). Tranches.

But what I say about the rest of the history of academic literature in education remains true. The authors selected to be a part of the MOOC Research Initiative produced papers with only the slightest - if any - understanding of the history and context in which MOOCs developed. They do not have a background in learning technology and learning theory (except to observe that it's a good thing). The incidence of citations arises from repeated references to single papers (like this one) and not from a depth of literature in the field.

What were the conclusions of this fifth paper? Nothing more substantial than those of the first four (quoted, pp. 188-189):

  • Research needs to create with theoretical underpinnings that will explain factors related to social aspects in MOOCs
  • Novel theoretical and practical frameworks of understanding and organizing social learning in MOOCs are necessary
  • The connection with learning theory has also been recognized as another important feature of the research proposals submitted to MRI
  • The new educational context of MOOCs triggered research for novel course and curriculum design principles

This is why I said in my assessment of the paper that "the major conclusion you'll find in these research studies is that (a) research is valuable, and (b) more research is needed." These are empty conclusions, suggesting that either the authors of the original papers, or the authors summarizing the papers, had almost nothing to say.

In summary, I stand by my conclusion that the book is a muddled mess. I'm disappointed that Siemens feels the need to defend it by dismissing the work that most of his colleagues have undertaken since 2008, and by advancing this nonsense as "research and evidence."


