Stephen Downes

Knowledge, Learning, Community


Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

The Questions We Ask About AI in Education

This is an early version of an article I wrote that is intended to appear elsewhere; the two versions turned out quite different, and I like this one, so I thought I would post it here. The main intent is to move the discussion toward better questions about AI in education than the ones we have been asking to this point. "To begin at the beginning: what is it we are even trying to do in education? Why exactly do we want to impart students with new knowledge and new skills?" Much of the discussion of AI assumes there is consensus on this point, but there generally isn't, and this colours a lot of our perspectives. But maybe it shouldn't: "What we are not asking, though, is whether we will need to do any of these things in the future at all."

Stephen Downes, 2026/02/06 [Direct Link]
Why higher ed can't ignore Reddit

Sure, Reddit is part of my own media diet, and has been for a number of years now. But Reddit is its own place, and it's important, first, not to generalize about Reddit (each of its discussion topics, or 'subs', is a distinct entity, with its own (often fickle) moderators and sense of community), and second, to take anything you read on Reddit with a large dose of scepticism (there's a lot of cheerleading, brigading, and influencing going on). Depending on Reddit is like depending on a really unrepresentative and undersized survey: it might tell you something exists or is a possibility, but that's the extent of its predictive or diagnostic power. Remember, on Reddit you're talking to individuals, some of whom might even be real, not to communities.

Liz Gross, Campus Sonar, 2026/02/06 [Direct Link]
Same old tired narrative: "Classes were built for the 1900s"

Finding arguments to criticize on LinkedIn is like shooting fish in a barrel (except maybe that the fish on LinkedIn want to be shot; any exposure is good exposure). Still, I can be a little bit sympathetic with the criticism as presented here, because it is (a) one we've been hearing for the last 20 years, and (b) one that points to a real problem, albeit a problem that is outside the means of edtech or instructional design to correct. As Apostolos Koutropoulos says at one point, "You know what hasn't changed? The operating environment we work in. Organizations want click-and-submit kind of eLearning - for better or for worse. This is mostly for compliance." I mean, for the most part taking 'courses' doesn't really make sense any more, especially in a work context. But organizations aren't clamouring for a better way to deploy learning (unless it's to train AI models... but I digress). That's a wider problem, and it isn't solved simply by pointing to the 'right' way to do it.

Apostolos Koutropoulos, Multilitteratus Incognitus, 2026/02/06 [Direct Link]
CC at the AI Impact Summit: Core Interventions for the Public Interest - Creative Commons

Most of us will have no voice at the AI Impact Summit in Delhi, so we need to be careful about how we are represented by those who would speak for us. For many in the open learning community, Creative Commons takes on that role. This concerns me, because in my own case we have our differences. A case in point is the proposed system of "preferences to communicate how data holders wish their data to be used in AI," which "is at its core a data governance mechanism." I know the words sound great, but the plan "to equip creators and data-holding communities with legible, scalable forms of agency" represents a shift from promoting openness to promoting greater means of control. And when they write "data governance is about making decisions, about choice," I don't agree. There's a vast difference between picking from predefined options and forging one's own path. At this and other summits Creative Commons should be clear that its underlying interest isn't in representing openness, but in advocating for ownership.

Annemarie Eayrs, Creative Commons, 2026/02/06 [Direct Link]
"Artificial Ignorance" and Data Sycophancy

'AI sycophancy' is "the tendency of AI models to prioritize user agreement and approval over truthfulness, accuracy, or independent reasoning." The argument here is that "From 'mirroring' to offering 'confirmation bias,' sycophancy is unhealthy. It can lead to a range of bad consequences and again contribute to Artificial Ignorance: if a major factor in learning is seeing where one is wrong or has made a mistake and then working to address that error and make a correction, what happens if one is never wrong?" I can see the concern, but it seems wrong to generalize from a few simple prompts to everything AI is or will be. Why wouldn't we ask AI to respond differently when we're learning than when we're just trying to get things done?

Cathy N. Davidson, 2026/02/06 [Direct Link]
Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production

I'm sure the authors are right when they say (17 page PDF) "Only a small percentage of publications in prestigious AI Ethics conferences and journals discuss the ethical issues raised by AI in non-Western countries, and even fewer of those appropriately address such circumstances." I mean, in what domain has this ever been false? But I'm not sure I'll accept the word of four Cambridge scholars that "AI Ethics as a global scientific enterprise is co-producing a new techno-political order." Sure, the people involved in producing "prestigious" publications think they're the ones defining the new order, but I have the feeling they're not. The authors "argue that the global AI Ethics project fails to fulfil its promise to be universally useful by keeping global majority populations in the southern regions marginalised as 'others'." Nobody asked "the global AI Ethics project" to do this, and nobody put them in charge. They just assume (as always) that they are.

Abdullah Hasan Safir, Kerry McInerney, Alan F. Blackwell, Ramit Debnath, ACM Digital Library, 2026/02/05 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Feb 08, 2026 2:37 p.m.

Creative Commons License.