Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

When Is a Theory Superficial?

According to Google, I'm the first to say this: classes aren't causes.

Let me explain what I mean. This article asks what makes a theory 'deep', as opposed to superficial. The authors suggest, "superficial theories have minimal explanatory content, whereas deep theories have excess explanatory content." An explanation typically involves a causal relationship, but the authors use this formulation to allow classes of things - functions, dispositions, categories - to be cited as causes. As in, for example, "poison causes death." But the property of 'being a poison' isn't what makes a thing cause death. That's just a classification of a substance, a way of describing it. We may as well say 'skull and crossbones cause death'. So I say "classes aren't causes." For me, a 'deep' theory describes a specific mechanism by which actual interactions between one thing and another produce a specific effect. Such explanations may be generalized, but the generalization forms no part of the explanation. In my view.

Jeremy Pober, Eric Schwitzgebel, The Splintered Mind, 2025/05/02 [Direct Link]
slop capitalism

This is an interesting article, though I resist its central argument. It is, in a nutshell: "The problem with slop capitalism, in my view, is its attempt to... replace the 'jungle of functionalist rationality' which de Certeau saw in the cities of the 1980s with the desert of artificial rationality we see in the cities and social platforms of 2025." The use of the term 'slop' is suggestive, as is this image: we suppose that what we get from AI is inferior to what we had before. But I was alive in the 1980s. The 'functionalist rationality' of those times was a mess, a mix of Thatcherism, Reaganomics, total quality management, 22.5% interest rates, Bhopal, garbage, famine, corruption and war. Today is bad, but it's actually better than the 1980s were. Rather than a narrowing of the channels, what many people see in AI is an opening of the floodgates - something the functionaries may view with suspicion and fear, but that, for the rest of us, signifies hope and the possibility of change for the better.

Aidan Walker, How To Do Things With Memes, 2025/05/02 [Direct Link]
Google search’s made-up AI explanations for sayings no one ever said, explained

I first encountered the phrase on TWiT on Sunday. "Last week, the phrase 'You can't lick a badger twice' unexpectedly went viral on social media. The nonsense sentence - which was likely never uttered by a human before last week - had become the poster child for the newly discovered way Google search's AI Overviews makes up plausible-sounding explanations for made-up idioms." Ironically, the phrase 'You can't lick a badger twice' now has a meaning - but it's meta-metaphorical, meaning something like (to paraphrase) "garbage in, a workable interpretation of garbage out". Via Doug Belshaw.

Kyle Orland, Ars Technica, 2025/05/02 [Direct Link]
What Would “Good” AI Look Like?

This is an interesting question that deserves a more thorough treatment than it receives here. Some of the qualities are uncontroversial - they describe AI that is green, error-free, open source, based on consent, and so on. But what about things like governance? Anil Dash writes, "Alternative creation, ownership and governance models for AI tools that address the corporate chaos of today's big names are well past due." But that's exactly what existed before, when AI was a research exercise; as soon as there was money to be made (at least in theory), it went corporate. Why wouldn't the same stewards do the same thing again?

Anil Dash, 2025/05/02 [Direct Link]
Connectomics 2.0: Simulating the brain

It's one thing to build a complete connectome of a fly brain. It's quite another to understand how that brain works. "Even if you could incorporate every detail about the imaged neurons and their interactions with one another, the connectome would still represent a single moment in time—devoid of information about how these connections change with experience. 'To me, that is what makes a brain a brain,' says Adriane Otopalik, a group leader at Janelia who previously worked in Marder's lab as a graduate student. 'It seems odd to me to design a model that totally ignores that level of biology.'" The connectome describes the fly's knowledge, but not how it learns or acts.

Laura Dattaro, The Transmitter: Neuroscience News and Perspectives, 2025/05/02 [Direct Link]
AI Agents Are Coming to a Classroom Near You

I've discussed AI agents and related protocols (MCP, A2A) in previous posts, but unlike David Ross, I don't actually see them being applied to classroom learning. And I consider the recommendations misguided: the number one priority in the illustrated table is "invest in adaptive learning pilots", which has been the same recommendation from this crowd for decades, and has never been useful. If AI does anything, it will free students from teaching, adaptive or otherwise, and allow them to learn by creating and doing.

David Ross, Getting Smart, 2025/05/01 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: May 03, 2025 09:37 a.m.

Creative Commons License.