Stephen Downes

Knowledge, Learning, Community


Stephen Downes spent 25 years as an expert researcher at the National Research Council of Canada, specializing in new instructional media and personal learning technology. With degrees in Philosophy and a background in journalism and media, he is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, and is the author of the widely read e-learning newsletter OLDaily. He is a popular keynote speaker and has presented at conferences around the world. [More]

Support OLDaily

OLDaily has been free and open to all readers since 2001. It is a valuable and widely-used resource for educators, researchers, and learners worldwide. Please consider a monthly contribution to sustain the time and resources required to publish it every day.


Here's what's in the latest edition of OLDaily

Why product discovery matters more than ever in the age of AI

When I was a kid I built a little cabin on an old wagon in our yard. Eventually my father said it was time to take it down and give the neighbours a break. I took it down, then decided to rebuild it even better. The new wood cabin was a huge improvement, but it lasted exactly one day before being taken down. It didn't matter that I had built it better and faster; it was just the wrong thing at the wrong time. Today, now that I don't have a 'job', I've been working hard on my personal learning environment (PLE) application, CList. But is it the right thing for the right time any more? I wrestle with that question, which is why this article appealed to me, even though you can stop reading after maybe the first third (again, it's an AI article that goes on and on and on and on....). The point is good: "A team can release three AI-powered features in a single sprint. If none of them improves conversion, retention, or satisfaction, the speed was wasted. The features were built efficiently. They just were not worth building." (p.s. don't get me wrong - working with code like this is the most fun I've had in a long time, and while it would be nice if it was widely adopted, it's not really necessary). (p.p.s. I really need a better name than CList - I'm open to ideas).

Jared Molton, Udacity, 2026/05/06 [Direct Link]
Mature AI Use vs. Immature AI Use

This paper makes a useful distinction which I'll share here, so you don't have to wade through the AI-generated reams of text. It divides AI use policies into two domains: ethics and maturity. You can take it from there; the actual paper employs a naive (though commonly held) perspective on ethical frameworks as "meant to govern behavior across a community," while on the other side growth, effort and learning from feedback are taken as indicators of maturity. The useful bit in this paper is that our immediate reaction should not be to just create an ethics policy that governs allowable use. We need to look beyond what we shouldn't do, to what it's worthwhile to do. This requires a lot more thought. (p.s. a link to 'Glow and Grow', for the record).

Mike Kentz, How We Frame Machines, 2026/05/06 [Direct Link]
Literacy-slop

Read the Emily Segal post first, then this post. Belshaw argues here that "If we swap 'Digital literacy' for 'Taste' then it's a socially-negotiated relation between people, tools, practices, contexts, and communities." From which we can argue, "Literacy-slop is the credential without the community of practice; it's the qualification without the learning; the skills certificate for getting an AI agent to click through a self-paced module on digital skills. It looks like literacy, satisfying the classifier. But it's just curation without a social body." Or put another way: "There exists a whole complex of knowledge, dispositions, and social relationships that makes someone capable in various digital contexts." The labels are just the socially accepted markers of success or of capability in that context.

My view: there's a lot right here, but I don't agree with it all. Words don't have meaning on their own, sure. They only have meaning in a context. But context can be anything; it doesn't need to be a community or a society. It doesn't have to be negotiated. There is no process of 'making meaning'. Context is (literally) the network of entities a thing is embedded in; meaning is the emergent pattern in that network that is recognized by a viewer when prompted by the thing. There is no one meaning, no 'real' meaning, obviously, because there are many viewers, many ways of seeing the same things. Any negotiation that happens isn't about the actual meaning; it's about establishing and holding power in that community, a hierarchy of symbolism, just like taste.

Doug Belshaw, Open Thinkering, 2026/05/06 [Direct Link]
Tasteslop

Read this article first, then Doug Belshaw's take. This article is on the phenomenon of 'taste' (as in, "she has good taste"). The point here is "Taste is not really a property of various objects. It is a socially validated relation between objects, people, histories, scenes, and timing." In other words, you can't really have good taste unless there's an audience that sees you and affirms your good taste. OK? Next: "Tasteslop emerges when the visible signs (or 'markers') of taste are extracted from those relations and redeployed generically." It's like slapping a Gucci label on your t-shirt. The point here is that AI can recognize (via pattern matching) what counts as a sign of good taste (like a Gucci label) but not when the sign has been misapplied either via "lost meaning or what would need to replace them for things to feel legitimately fresh." AI intensifies this because it will just slap a taste marker into any old context, breaking down the whole culture and 'taste hierarchy' the taste marker belongs to.

Emily Segal, NEMESIS, 2026/05/06 [Direct Link]
‘Close to zero impact’: US study casts doubt on effect of phone ban in schools

Surely if anything will cause us to stop looking at 'test scores' as a measure of impact, this will, right? "The report concluded that among schools instituting a ban: For academic achievement, average effects on test scores are consistently close to zero." After all, "Researchers say findings are not reason to shy away from restrictions as MPs consider ban in England's schools."

Richard Adams, The Guardian, 2026/05/06 [Direct Link]
The "AI Job Apocalypse" Is a Complete Fantasy

When a tech or finance company puts the phrase 'understanding of humans' in the subhead, that's a red flag. A folk theory of 'human nature' is not a good basis for informed commentary. Neither is saying 'of course Keynes was wrong.' That doesn't invalidate the entire message here, though. Historically, when a new technology has been introduced, it has increased, not decreased, employment and wealth. But it's not a simple case of "we found new and different productive endeavors to fill our time." Would that this were the case. Historically (and this is not the a16z message), though wealth increased, scarcity persisted because that wealth was not shared, and it was never possible to survive on a 15-hour work week. To me, the assertion that AI won't eliminate jobs is not an assertion about AI (which very much could reduce our need for labour) or even an assertion about people (if it were, a16z would be saying a guaranteed income would increase social wealth) but rather an assertion that the exploitation will continue (just as it has through previous rounds of technology development).

David George, a16z, 2026/05/06 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: May 06, 2026 12:37 p.m.

Creative Commons License.