Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Preserving The Web Is Not The Problem. Losing It Is.

This is just a short opinion post from the director of the Wayback Machine at the Internet Archive, but the story it references is a significant development: "some major news organizations - including The Guardian, The New York Times, and Reddit - are limiting or blocking access to their content in the Internet Archive's Wayback Machine." They're doing it, of course, because they can't collect money from AI companies if those companies think they can just get the content from the Wayback Machine. Mark Graham argues that "the Wayback Machine is built for human readers. We use rate limiting, filtering, and monitoring to prevent abusive access." But this might not always be the case, so the content companies are protecting their turf. In the long term, though, that protectiveness may cause more harm than good: "significant chunks of our journalistic record and historical cultural context simply... disappear."

Mark Graham, TechDirt, 2026/02/24 [Direct Link]
How do students regulate their learning with a genAI chatbot?

Here's the set-up: "Thirty university students were tasked to read texts and write an essay within 45 minutes." Here's the pay-off: "Chatbot users achieved higher essay scores than non-users. Chatbot interaction frequencies correlated positively with high cognitive activities." How is this possible? This paper (11 page PDF) explores the question, examining the usual trade-off between cognitive offloading and pedagogically sound design. "The findings highlight the need to support students' learning regulation skills to mitigate their outsourcing of critical processes while using genAI tools." In other words, there is a difference between an AI application that will write an essay for you and an AI application that will teach you the content so you can write an essay. I think we knew that, though: it's why we discourage parents from completing their children's homework or projects. The real issue is: under what circumstances can the student be motivated to turn down the parent's (or AI's) completion of their work, even if it is freely offered?

Lyn Lim, Maria Bannert, Learning Letters, 2026/02/24 [Direct Link]
A.I. Isn't People

This article begins with the question, "How many Reddit posts does it take to learn to read?" The answer, "all of them," is intended to show the difference between human learning and AI learning, and to demonstrate that "200 lines of Python code does not understand anything." It's a bizarre supposition, to be sure. But in response I invite the reader to consider the same questions asked about humans. Do the chemicals and interactions in a human neuron 'understand' anything? If given only Reddit posts, would it not take a lot of posts to learn how to read? That's the problem with these human-AI comparisons: we assume almost-magical human abilities that in reality stem from (a) a wider range of experience from all our senses, and (b) a lot of interconnected neurons. The proposal that human understanding is fundamentally different does not follow from arguments like this. Yet people keep making them.

Rusty Foster, Today in Tabs, 2026/02/24 [Direct Link]
Unpacking the Ethics of AI in Education

I am interviewed by Geoff Cain for Episode #61 of Simon Says: Educate!, "Unpacking the Ethics of AI in Education." See also this summary and useful slide presentation from Ronald Lethcoe. "When a state board, accreditor, or institution publishes an AI ethics framework, that document is not a neutral distillation of shared human values. It reflects who was in the room, what risks they prioritized, and which political and cultural assumptions they brought with them."

Geoff Cain, Simon Says: Educate!, Spotify, 2026/02/23 [Direct Link]
What We Must Do About AI In Education

Eamon Costello writes, "The USA is AI's primary regulator and ideological driver. Its dominant cultural values will be increasingly embedded in it." I'm not sure that this is true, but let's assume it is. Costello's observation that the existing "dominant cultural values" are so toxic that they should never be allowed to infuse AI strikes me as a very good argument for not continuing to educate people the way we have in the past. Look how that turned out! I mean, how did so many people acquire the values and views that they did? This is not just a U.S. problem, either; we can look to many other societies where the national cultural values have gone wildly astray through no fault of AI (or even of educational technology in general). Where learning and development are concerned, I am personally far more concerned about advertising and mass media than I am about AI.

Eamon Costello, GenAI:N3, 2026/02/23 [Direct Link]
Perfectly Imperfect

In retrospect, it wasn't worth watching the hockey game. But the question of what it is worth doing as a writer or an academic is an interesting and complex debate. Should I, for example, have chosen to write my article by hand instead of with AI? What more would I have achieved? This is an old question. For example, some of the most prolific authors use speech instead of a typewriter, and then automated transcription, to create the article. I've tried this with my talks, but never really made it work. What does it mean to be a scholar in the age of AI? I saw an article on LinkedIn (since lost, because the algorithm bounced it out of view before I could capture the link) saying humans should always form the research question, do the literature review, analyze the evidence and draw conclusions. I wanted to ask: why these things? I don't think we have a good answer yet. We pretend it's because AI is flawed, but then we get arguments like "I typo [often] therefore I am [human]." I use spell-check in OLDaily because I used to get complaints about my spelling. Cory Doctorow reports using Ollama, an open-source LLM, as a typo-catcher. This led to a denunciation from Jürgen Geuter on the ethics of using AI, saying Doctorow "tries to make it (the criticism) look unreasonable by making it just a conversation about tech without regarding how that technology affects the world and the people in it."

Alan Levine, CogDogBlog, 2026/02/23 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Feb 24, 2026 2:37 p.m.

Creative Commons License.