Stephen Downes

Knowledge, Learning, Community


Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

A 2-Minute Video That Shouldn't Exist

This article begins with a well-produced, pedagogically sound video made by one person, "a proof of concept for what happens when the tools of professional media production become accessible to anyone with internet access and basic technical literacy." The cost of producing the video is dramatically reduced. But at the same time, the author brought a lot of expertise to the table. "Most parents generating 'Five Little Speckled [Whatever]' will have none of this. They'll make versions that intuitively 'feel right' but may inadvertently violate key pedagogical principles." Now, the sort of democratization made possible by technology in general and AI in particular is something I've often touted. But arguably, "tools democratize in theory but stratify in practice based on existing inequalities in time, knowledge, and resources."

Nik Bear Brown, The Cognitive Commons, 2026/02/09 [Direct Link]
An Illusion of Consensus: What the Government Isn't Saying About the Results of its AI Consultation

The highlight of this article is the link to all the expert reports submitted to the Canadian government (44 MB zip) informing its AI consultation. Michael Geist observes that while "the experts took their mandate seriously and provided candid, action-oriented advice," at the same time "the direct advice from the experts that identifies policy choices and their implications is consistently softened into 'government-speak' with balancing discussion that creates an illusion of consensus that isn't really there." So, business as usual. Trust and safety, in particular, was "a major concern from the public responses and the government is likely headed toward making AI governance, audits, transparency, and risk-based regulation key elements of its AI strategy," though "there is far less consensus in the expert reports... Some want to move quickly, while others warn that overly broad regulation will slow deployment, disadvantage domestic firms, and regulate technologies Canada does not control. Those disagreements largely disappear in the government's summary."

Michael Geist, 2026/02/09 [Direct Link]
"AI sycophancy" is not always harmful

Good article that touches on several themes, including the one in the title. The set-up is a misleading article from Futurism claiming "one out of every 1,500 chatbot interactions results in a psychotic break," which of course is absurd. Mike Caulfield uses this as an opportunity to demonstrate how discerning readers get to the source, which in this case is a paper showing "AI validates questionable beliefs or delusion beliefs." But even better, he questions this too. "So much of this comes down to the problem of people using LLMs as chatbots and conceptualizing the problem as if AI was a respected elder in your community offering news and advice," he says. "But it's a bad frame." I've had this experience. Sometimes the chatbot is right and sometimes you're right, but it's not always clear which is which. "You can't set a rule that the LLM will always correct a user when they are wrong because the LLM is not always right." Look at me working with ChatGPT to plan a route through Iceland. It's a constant back and forth, and step by step I find myself verifying what ChatGPT says to me. If the AI isn't willing to change based on what I say, it's going to route me from Keflavik to Akranes along a nice flat 44km path... across open water. See also: Aaron Tay on the sycophancy fallacy.
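
As a rough illustration of that verification step (a sketch of my own, not anything from Caulfield's piece, using approximate coordinates for Keflavik and Akranes), a quick great-circle calculation shows why the route should raise an eyebrow: with these rough coordinates the two towns come out about 43 km apart as the crow flies, right in line with that 44 km "path," and the line runs straight across Faxaflói bay.

# A minimal sanity-check sketch: how far apart are Keflavik and Akranes
# in a straight line? Coordinates are approximate.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two lat/lon points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km is roughly the Earth's mean radius

keflavik = (64.00, -22.56)  # approximate latitude, longitude
akranes = (64.32, -22.08)   # approximate latitude, longitude

print(round(haversine_km(*keflavik, *akranes)))  # about 43 km, straight across the bay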

Mike Caulfield, The End(s) of Argument, 2026/02/09 [Direct Link]
First Monday @ 30

First Monday will be shutting down as of its May 2026 issue. First Monday was always the journal I thought journals should aspire to be. "First Monday has never tolerated advertisements in any form on its virtual pages. First Monday is truly openly accessible at all levels and has always been so, from its beginning with its first issue released on 6 May 1996." I published my Hacking Memes article there in 1999 and had always planned to write a second for them, on critical literacies, but I never found the time, and now, I guess, I've run out of time. It's dispiriting to see the end of something so good, but there's a lot of that going around these days.

First Monday, 2026/02/09 [Direct Link]
Discord will require a face scan or ID for full access next month

The story is that "Discord announced on Monday that it's rolling out age verification on its platform globally starting next month, when it will automatically set all users' accounts to a 'teen-appropriate' experience unless they demonstrate that they're adults." The subtext (in my view) is that the web is moving to a fractured 'verification required' versus 'verification not required' architecture. We see it in this story as people migrate from 'verification required' Discord to 'verification not required' Zulip. The subtext of the subtext is Discord's (likely) upcoming IPO, which will force it into prioritizing revenue over users (Bloomberg says it has already filed). That means more than just face scans: it means advertising and targeted content.

Stevie Bonifield, The Verge, 2026/02/09 [Direct Link]
Institute for Infrastructure and Interoperable Data in Learning (I2IDL)

Shelly Blake-Plock writes, "The sudden closure of the Advanced Distributed Learning Initiative (ADL) has created a significant capability gap across the Department of Defense (DoD) as well as for commercial vendors and global government stakeholders." I'm a bit behind the curve on this one, though in fairness I can't find any news release or coverage of the closure beyond this Institute for Infrastructure and Interoperable Data in Learning (I2IDL) white paper. This came as a surprise; even the Wikipedia article about ADL is written as if it still exists. Anyhow, the point of the white paper is to assert that I2IDL "is proposed as an independent, non-governmental, non-profit organization that will advance the science of open source data systems serving learning, training, and education organizations across the globe." Count me in as willing to support this initiative as it moves forward. Via Jeevan Joshi Pamu. Related: The CIA World Factbook has also been shut down.

Institute for Infrastructure and Interoperable Data in Learning, 2026/02/09 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Feb 09, 2026 2:37 p.m.

Creative Commons License.