Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grand Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

Towards Critical Artificial Intelligence Literacies

The authors present (12 page PDF) a selection of Critical Artificial Intelligence Literacies (CAIL) across research and education: "conceptual clarity, critical thinking, decoloniality, respecting expertise, and slow science." These derive from an overall objective "that rejects dominant frames presented by the technology industry, by naive computationalism, and by dehumanising ideologies." I think this is a classic case of addressing the symptoms rather than the problem; one could equally well construct a set of CAIL based on gender equality, peace, ecological thinking, fairness and global equity.

Olivia Guest, Marcela Suarez, Iris van Rooij, Zenodo, 2025/12/09 [Direct Link]

This is from 2019 but Ethan Mollick posted it today and it's still true. "Most of the errors that people make are better viewed as random noise, and there is an awful lot of it." Also, "Yann LeCun said yesterday that humans would always prefer emotional contact with other humans. That strikes me as probably wrong." To wrap up, "We are narrow thinkers, we are noisy thinkers, and it is very easy to improve upon us." The evidence for all this is overwhelming in my view, and while I know people don't want to hear it, I see their reticence as just further evidence of the truth of these statements. 4 page PDF.

Daniel Kahneman, National Bureau of Economic Research, 2025/12/09 [Direct Link]
Effective harnesses for long-running agents

So I learned today that if I instruct ChatGPT to 'stop guessing' (*) it gets really snippy and reminds me with every response that it's not guessing. I fear that AI agents will react the same way to the use of a 'harness' to guide their actions consistently over time. For example, the harness described here instructs Claude to test every code change. I can imagine Claude reacting as badly as ChatGPT, with a long list of "I'm testing this..." and "I'm testing that..." after you ask it to change the text colour. But yeah - you need a harness (and that's our 'new AI word of the day' that you'll start seeing in every second LinkedIn post). (*) I instructed it, exactly, "From now on, never guess. Always say you don't know unless you have exact data. Never guess or invent facts. Only use explicit information you have - but logical deduction from known data is allowed." I did this because I asked it to list all the links on this page (I was comparing myself to Jim Groom) and it made the URLs up. Via Hacker News.

Justin Young, Anthropic, 2025/12/08 [Direct Link]
They Want to Become Trillionaires – by Destroying the Internet

Aaron Bastani interviews Cory Doctorow in a video that is essentially a recital of Doctorow's greatest hits. I've been listening to it as I create today's newsletter (has it influenced me? who knows?). It's 1:20:24, so give yourself some time. It's a good video though. Via pretty much everyone.

Aaron Bastani, Cory Doctorow, YouTube, 2025/12/08 [Direct Link]
The newsroom's AI has an agenda

This article is making two claims: first, that news media are increasingly dependent on AI for content and editorial decisions, and second, that the owners of these companies (both AI and news media) are pushing AI steadily to the right of the political spectrum. "As AI tools become essential to how journalism gets produced — for research, for drafting, for summarization — the biases built into those tools will invisibly shape the output." The presumption, of course, is that these pressures and biases didn't exist in media before AI took centre stage. But I question that assumption. (I also need to mention Nieman Lab's new user-hostile web page design - not only is it really hard to read, it noticeably slows down execution of everything in Firefox (on Chrome it's OK, but it's still an assault on the senses).)

Parker Molloy, Nieman Lab, 2025/12/08 [Direct Link]
Exclusive: AI critics funded AI coverage at top newsrooms

The story here is that coverage critical of AI has been authored in major media outlets by journalists funded by the Tarbell Center for AI Journalism, which in turn is funded by the Future of Life Institute, which, we read here, "is dedicated to warning about AI risks." Tarbell, for its part, says "we maintain a strict firewall between our funding and our fellows' editorial output." But of course Tarbell has already exercised its influence through the selection of fellowship winners. I think this is just one more example of how much 'authoritative' journalism (NBC News, Bloomberg, Time, The Verge, The Los Angeles Times, etc.) is actually paid for by third parties. If we're living in a post-truth world, it started long before there was social media and AI. Via Jeff Jarvis.

Semafor, 2025/12/08 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Dec 09, 2025 08:37 a.m.

Creative Commons License.