Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

Unpacking the Ethics of AI in Education

I am interviewed by Geoff Cain for Episode #61 of Simon Says: Educate! Unpacking the Ethics of AI in Education. See also this summary and useful slide presentation from Ronald Lethcoe. "When a state board, accreditor, or institution publishes an AI ethics framework, that document is not a neutral distillation of shared human values. It reflects who was in the room, what risks they prioritized, and which political and cultural assumptions they brought with them."

Geoff Cain, Simon Says: Educate!, Spotify, 2026/02/23 [Direct Link]
What We Must Do About AI In Education

Eamon Costello writes, "The USA is AI's primary regulator and ideological driver. Its dominant cultural values will be increasingly embedded in it." I'm not sure that this is true, but let's assume it is. Costello's observation that the existing "dominant cultural values" are so toxic they should never be allowed to infuse AI strikes me as a very good argument against continuing to educate people the way we have in the past. Look how that turned out! I mean, how did so many people acquire the values and views that they did? This is not just a U.S. problem, either; we can look to many other societies where the national cultural values have gone wildly astray through no fault of AI (or even of educational technology in general). Where learning and development are concerned, I am personally far more concerned about advertising and mass media than I am about AI.

Eamon Costello, GenAI:N3, 2026/02/23 [Direct Link]
Perfectly Imperfect

In retrospect, it wasn't worth watching the hockey game. But the question of what it is worth doing as a writer or an academic is an interesting and complex debate. Should I, for example, have chosen to write my article by hand instead of with AI? What more would I have achieved? This is an old question. For example, some of the most prolific authors dictate rather than type, and then use automated transcription to create the article. I've tried this with my talks, but never really made it work. What does it mean to be a scholar in the age of AI? I saw an article on LinkedIn (since lost because the algorithm bounced it out of view before I could capture the link) saying humans should always form the research question, do the literature review, analyze the evidence and draw conclusions. I wanted to ask: why these things? I don't think we have a good answer yet. We pretend it's because AI is flawed, but then we get arguments like "I typo [often] therefore I am [human]." I use spell-check in OLDaily because I used to get complaints about my spelling. Cory Doctorow reports using Ollama, an open-source tool for running LLMs locally, as a typo-catcher. This led to a denouement from Jürgen Geuter on the ethics of using AI saying Doctorow "tries to make it (the criticism) look unreasonable by making it just a conversation about tech without regarding how that technology affects the world and the people in it."
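The typo-catcher workflow follows a simple pattern: send the text to a local model, get a corrected version back, and review the word-level differences rather than accepting the edits blindly. Here is a minimal sketch of that pattern, not Doctorow's actual setup; the LLM call is stubbed out with a hard-coded correction so the example runs without a model server (a real version would call a local Ollama instance with a "fix typos, change nothing else" prompt).

```python
import difflib

def llm_proofread(text: str) -> str:
    """Stand-in for a local LLM call (e.g. a request to a locally
    running Ollama model with a 'fix typos only' prompt). Hard-coded
    here so the sketch is self-contained and runnable."""
    return text.replace("poretend", "pretend")

def typo_diff(original: str) -> list[tuple[str, str]]:
    """Run the proofreader and return (old_word, new_word) pairs for
    every token the model changed, so a human can review each
    suggestion instead of accepting the rewrite wholesale."""
    corrected = llm_proofread(original)
    changes, pending = [], None
    for tok in difflib.ndiff(original.split(), corrected.split()):
        if tok.startswith("- "):        # word removed from the original
            pending = tok[2:]
        elif tok.startswith("+ ") and pending is not None:
            changes.append((pending, tok[2:]))  # paired replacement
            pending = None
    return changes
```

The point of returning a diff rather than the corrected text is that the human stays in the loop: the model proposes, the writer disposes.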

Alan Levine, CogDogBlog, 2026/02/23 [Direct Link]
Inclusivity, Ethics, and Accessibility for Learners with Disabilities

What I like about this open access book (131-page PDF) is that it offers a variety of voices from a multi-national perspective, addressing issues of ethics and accessibility from different points of view. There are a couple of technology-specific articles, including a contribution from Silvester Krčméry on transforming inclusion in the age of AI, and an article from Pankhuri Bajpai on subjective well-being in digital education. I also especially appreciated Sehrish Sachwani's article on nervous system regulation as a foundational condition for learning. "A classroom that feels lively, engaging, neutral, or stimulating to one learner may feel overwhelming, exhausting, or threatening to another." I can feel this. The book is listed on Merlot and available on Google Docs, though I found it easiest to read a downloaded version.

Munir Moosa Sadruddin, Sehrish Sachwani, MERLOT, 2026/02/24 [Direct Link]
I needed a scheduling tool that respects privacy. So I built one.

"Maybe you've been thinking 'someone should make a tool that does X,'" says Doug Belshaw. "Maybe that person is you?" At a certain point I may stop running these instances of "I built x tool using AI" but I'll keep posting so long as it remains fun. And it is fun, because it feels like that explosion of cool we saw when the web first reached a large audience in the mid to late 1990s. What we could do then never really went away, not even after the dot-com crash of the early 2000s, because the basic tools were in everyone's hands. That's also true today; AI is just math and data, and there's enough of both out there that what we're doing today won't disappear. The real issue isn't "AI yes or no". It's how we can prevent commercial interests from degrading it the way they degraded the web.

Doug Belshaw, Open Thinkering, 2026/02/23 [Direct Link]
No, you couldn't do this before an LLM because if you could it would have been done already

Another example of the sort of thing being done with the tools today. "I spent all day building a Claude Code skill to remediate handwritten math notes in Canvas courses," writes Mike Caulfield. Again, we need to be clear about how we're evaluating 'success' here. "Will it make me rich? No... I won't be rich, but we will be able to build better and more accessible systems because of this, and that is a very good thing." It is a very good thing.

Mike Caulfield, 2026/02/23 [Direct Link]


Copyright 2026
Last Updated: Feb 23, 2026 3:37 p.m.

Creative Commons License.