Stephen Downes

Knowledge, Learning, Community


Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

'Pokemon Go' Players Unknowingly Trained Delivery Robots With 30 Billion Images

The main lesson here isn't that some company tricked its users into providing data for free. It seems pretty clear that Pokémon Go players understood that the information, and especially the photos, they submitted would be used to train the AI. In a similar fashion, I am under no illusion that the photos and reviews I upload to Google Maps won't be used in the same way. Of course they will. No, the main takeaway is that we're moving from an era where all AIs were trained on text into an era where they are trained on geospatial data, photographs, and other non-text data. See also Popular Science.

Slashdot, 2026/03/17 [Direct Link]
The Trust Tax: Why Every AI Deployment in Education Fails or Succeeds on a Single Variable

I don't disagree with the main point here, though I do have an issue with defining 'trust' in any useful way. But I digress. Here's what Nik Bear Brown is arguing: what matters in AI-in-education deployment isn't what the AI is capable of doing; it's whether we can trust it. "It is calibrated trust — a state where a user's confidence in a system accurately matches the system's actual reliability." We obviously don't want students to trust it too much, but they can also trust it too little. Then people "exhibit what researchers call 'algorithmic aversion.' They disengage." And there are other problems around trust - the 'honeypot effect', where you learn to depend on a system, which then changes; the 'adversarial trap', where a system you trusted turns out to be (say) spying on you; and the 'bias problem', where a system you trust is subtly leading you astray. These are all, says Brown, pedagogical issues. Getting them wrong has consequences for learning.
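
To make the idea concrete: 'calibrated trust' can be read as a measurable gap between user confidence and system reliability. Here is a minimal Python sketch of that reading - my framing, not Brown's - assuming we could elicit a numeric confidence rating from users and measure the system's accuracy on the same tasks. The function name and the numbers are illustrative only.

    # A minimal sketch, assuming trust and reliability can be put on the
    # same 0-1 scale. My illustration, not Brown's formulation.

    def calibration_gap(user_confidence: float, system_reliability: float) -> float:
        """Positive = over-trust; negative = under-trust ('algorithmic aversion')."""
        return user_confidence - system_reliability

    # Hypothetical example: students rate their trust in an AI tutor at 0.9,
    # but the tutor answers correctly only 70% of the time.
    gap = calibration_gap(user_confidence=0.9, system_reliability=0.7)
    print(f"calibration gap: {gap:+.2f}")  # +0.20 -> over-trust

On this reading, calibration just means keeping that gap near zero in both directions.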

Nik Bear Brown, 2026/03/17 [Direct Link]
The key problem with the "brain in a vat" thought experiment

This short article uses a philosophical classic to address what might be called 'the embodiment problem'. The classic is, of course, the question: how do we know we are not brains in vats? All our sensations, all our physical experiences, could be wired up as inputs into the brain. Could we tell the difference? This article argues that we could, because it would be much too complex to simulate our experiences. "Thompson and Cosmelli conclude (18 page PDF) that to really envat a brain, you must embody it. Your vat would necessarily end up being a substitute body." Well - sure. Even the simplest version of 'brain in a vat' postulates some external mechanism standing in for the human body. That's the whole point. But the question is more subtle: is it the case that there can be one and only one possible cause for a given set of conscious experiences? If the answer is 'yes', then our options for both ourselves and for AI are fundamentally limited. But on what grounds would you argue 'yes'? This article doesn't really offer those grounds, beyond saying it's complex. But complexity doesn't prove necessity.

Adam Frank, Big Think, 2026/03/17 [Direct Link]
Robots Didn't Kill the Internet

Carlo Iacono argues convincingly that today's 'dead internet' isn't the result of AI; it's the result of incentives. Platforms keep asking one question: what will hold attention and produce a useful signal? "That question, applied at scale and compounded over years, is what killed the internet. Not robots. Incentives." The internet has become a giant casino, he argues. Websites are engineered to keep people clicking, and they collect their cut in the form of advertising revenue. "The internet did not start rotting because robots learned to write. It started rotting when platforms became casinos. The robots are just very efficient casino staff."

Carlo Iacono, Hybrid Horizons, 2026/03/17 [Direct Link]
How we're reimagining Maps with Gemini

I've had some fun - with hilarious results - trying to use ChatGPT to plan cycling routes. So I'm not really sure how well an integration of AI and Google Maps would work. But it couldn't be worse than ChatGPT alone, because at least the map layer would impose constraints (such as: don't plan cycling routes over open water). If we understand the AI as an interpreter - accepting questions and translating them into map queries - then it might be useful. But there are so many ways this could go wrong. Can't fault them for trying, though. Related: Gemini Embedding - a "model that maps text, images, video, audio and documents into a single embedding space."
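
Here is a minimal Python sketch of the interpreter pattern I have in mind. Everything in it is hypothetical - the MapQuery structure, the interpret() and plan_route() functions - and in a real system an LLM would do the translation step. The point is only that the model translates the question while the map layer enforces the real-world constraints.

    from dataclasses import dataclass

    @dataclass
    class MapQuery:
        origin: str
        destination: str
        mode: str  # constrained to modes the map layer actually supports

    ALLOWED_MODES = {"cycling", "walking", "driving"}

    def interpret(question: str) -> MapQuery:
        # Stand-in for the LLM translation step; hard-coded here to keep
        # the sketch self-contained and runnable.
        return MapQuery(origin="Casselman", destination="Ottawa", mode="cycling")

    def plan_route(query: MapQuery) -> str:
        # The map layer, not the model, validates the query against
        # real-world constraints (e.g., no cycling routes over open water).
        if query.mode not in ALLOWED_MODES:
            raise ValueError(f"unsupported mode: {query.mode}")
        return f"{query.mode} route from {query.origin} to {query.destination}"

    print(plan_route(interpret("How do I bike from Casselman to Ottawa?")))

The design point is that the model never emits a route directly; it only emits a structured query the map layer is free to refuse.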

Miriam Daniel, Google, 2026/03/18 [Direct Link]
Who Owns AI-Generated Content?

"The legal trajectory of AI-generated content presents a pivotal opportunity for open education, directly addressing the twin problems of legal uncertainty and eroded trust outlined at the outset," writes Rory McGreal. First, AI-generated content is automatically open content. "The clear consensus that purely AI-generated works are not copyrightable and belong to the public domain provides a stable legal foundation. Educators can use such content without fear of copyright infringement, licensing fees, or complex attribution chains. This demystifies a major part of the 'minefield,' transforming the 'what if' from a source of dread into a clear guideline: autonomous GenAI can be used to create OER lessons." That doesn't mean 'anything goes'. "The academic community must uphold principles of authorship, accountability, and transparency. Using public domain AI content does not absolve educators of the need for due diligence, citation of specific sources, or ethical disclosure of AI assistance in human-AI collaborations."

Rory McGreal, unitwin-unoe, 2026/03/16 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Mar 18, 2026 1:37 p.m.

Creative Commons License.