Minas Karamanis depicts a scenario where one PhD graduate does all her work by hand while the other uses AI. On paper, both had exactly the same outcome, though arguably, one understood where the other did not. "The real threat," writes Karamanis, "is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can't sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does." This is not a new problem. There are different levels of understanding. AI gives us access to tools without requiring the deepest level of understanding. That doesn't mean the understanding disappears. Via Tom Czarniecki.
Minas Karamanis, Ergosphere, 2026/04/06 [Direct Link]
Stephen Downes spent 25 years as an expert researcher at the National Research Council of Canada, specializing in new instructional media and personal learning technology. With degrees in Philosophy and a background in journalism and media, he is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, and is the author of the widely read e-learning newsletter OLDaily. He is a popular keynote speaker and has presented at conferences around the world.
Emotion plays an important role in learning and cognition, so it's not surprising to see it play a similar role in AI. That, at least, is what researchers at Anthropic have found. Now, these are not the same as human emotions, but it takes a bit to explain why. AI systems act like characters that speak a language and develop rich internal representations of the abstract concepts underlying that language, including emotions. "These correspond to specific patterns of artificial 'neurons' which activate in situations - and promote behaviors - that the model has learned to associate with the concept of a particular emotion (e.g., 'happy' or 'afraid')". Nothing in this suggests that an AI feels things the way a human would, but AI's representation of an emotion plays the same functional role as its counterpart in humans. The full report considers three case studies: blackmail, reward hacking, and sycophancy and harshness.
Anthropic, 2026/04/06 [Direct Link]

As readers probably know, the Australian government's ban on social media for children took effect last December. It's probably much too early to draw conclusions, but this article reports that as of this week, "around seven in 10 children remain on major platforms." Writes Samantha Floreani, "If policymakers wish to genuinely reduce harm to young people online, they must take seriously the task of challenging models based on behavioural advertising, profiling and problematic algorithm-driven feeds." Image: Canberra Times.
Samantha Floreani, The Guardian, 2026/04/06 [Direct Link]

I really don't want to toss this grist into the mill, but here it is anyway, because whether or not this is a real site with real content, the concept is out there, and that's all that matters: "Smart glasses are a real technology with genuine consumer use cases: translation, navigation, teleprompter, memory capture, hands-free communication. The fact that they can be used to cheat is a side effect the industry neither planned for nor celebrates. But the fact remains: the line between 'smart device' and 'cheating device' is today defined entirely by the user's intention." Via Subconscious Proxy.
S. Popovich, abit.ee, 2026/04/06 [Direct Link]

As I've often said, education researchers love a taxonomy, and this is another one, so I may as well pass it along. The five roles are: the critic, the verifier, the interlocutor, the editor, and the architect. "They are positions on a spectrum that runs from critical distance to collaborative integration, and the student's movement across them is shaped by context, by the demands of the task, and by the developing expertise the student brings to it."
Nick Potkalitsky, Educating AI, 2026/04/06 [Direct Link]

This is a pretty good look at the design of a coding agent - that is, a mechanism that uses an AI model to create new software. It's clearly written and well illustrated. It's interesting because it essentially describes a coding task model - that is, what people are actually doing when they write software. I picked this up via Benjamin Riley, who writes a critical summary of the AI coding process outlined here, calling it an ouroboros, a snake that eats its own tail. "AI writes the code. AI reviews the code. AI checks the deployment. When it breaks, the answer is more AI. The loop has no exit condition... The company that sells AI coding tools cannot build a quality product with its own AI coding tools." With last week's release of Claude Code's harness, there has been a wave of similar posts. It's clear that Claude Code, at least, does not use its own code as input; there's a specific constraint against it, and the ouroboros problem is well understood and accounted for.
Sebastian Raschka, Ahead of AI, 2026/04/06 [Direct Link]
Last Updated: Apr 07, 2026 1:37 p.m.


