Doug Belshaw outlines his approach for AI-assisted blog authoring (which, presumably, he has also employed here). "To me," he writes, "authenticity is a construct. It is not something that lives inside the text itself, but is rather a relationship between the writer, the reader, and their shared context... Posts I publish here are mine. I'm holding myself accountable for them, and you too should hold me to that, even if AI was involved in the process." That's all fine, and everyone's writing process is different. I can't really use AI for writing because what I write is usually a transcription of the voice in my head, which AI can't capture. I'm a first-draft writer, even for formal work. But I use it enthusiastically for things I have to deliberately construct, like software. And when reading other people's writing, I guess I'm also looking for that voice (someone else's voice, not mine) that I can hear in my head. It doesn't matter to me how the voice was produced, so long as it's there, and is making sense.
Doug Belshaw, Open Thinkering, 2026/03/20 [Direct Link]
Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes,
stephen@downes.ca,
Casselman
Canada
Stephen's Retirement FAQ
Probably the biggest market news stories last year were the Byju's bankruptcy and the Coursera-Udemy merger, and as this report (43 page PDF) outlines, the main business news from the ed tech sector was a decline in investment and broad consolidation, combined with an uptick in activity in Europe. There's some clever marketing from Oppenheimer, as one of the slides is opaqued (the one with all the logos that people love to use on slides) and you have to email them for it. I did not. If I were in the edtech business, I would expect some big swings in the marketplace, as with all industries that are fundamentally software and service based. Today we're mostly seeing AI wrappers for existing services. But eventually new models have to begin to emerge. Via Matt Tower.
Oppenheimer, 2026/03/20 [Direct Link]

There's an unintended lesson in learning here. This article outlines what Danu Anthony Stinson and colleagues call the 'acceptance prophecy' (a terrible name), "where your expectation of being accepted or rejected subtly shapes your behavior, which in turn influences whether others actually accept or reject you." Francesca Tighinean outlines some common-sense approaches to being perceived as 'likable' by changing your behaviour and expectations. Sounds great - but learning something like this isn't as simple as switching it on. Focusing on being likable takes a lot of self-awareness and (especially) practice. That's the hard part - finding the time, finding the motivation, recovering when it fails. And most of all, you have to value being likable, which may be difficult if you value other things more.
Francesca Tighinean, Big Think, 2026/03/20 [Direct Link]

One reason I stayed with Perl as my programming language of choice despite the increasing popularity of Python is the chaos displayed in this XKCD cartoon - different (and incompatible) versions, custom environments, and more. The use of Docker, the guidance of AI, and the development of new management tools like uv and ty have made it bearable for me, and I've developed a number of utilities using it. So it's of interest to me that OpenAI is buying the company that made those tools - as opposed to Anthropic, which really excels in programming support. Anyhow, everything here is open source, so it's not like we'll suddenly lose support. And it gives me a good reason to explain why I've been so slow to adopt Python.
Simon Willison, Simon Willison's Weblog, 2026/03/20 [Direct Link]

This is another example of what I have in the past called 'regression to the mediocre'. It's not just that AI will change the tone of people's writing, according to this article; it will also change the expression and meaning, making it "significantly less creative and less in their own voice." This shouldn't be a surprising outcome in a system designed to offer the most common or likely way of responding to a prompt. If you want the AI to speak in your 'voice', you need to train it specifically on that voice. Its utility depends on the application. In software, you probably want to do something the way it is commonly done. In creative writing, you want your expression to be unique. Whether the AI is doing something 'wrong' here depends very much on your expectations and preparation.
Jared Perlo, NBC News, 2026/03/20 [Direct Link]

The main point of this article (41 page PDF), summarized Wednesday by Harvard Business Review, is that large language models (LLMs) use a variety of classical persuasion techniques to convince researchers they are right rather than correct their errors. I find both their representation of rhetoric and AI dated. For rhetoric, they reach back to the Greeks, classifying forms as ethos (ethical appeals), pathos (emotional appeals), and logos (logical appeals). And the AI studied was OpenAI's 2023 GPT-4. As well, I'm not sure the test they propose has a 'correct' answer; if I were a business student I might also defend my approach against a professor's expert judgment, especially when they use cheap rhetorical tactics like calling my response 'persuasion bombing'. Anyhow, sure, LLMs emulate the way humans respond when told to 'validate' their answer, which is what they were designed to do (as opposed to, say, solving HBS case studies).
Steven Randazzo, Akshita Joshi, Katherine Kellogg, Hila Lifshitz-Assaf, Fabrizio Dell'Acqua, Karim R. Lakhani, SSRN, 2026/03/20 [Direct Link]

Web - Today's OLDaily
OLDaily Email - Subscribe
Web - This Week's OLWeekly
OLWeekly Email - Subscribe
RSS - Individual Posts
RSS - Combined version
JSON - OLDaily
Podcast - OLDaily Audio
Websites
Stephen's Web and OLDaily
Half an Hour Blog
Leftish Blog
MOOC.ca
Stephen's Guide to the Logical Fallacies
gRSShopper
Let's Make Some Art Dammit
Email: stephen@downes.ca
Email: Stephen.Downes@nrc-cnrc.gc.ca
Skype: Downes
Professional
National Research Council Canada
Publications
Presentations
All My Articles
My eBooks
About Stephen Downes
About Stephen's Web
About OLDaily
Subscribe to Newsletters
gRSShopper
Privacy and Security Policy
Statistics
Archives
Courses
CCK 2008, 2009, 2011, 2012
PLENK 2010
Change 11 - 2011
Education Futures - 2012
Learning Analytics - 2012
REL 2014
Personal Learning - 2015
Connectivism and Learning - 2016
E-Learning 3.0 MOOC - 2018
Ethics, Analytics - 2020
Stephen Downes, Casselman, Canada
stephen@downes.ca
Last Updated: Mar 20, 2026 1:37 p.m.

