'AI sycophancy' is "the tendency of AI models to prioritize user agreement and approval over truthfulness, accuracy, or independent reasoning." The argument here is that "From 'mirroring' to offering 'confirmation bias,' sycophancy is unhealthy. It can lead to a range of bad consequences and again contribute to Artificial Ignorance: if a major factor in learning is seeing where one is wrong or has made a mistake and then working to address that error and make a correction, what happens if one is never wrong?" I can see the concern, but it seems wrong to generalize from a few simple prompts to everything AI is or will be. Why wouldn't we ask AI to respond differently when we're learning than when we're just trying to get things done?
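As a rough illustration of what "respond differently" could mean in practice, here is a minimal sketch using the OpenAI Python client; the model name, prompts, and the ask() helper are my own illustrative assumptions, not anything proposed in the article.

```python
# Minimal sketch: ask the same model to behave differently when the user is
# learning versus just trying to get a task done. Assumes the OpenAI Python
# client; prompts and model name are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPTS = {
    # Learning mode: push back, point out mistakes, don't just agree.
    "learning": (
        "You are a tutor. When the user's reasoning contains an error, "
        "say so plainly, explain why, and ask a follow-up question rather "
        "than simply agreeing or praising the answer."
    ),
    # Task mode: just get the job done with minimal commentary.
    "task": (
        "You are an assistant. Give the most direct, accurate answer "
        "possible, without unsolicited critique of the user's approach."
    ),
}

def ask(question: str, mode: str = "learning") -> str:
    """Send the same question under a mode-specific system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[mode]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Same question, two intentionally different styles of response:
# ask("I think 7 x 8 is 54 - right?", mode="learning")  # correct me and explain
# ask("I think 7 x 8 is 54 - right?", mode="task")      # correct me, but tersely
```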
Cathy N. Davidson, 2026/02/06 [Direct Link]
I'm sure the authors are right when they say (17 page PDF) "Only a small percentage of publications in prestigious AI Ethics conferences and journals discuss the ethical issues raised by AI in non-Western countries, and even fewer of those appropriately address such circumstances." I mean, in what domain has this ever been false? But I'm not sure I'll accept the word of four Cambridge scholars that "AI Ethics as a global scientific enterprise is co-producing a new techno-political order." Sure, the people involved in producing "prestigious" publications think they're the ones defining the new order, but I have the feeling they're not. The authors "argue that the global AI Ethics project fails to fulfil its promise to be universally useful by keeping global majority populations in the southern regions marginalised as 'others'." Nobody asked "the global AI Ethics project" to do this, and nobody put them in charge. They just assume (as always) that they are.
Abdullah Hasan Safir, Kerry McInerney, Alan F. Blackwell, Ramit Debnath, ACM Digital Library, 2026/02/05 [Direct Link]

This article identifies "three different approaches through which authors tend to understand and make sense of the moral concerns raised by AI: (1) a 'principle-based' approach, (2) a 'lived realities' approach and (3) a 'power structures' approach." Each has its own strengths and weaknesses, which the authors identify. The intent of the taxonomy is to consider how people address ethical questions about AI in general (and no, they are not the first to do so), and "the three approaches we identify can thus be understood as three different lenses through which authors understand and grapple with moral concerns raised by AI." Most of the article is a straightforward survey of the ethical issues as viewed through the three approaches, useful though not revolutionary.
Elizabeth Liz M. Groen, Tamar Sharon, Marcel Becker, AI and Ethics, 2026/02/05 [Direct Link]

This is a report from the BETT London show (formerly known as the 'British Educational Training and Technology Show', but we can't have meaningful names any more). The headlines from this year's BETT were about what you would expect, so no need to linger on them. In this article Natalia Kucirkova argues that what's new is "a broader cultural change: even new companies understand that evidence is no longer a 'nice to have,' but a core expectation of serious EdTech." This focus on impact is intuitively appealing, but it carries a hidden caveat: there can't be different sorts of programs leading to different outcomes any more. The measurement becomes the standard. So we see, for example, "Turkey shared that it has adopted the EduEvidence certification scheme to ensure its testbed efforts align with international criteria for what 'good' looks like." Is that what we want?
Edtech Insiders, 2026/02/05 [Direct Link]

This is a good article that really makes clear a lot of the institutional constraints on using AI. It's presented in a '10 lessons learned' format, and while some of the lessons are useful (such as "culture is infrastructure"), the greater value is found in the contrast between what we might call an 'institutional understanding' and the wide open context that is the English language as understood by an AI. In an institution, even simple words, like 'enrollment', 'course' or 'undergraduate', have a specific meaning (and indeed may mean different things in different offices), while to an AI any old off-the-shelf meaning will do. As well, there is a distinction between 'facts' and 'truth' as the institution understands them - there's an 'official' story for everything - and what the wider community may view as factual and truthful, which may include some unofficial perspectives. And there are some things for which there should be no official answer - such as 'which professor gives the best grades?' - even if there is a fact of the matter that people want to know.
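One practical reading of the vocabulary problem is that the institution's own definitions have to travel with the question. Here is a minimal sketch, again assuming the OpenAI Python client; the glossary entries, the OFF_LIMITS list and the answer() helper are hypothetical examples of mine, not anything the article specifies.

```python
# Minimal sketch: constrain an assistant to the institution's own definitions
# of ambiguous terms instead of generic, off-the-shelf meanings. The glossary
# entries below are invented examples, not any real institution's policy.
from openai import OpenAI

client = OpenAI()

INSTITUTIONAL_GLOSSARY = {
    "enrollment": "A student is 'enrolled' only after fees are paid and the "
                  "add/drop deadline has passed; registration alone does not count.",
    "course": "A 'course' is a credit-bearing offering with a catalogue number; "
              "workshops and non-credit modules are not courses.",
    "undergraduate": "Any student admitted to a bachelor's program, including "
                     "part-time and visiting students.",
}

# Questions the institution has decided should have no official answer.
OFF_LIMITS = ["which professor gives the best grades"]

def answer(question: str) -> str:
    """Answer using institutional definitions, refusing off-limits questions."""
    if any(topic in question.lower() for topic in OFF_LIMITS):
        return "The institution does not provide an official answer to this question."
    glossary_text = "\n".join(f"- {term}: {definition}"
                              for term, definition in INSTITUTIONAL_GLOSSARY.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
                "Use only these institutional definitions when interpreting terms:\n"
                + glossary_text},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```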
Bharathwaj Vijayakumar, Samyukta Alapati, EDUCAUSE Review, 2026/02/05 [Direct Link]

The authors' idea of public engagement is positively quaint: "As for where we fall on the issue, both authors of this piece have written letters to the editor throughout our careers." I wonder what they would say about activist engagement working directly with the community? Still, the discussion is one essential for any academic; this one is framed in the divergent views of Max Weber, who "argued that sharing one's opinion flies in the face of scientific neutrality," and Pierre Bourdieu, who argued that "possessing knowledge without sharing it is akin to deciding not to help a person in peril." Here's another recent discussion of the same issue from Boston College and an older one from Australia.
Stéphane Allaire, Frédéric Deschenaux, University Affairs, 2026/02/05 [Direct Link]