
OLWeekly

The Questions We Ask About AI in Education
Stephen Downes, 2026/02/06



This is an early version of an article I wrote that is intended to appear elsewhere; the two versions are quite different, and I like this one, so I thought I would post it here. The main intent is to move the discussion toward better questions about AI in education than we have been asking to this point. "To begin at the beginning: what is it we are even trying to do in education? Why exactly do we want to impart students with new knowledge and new skills?" Much of the discussion of AI assumes there is consensus on this point, but there generally isn't, and this colours a lot of our perspectives. But maybe it shouldn't: "What we are not asking, though, is whether we will need to do any of these things in the future at all."

Web: [Direct Link] [This Post][Share]


Why higher ed can't ignore Reddit
Liz Gross, Campus Sonar, 2026/02/06



Sure, Reddit is part of my own media diet, and has been for a number of years now. But Reddit is its own place, and it's important, first, not to generalize about Reddit (each of its discussion topics, or 'subs', is a distinct entity, with its own (often fickle) moderators and sense of community), and second, to take anything you read on Reddit with a large dose of scepticism (there's a lot of cheerleading, brigading, and influencing going on). Depending on Reddit is like depending on a really unrepresentative and undersized survey - it might tell you something exists or is a possibility, but that's the extent of its predictive or diagnostic powers. Remember, on Reddit, you're talking to individuals, some of whom might even be real, not to communities.

Web: [Direct Link] [This Post][Share]


Same old tired narrative: "Classes were built for the 1900s"
Apostolos Koutropoulos, Multilitteratus Incognitus, 2026/02/06



Finding arguments to criticize on LinkedIn is like shooting fish in a barrel (except maybe that the fish on LinkedIn want to be shot; any exposure is good exposure). Still, I can be a little bit sympathetic with the criticism as presented here, because it is (a) one we've been hearing for the last 20 years, and (b) one that points to a real problem, though a problem outside the means of edtech or instructional design to correct. As Apostolos Koutropoulos says at one point, "You know what hasn't changed? The operating environment we work in. Organizations want click-and-submit kind of eLearning - for better or for worse. This is mostly for compliance." I mean, for the most part taking 'courses' doesn't really make sense any more, especially in a work context. But organizations aren't clamouring for a better way to deploy learning (unless it's to train AI models... but I digress). That's a wider problem, and not solved simply by pointing to the 'right' way to do it.

Web: [Direct Link] [This Post][Share]


CC at the AI Impact Summit: Core Interventions for the Public Interest - Creative Commons
Annemarie Eayrs, Creative Commons, 2026/02/06



Most of us will have no voice at the AI Impact Summit in Delhi, and we need to be careful how we are represented by those who would speak for us. For many in the open learning community, Creative Commons takes on that role. This concerns me, because in my own case we have our differences. A case in point can be seen in the proposal that a system of "preferences to communicate how data holders wish their data to be used in AI is at its core a data governance mechanism." I know the words sound great, but the plan "to equip creators and data-holding communities with legible, scalable forms of agency" represents a shift from promoting openness to promoting greater means of control. And when they write "data governance is about making decisions, about choice," I don't agree. There's a vast difference between picking from predefined options and forging one's own path. At this and other summits Creative Commons should be clear that its underlying interest isn't in representing openness, but in advocating for ownership.

Web: [Direct Link] [This Post][Share]


"Artificial Ignorance" and Data Sycophancy
Cathy N. Davidson, 2026/02/06



'AI sycophancy' is "the tendency of AI models to prioritize user agreement and approval over truthfulness, accuracy, or independent reasoning." The argument here is that "From 'mirroring' to offering 'confirmation bias,' sycophancy is unhealthy. It can lead to a range of bad consequences and again contribute to Artificial Ignorance: if a major factor in learning is seeing where one is wrong or has made a mistake and then working to address that error and make a correction, what happens if one is never wrong?" I can see the concern, but it seems wrong to generalize from a few simple prompts to everything AI is or will be. Why wouldn't we ask AI to respond differently when we're learning than when we're just trying to get things done?

Web: [Direct Link] [This Post][Share]


Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production
Abdullah Hasan Safir, Kerry McInerney, Alan F. Blackwell, Ramit Debnath, ACM Digital Library, 2026/02/05


I'm sure the authors are right when they say (17-page PDF) "Only a small percentage of publications in prestigious AI Ethics conferences and journals discuss the ethical issues raised by AI in non-Western countries, and even fewer of those appropriately address such circumstances." I mean, in what domain has this ever been false? But I'm not sure I'll accept the word of four Cambridge scholars that "AI Ethics as a global scientific enterprise is co-producing a new techno-political order." Sure, the people involved in producing "prestigious" publications think they're the ones defining the new order, but I have the feeling they're not. The authors "argue that the global AI Ethics project fails to fulfil its promise to be universally useful by keeping global majority populations in the southern regions marginalised as 'others'." Nobody asked "the global AI Ethics project" to do this, and nobody put them in charge. They just assume (as always) that they are.

Web: [Direct Link] [This Post][Share]


An overview of AI ethics: moral concerns through the lens of principles, lived realities and power structures
Elizabeth Liz M. Groen, Tamar Sharon, Marcel Becker, AI and Ethics, 2026/02/05


This article identifies "three different approaches through which authors tend to understand and make sense of the moral concerns raised by AI: (1) a 'principle-based' approach, (2) a 'lived realities' approach and (3) a 'power structures' approach." Each has its own strengths and weaknesses, which the authors identify. The intent of the taxonomy is to consider how people address ethical questions about AI in general (and no, they are not the first to do so), and "the three approaches we identify can thus be understood as three different lenses through which authors understand and grapple with moral concerns raised by AI." Most of the article is a straightforward survey of the ethical issues as viewed through the three approaches, useful though not revolutionary.

Web: [Direct Link] [This Post][Share]


Impact by Design: The Emerging Era of EdTech 2.0
Edtech Insiders, 2026/02/05


This is a report from the BETT London show (formerly known as the 'British Educational Training and Technology Show', but we can't have meaningful names any more). The headlines from this year's BETT were about what you would expect, so no need to linger on them. In this article Natalia Kucirkova argues that what's new is "a broader cultural change: even new companies understand that evidence is no longer a 'nice to have,' but a core expectation of serious EdTech." This focus on impact is intuitively appealing, but it carries a hidden caveat: there can't be different sorts of programs leading to different outcomes any more. The measurement becomes the standard. So we see, for example, "Turkey shared that it has adopted the EduEvidence certification scheme to ensure its testbed efforts align with international criteria for what 'good' looks like." Is that what we want?

Web: [Direct Link] [This Post][Share]


When AI Meets Data: The Promise and the Pressure of Bringing AI into Higher Education Systems
Bharathwaj Vijayakumar, Samyukta Alapati, EDUCAUSE Review, 2026/02/05


This is a good article that really makes clear a lot of the institutional constraints on using AI. It's presented in a '10 lessons learned' format, and while some of the lessons are useful (such as "culture is infrastructure"), the greater value is found in the contrast between what we might call an 'institutional understanding' and the wide open context that is the English language as understood by an AI. In an institution, even simple words like 'enrollment', 'course' or 'undergraduate' have a specific meaning (and indeed may mean different things in different offices), while to an AI any old off-the-shelf meaning will do. As well, there is a distinction between 'facts' and 'truth' as the institution understands them - there's an 'official' story for everything - and what the wider community may view as factual and truthful, which may include some unofficial perspectives. And there are some things for which there should be no official answer - such as 'which professor gives the best grades?' - even if there is a fact of the matter that people want to know.

Web: [Direct Link] [This Post][Share]


Public engagement: duty or detriment?
Stéphane Allaire, Frédéric Deschenaux, University Affairs, 2026/02/05


The authors' idea of public engagement is positively quaint: "As for where we fall on the issue, both authors of this piece have written letters to the editor throughout our careers." I wonder what they would say about activist engagement working directly with the community? Still, the discussion is one essential for any academic; this one is framed in the divergent views of Max Weber, who "argued that sharing one's opinion flies in the face of scientific neutrality," and Pierre Bourdieu, who argued "possessing knowledge without sharing it is akin to deciding not to help a person in peril." Here's another recent discussion of the same issue from Boston College and an older one from Australia.

Web: [Direct Link] [This Post][Share]


The rise of Moltbook suggests viral AI prompts may be the next big security threat
Benj Edwards, The Guardian, 2026/02/05


You remember Clawdbot, which changed its name to Moltbot when I covered it just a few days ago? Now it's called OpenClaw (you may want to bookmark this Wikipedia page in case things change again). It has grown exponentially and been found to be subject to any number of malicious attacks. It also inspired Moltbook, which is a Facebook for AI agents. Here's the Wikipedia page, again, to track this fast-moving phenomenon. Here's the best of Moltbook (from the first few days). Casey Newton gives us five ways to think of Moltbook. Security concerns. This is a pretty good overview, but it's probably AI-written. Are humans infiltrating Moltbook?

Web: [Direct Link] [This Post][Share]


Test post
Stephen Downes, Half an Hour, 2026/02/04



This is totally a test of gRSShopper

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.