[Home] [Top] [Archives] [About] [Options]

OLWeekly

They have to be able to talk about us without us
Anil Dash, 2025/12/05



So I think everything Anil Dash says here is right, but it's wrong. What's right? Like I said, everything: he's describing how to create a message that will reach everyone it needs to reach, with enough fidelity that they can understand it and act on it. This means (among other things) that the people who receive the message will have to be able to talk about you without you. You know - the way I'm talking about Anil Dash right now. He has no input; it's all on me, but it's his message being spread. So where is it wrong? Well - if everybody does this, we'll just drown in clearly communicated messages that everyone can understand. The very idea of sending a message to 'everybody' doesn't scale. Despite what Dash says, if enough people are doing it, it is impossible to spread a message at scale without significant resources. That's why even today, with a global communications system anyone can use, we're still drowning in corporate slop and advertising.

Web: [Direct Link] [This Post][Share]


AI is Destroying the University and Learning Itself
Ronald Purser, Current Affairs, 2025/12/04



I've written articles like this - the ones where I take a pile of notes (like, say, these OLDaily posts) from the last year or so, organize them into themes, and then build a narrative around them. I peaked in the form around 12 years ago in London, at Greenwich and the LSE. This article is like that, documenting the many, many ways AI is going to kill the university or, in the words of Tyler Cowen, ensure it "will persist as a dating service, a way of leaving the house, and a chance to party and go see some football games." The sheer volume of notes is evidence of a community roiling. But this is a community based on rigid hierarchy and protocol, that exploits a large percentage of its work force, that denies access to the majority of society, and that fails a third of those who enter. I'm not going to say that AI is the answer to all things, but it's directly impacting things that have needed attention for as long as I have been active in the sector.

Web: [Direct Link] [This Post][Share]


What does it mean to understand language?
Colton Casto, Anna Ivanova, Evelina Fedorenko, Nancy Kanwisher, arXiv, 2025/12/04



I have mixed feelings about this paper (17 page PDF). On the one hand, the turning point for my efforts to learn French came when I began to apply frames to what I was trying to say (so I could decide things like tense and gender once and then not worry about them). On the other hand, I think of fMRI as a modern form of phrenology. What I can derive from this is the idea that language learning isn't just about language. As the authors put it, "a deep understanding of language... requires the exportation of information from the brain's core language system to other cognitive and neural systems that can build models." It's not, they say, that the whole brain is responsible for language processing; it's just that language processing depends on (shall we say) other systems that have multiple uses.

Web: [Direct Link] [This Post][Share]


The Q, K, V Matrices
Arpit Bhayani, 2025/12/04



This is a useful reconstruction of the transformer architecture introduced in 2017, which described 'attention' and kicked off what would become the AI revolution starting in 2022. As Arpit Bhayani writes, "at the core of the attention mechanism in LLMs are three matrices: Query, Key, and Value. These matrices are how transformers actually pay attention to different parts of the input." Attention tells us which words in a sentence matter the most, and the three matrices are how the model computes that, allowing it to more accurately predict what should come next. This is why AI isn't going away; look how simple and straightforward this is.
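To make the mechanism concrete, here's a minimal sketch of single-head scaled dot-product attention in Python with NumPy. The dimensions and random weight matrices are illustrative placeholders of my own, not Bhayani's code; the point is just to show how the Q, K and V projections interact.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product attention over a token sequence X."""
    Q = X @ W_q  # queries: what each token is looking for
    K = X @ W_k  # keys: what each token offers to be matched against
    V = X @ W_v  # values: the content each token contributes
    d_k = K.shape[-1]
    # Each row: how much one token attends to every other token
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V  # output is a weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings, projected to 4 dims
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(attention(X, W_q, W_k, W_v).shape)  # (4, 4)
```

The softmax-weighted mix is the whole trick: each output row is a blend of the value vectors, weighted by how well that token's query matches every key. In a trained model the three weight matrices are learned, not random.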

Web: [Direct Link] [This Post][Share]


Beyond Infographics: How to Use Nano Banana to *Actually* Support Learning
Philippa Hardman, Dr Phil's Newsletter, 2025/12/04



This is quite a good discussion on how to use Nano Banana to support genuine learning activities. As Philippa Hardman points out, the software is trained using text, not images, and thus avoids many of the issues of other image-generating software, while having issues of its own (including, as I showed a few days ago, making stuff up instead of relying on the source). But these examples - things like creating metaphors or generating fill-in-the-blank images - are generally resilient to that.

Web: [Direct Link] [This Post][Share]


James Marriott calls my critique "frustratingly naive."
Carlo Iacono, Substack, 2025/12/04



I think this is well stated: "The cognitive impacts of smartphone adoption are documented... So where's the real disagreement? His diagnosis: screen culture inherently biases toward poorer quality thought. As he puts it, 'the general bias of a screen culture is towards poorer quality thought and information.' The medium itself degrades cognition. My diagnosis: we've built extractive attention economies that exploit cognitive vulnerabilities for profit, and we're blaming the victims of this extraction for their own exploitation. The problem isn't screens; it's what we've designed screens to do." Image: Kidtown Melbourne.

Web: [Direct Link] [This Post][Share]


Solving the Puzzle of Cheating in Video Games
Charles Joshua Horn, The Blackwell Philosophy and Pop Culture Series, 2025/12/03



This article discusses the definition of cheating in video games, and in particular the question of whether it is possible to cheat in a single-player game. I think the considerations also apply to education. The definition Charles Joshua Horn eventually settles on is 'to gain unfair advantage over others', which means cheating would be impossible in a single-player game. But I play No Man's Sky, where we often share pictures and videos online of things we've built or expeditions we've run, and 'glitch' builds allow some (console) but not all (PC) players to do certain things. So I can never do the great builds and hence suffer (oh! the pain) socially. As education slowly transitions from a competitive endeavour to, if you will, a single-player game, the same considerations seem to apply. Something to think about.

Web: [Direct Link] [This Post][Share]


Take Back Your Feed: A Simple Guide to Getting Started With RSS (Even If Tech Intimidates You)
Mike Taylor, 2025/12/03



If you're reading this on LinkedIn, this article serves as a simple reminder that you don't have to live that way... (You can follow OLDaily in your RSS reader with this URL).
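For the code-curious, here's a minimal sketch of what an RSS reader does behind the scenes, using Python's feedparser library. The feed URL is a placeholder; substitute the OLDaily feed URL mentioned in the post (or any feed you want to follow).

```python
import feedparser  # pip install feedparser

# Placeholder URL - replace with the feed you want to follow
FEED_URL = "https://example.com/feed.xml"

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "(untitled feed)"))
for entry in feed.entries[:5]:
    # Each entry carries a title and a link back to the original post
    print(f"- {entry.title}: {entry.link}")
```

That's all an RSS reader is, at bottom: fetch a feed, list the entries, and let you decide what to read.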

Web: [Direct Link] [This Post][Share]


The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI
Barbara Oakley, Michael Johnston, Kenzen Chen, Eulho Jung, Terrence Sejnowski, SSRN, 2025/12/03



This is a very long and very detailed article (50 page PDF) arguing the following: "we explain how underuse of the brain's declarative and procedural memory systems undermines reasoning, impedes learning, and diminishes productivity. We critique contemporary pedagogical models that downplay memorization and basic knowledge, showing how these trends erode long-term fluency and mental flexibility." Now my own reaction is one of scepticism, especially about the use of terms like 'engram' and 'manifold', and even about the distinction between procedural and declarative memory (though I asked ChatGPT about this and it said the distinction is well-founded). It should definitely be passed along because it's quite the summary of the argument, though I would be interested in what people with greater experience in actual neuroscience would have to say about it. The paper is dated last May but was referenced by Graham Attwell today.

Web: [Direct Link] [This Post][Share]


An Issue...
2025/12/02



Following from the 'future of university' posts here yesterday and today, we have this thread beginning with this post: "An issue we're seeing at all levels of university is that many students are simply refusing to do *anything*. They aren't reading the syllabus, aren't following assignment guidelines, aren't engaging with material, ignoring deadlines. And this might seem like old news, but it truly has ramped up." I don't know how true this is, but if true, it is definitely a sign of things changing. Via Johannes Ernst, who comments, "Read this thread by a prof and weep."

Web: [Direct Link] [This Post][Share]


The résumé is dying, and AI is holding the smoking gun
Benj Edwards, Ars Technica, 2025/12/04



This is the sort of thing digital badges were supposed to manage, but really, there was never any hope. "Due to AI, the traditional hiring process has become overwhelmed with automated noise... The flood of ChatGPT-crafted résumés and bot-submitted applications has created an arms race between job seekers and employers, with both sides deploying increasingly sophisticated AI tools in a bot-versus-bot standoff that is quickly spiraling out of control." Where does that leave my own prediction, specifically, that AI will allow employers to find potential staff through their online work? I'm not sure - I mean, it could still work, especially if we don't care whether or not they used AI to produce their work.

Web: [Direct Link] [This Post][Share]


I Went All-In on AI. The MIT Study Is Right.
Josh Anderson, The Leadership Lighthouse, 2025/12/04



So let's be clear about what was defined as 'failure' here: "I got the product launched. It worked (but) I needed to make a small change and realized I wasn't confident I could do it. My own product, built under my direction, and I'd lost confidence in my ability to modify it... Not immediate failure - that's the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you've built." But is this failure, really? Perhaps it is, by the traditional standards of software design. But why not just ask the AI to make the change? As usual, what counts as 'failure' depends on what you're trying to do. Create an application that works? AI is often successful. Become an expert software developer? Then having someone (or something) else do the work for you is a non-starter. But we knew that.

Web: [Direct Link] [This Post][Share]


5 tech predictions for 2026 and beyond, according to Amazon CTO Dr. Werner Vogels
Amazon Staff, Amazon, 2025/12/03



The content of this article is attributed to Amazon's Chief Technology Officer (CTO), which makes it relevant reading. Other than that, it is laughably bad. "Our Astro team has documented people building genuine relationships with companion robots," write the ghost writers. "Developers who thrive in this AI-augmented world must become modern polymaths." And education? Citing Khan Academy as evidence, the authors write, "for most of human history only the wealthy could afford a personal tutor. That's about to change." But somehow, teachers are not going away, which raises the question: what will they do? Become polymaths? Vogels' own version of the article on his website is much better, but still carries the same messages. "For adults, AI is a tool. For Generation Alpha, it's an extension of thinking. They've deleted 'impossible' from their operating system and replaced it with 'not yet.'"

Web: [Direct Link] [This Post][Share]


Major AI conference flooded with peer reviews written fully by AI
Miryam Naddaf, Nature, 2025/12/03



This article reports on an analysis posted online by Pangram Labs that found around 21% of next year's International Conference on Learning Representations (ICLR) "peer reviews were fully AI-generated, and more than half contained signs of AI use... Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work." It doesn't help that there were "many manuscripts that had been submitted to the conference with suspected cases of AI-generated text: 199 manuscripts (1%) were found to be fully AI-generated; 61% of submissions were mostly human-written; but 9% contained more than 50% AI-generated text." People may be quick to blame AI, which is fair, but I think we need to ask questions about the incentives at work and the ethics on display - especially as these same authors and reviewers are the ones teaching today's students. (This is one of those articles where enough content is posted online to attract reposters and search engines, but where the last six paragraphs are behind the paywall.)

Web: [Direct Link] [This Post][Share]


A Critically Informed Conversation with Terry Anderson: Visions on the Next Generation of Online and Distance Education
Aras Bozkurt, Terry Anderson, Open Praxis, 2025/12/02



There's a bit of a surprise in this interview-style article (6 page PDF) as Terry Anderson says this: "I largely gave up work on the COI model as I found it couldn't really scale beyond the small group, constructivist pedagogical paradigm within which it was built." He doesn't really see AI standing in: "teacher presence with the role modelling, care, affection and deep content knowledge will likely still be needed and will survive." At the same time, it's a mistake to simply reject AI. "Distance education has always been about access. Therefore, distance teachers and administrators will have to turn our social justice efforts towards insuring equitable and affordable access to the networks and machines of AI enhanced education."

Web: [Direct Link] [This Post][Share]


Is the future of education outside universities?
Annalee Newitz, New Scientist, 2025/12/02



This article feels a bit overly optimistic, but at the same time I am sympathetic with the core argument, which is that higher education is increasingly to be found outside academic institutions. Professors and academics are taking to the open internet, writes Annalee Newitz, in response to things like funding cuts and limitations on what they can say in class. To quote Karen Attiah, "This is not the time for media literacy or historical knowledge to be held hostage by institutions bending the knee to authoritarianism and fear." This article describes people like Attiah and Abigail Thorn, who runs Philosophy Tube. "When the public can't gain access to higher education, Hall suggested, then higher education should come to the public. And that is exactly what educators are doing now." Well, some of them are. A few. Probably not enough. (If you hit New Scientist's 'article limit', try here.)

Web: [Direct Link] [This Post][Share]


Generative AI has access to a small slice of human knowledge
Deepak Varuvel Dennison, Aeon, 2025/12/02



OK, to be clear, I don't believe that the author's father's tumour was cured by the traditional herbal concoction he was taking. My feeling is that, if this medicine really cured cancer, we'd know. But I do take the author's point that AI is missing out when it doesn't include the wide range of human languages - and therefore, human experiences - that exist. My view is essentially this, as stated by Deepak Varuvel Dennison: "I have my doubts about whether Indigenous knowledge truly works as claimed in every case. Especially when influencers and politicians invoke it superficially for likes, views or to exploit identity politics, generating misinformation without sincere enquiry. However, I'm equally wary of letting it disappear." Related: A team of African researchers has unveiled African Next Voices, the largest known AI-ready dataset for African languages.

Web: [Direct Link] [This Post][Share]


L’IA au cœur du nouvel écosystème de communication
Pierre Levy, Pierre Levy's Blog, 2025/12/03



If you don't speak French you'll want to translate this item, but it's worth the effort. The headline says "AI at the heart of the new communication ecosystem". What does that mean? Pierre Levy writes (translated by Firefox): "I invite my readers to stop considering AI models in isolation. In reality, they cannot be separated from the information ecosystem to which they belong and on which they depend. This ecosystem can be described as a three-station circuit: people, data and models," thus leading to a point at the heart of my own thinking on the subject: "Once this point is acquired, it is clear that ethical problems cannot be limited to models but that they must extend to the creation of the data that drives them, that is to say to all of our online behavior."

Web: [Direct Link] [This Post][Share]


Creating a Github-Based Webpage with AI
Miguel Guhlin, Another Think Coming, 2025/12/05



This is an article that is at once practical and food for thought. The practical: describing how AI guided the author through the step-by-step process of setting up a web page with a feedback form using GitHub and Google Sheets. The food for thought: some reflections from Carl Hendrick on what this all means: "The lesson here is not that AI has discovered a new kind of learning, but that it has finally begun to exploit the one we already understand... The answer is that teaching and learning are not the same thing, and we've spent too long pretending they are. Learning, the actual cognitive processes by which understanding is built, may indeed follow lawful patterns that can be modelled, optimised, and delivered algorithmically... But teaching, in its fullest sense, is about more than optimising cognitive mechanisms. It is about what we value, who we hope our students become, what kind of intellectual culture we create."

Web: [Direct Link] [This Post][Share]


Representational Realism and the Problem of Tacit Belief
Eric Schwitzgebel, The Splintered Mind, 2025/12/02



Does it make sense to treat our beliefs as though they were explicitly represented in the brain? Eric Schwitzgebel offers some arguments that show how this leads to confusion. It's based on what's called 'the problem of tacit belief', where a 'tacit belief' is a belief you have, but which is not explicitly represented (for example, that there are fewer than 2,342,609 beers in the fridge). Representationalists will say that these are derived from 'core beliefs', but what counts as a 'core belief'? "But the distinction has no practical importance, doesn't map onto ordinary patterns of belief attribution, and has no empirical support." Why does this matter? Think of all the (pseudo-)theories of education based on 'belief' as a distinct state of mind or brain. That's why.

Web: [Direct Link] [This Post][Share]


The end of pretend - AI and the case for universities of formation
Jim Dickinson, WonkHe, 2025/12/01



Jim Dickinson offers "a secret that would get me thrown out of the Magic Circle," specifically that "the industrial model of education was built on scarcity, and scarcity made a certain kind of pretending possible... a period of pretending that the old skills still mattered." Why? "Maybe the sorting and the signalling is the problem. The degree classification system was designed for an elite era where classification signalled that the graduate was better than other people." The existence of AI eliminates that scarcity. So "if the content delivery can be automated, the campus has to be for something else. That something else is formation." In other words, "shift from 'I have information you lack' to 'I can work with you on problems that matter.'... from 'I'm better than you' to 'I can contribute.' From pretending to becoming." Maybe it's cultivating judgement, maybe it's knowing how we know. "The hopeful answer is that universities can be places where people become more fully human."

Web: [Direct Link] [This Post][Share]


The art of reimagining universities: a vision for higher education
Rathna Ramanathan, HEPI, 2025/12/01



According to this article, the problem with universities is that they're siloed. "This made sense when knowledge was more stable, and career paths were more predictable. But today's urgent challenges don't heed disciplinary boundaries and require insights from science, policy, economics, ethics, design, and creative practice simultaneously." Maybe. But no matter how they're structured, universities need to sustain an interest in addressing today's urgent challenges if they're to remain relevant. This is reflected in what Rathna Ramanathan calls 'the deeper shift': "the principle of 'addressing shared conditions' (that) makes complicity in global crises unavoidable rather than optional, preventing justice and sustainability from being relegated to elective courses or diversity initiatives." Do read the comments as well, as they provide some necessary pushback.

Web: [Direct Link] [This Post][Share]


Looking for Root Causes is a False Path: A Conversation with David Blank-Edelman
David Blank-Edelman, Michael Stiefel, InfoQ, 2025/12/04



I watch Mayday with interest not simply because I like airplanes so much but because the investigation of why aircraft crash (and why so few of them do) teaches me a lot about how we know what we know. This article isn't about airplane crashes, it's about site reliability, but many of the conclusions are the same, for example, about why there's rarely a 'root cause' for any event, about why 'human error' is rarely the cause of any crash, and about how complex systems are, well, complex, which means there are elements of them that elude understanding entirely. As Michael Stiefel says in this interview, it's like "Rilke's famous expression, 'Living the question', because an architecture is never done. You never should think of it as done."

Web: [Direct Link] [This Post][Share]


The Testing Effect: Why Retrieval Practice is Your Most Powerful Learning Tool
Mike Taylor, 2025/12/03



The problem to my mind with an article like this is that it sets up a straw man in an attempt to bolster (political?) support for one specific practice. In this case, the straw man is the practice of 'rereading' in order to remember a body of content, as opposed to interval-based testing, a.k.a. 'retrieval practice'. I suppose testing will be the best option if (a) your only learning objective is to remember a body of content, though not necessarily to understand it, and (b) you have no practical application in which to employ the content other than testing. So why is the drill-and-kill method being proposed here? The applications tell the story: compliance training, CRM-system training, 'quick hits'. Learning where there is no utility or agency on the part of the learner.
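For what it's worth, the scheduling idea behind retrieval practice is simple enough to sketch. Here's a toy expanding-interval scheduler in Python; the doubling rule and one-day starting interval are my own illustrative assumptions, not anything specified in the article.

```python
from datetime import date, timedelta

def next_interval(interval_days: int, recalled: bool) -> int:
    """Toy expanding-interval rule: double the gap on success, reset on failure."""
    return interval_days * 2 if recalled else 1

# Simulate a learner who keeps answering correctly
interval, day = 1, date.today()
for _ in range(5):
    day += timedelta(days=interval)
    print(f"review on {day} (after a {interval}-day gap)")
    interval = next_interval(interval, recalled=True)
```

Real systems (SuperMemo, Anki and the like) use more elaborate formulas, but the principle - test at growing intervals, just before forgetting - is the same.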

Web: [Direct Link] [This Post][Share]


Goodbye, ivory tower. Hello, inclusive communities
Karim Seghir, The PIE News, 2025/12/01



I'm not sure how good an example Ajman University in the United Arab Emirates is, but to my mind they're at least saying the right things about migrating the university from an ivory tower to the community it serves. "To foster genuine openness to the world outside, Ajman University (AU) has removed the physical barriers around our perimeter... Community members will soon gain access to even more spaces at AU, including the library, the industry incubator, the innovation hub, and athletic facilities... the most significant measure of Ajman University's success will be our ability to foster social change and nurture the changemakers who will drive it forward."

Web: [Direct Link] [This Post][Share]


Massification and its Unacknowledged Trade-offs | HESA
Alex Usher, HESA, 2025/12/01



Alex Usher gives the idea of 'massification' of university access a light overview, ending with a call to reconsider what we expected to get from it. There were three major failings, he says: job mismatch, underfunding and cost, and the lock the upper classes still held on elite institutions. And, he says, "there are huge swathes of the population that deeply resent the idea that they need to spend more time in school in order to enjoy a middle-class life." I don't think we can pin the rising inequality in society on universities, though there's a case to be made that they oversold promises of social mobility. I think we should offer higher education to anyone who wants it, because the social benefits are so great, but at the same time we need to somehow decouple the need for it from the possibility of having a good life. 

Web: [Direct Link] [This Post][Share]


4 ways you can 'role play' with AI
Paul Bradshaw, Online Journalism Blog, 2025/12/02



These are techniques that are used in design or development workshops to test ideas, and they can also be used as a way to approach interacting with AI. There's nothing especially new about the idea of using role-play, except that your AI can be more flexible in shifting from role to role. Of course, "AI responses can be sycophantic and gullible so use role prompting to give them a sceptical and stricter role." The article goes into each of the four roles in some detail and would be a good starter worksheet.
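To illustrate role prompting, here's a minimal sketch of how such a request might be structured for a chat-style model. The system/user message format is the common chat-completion convention, and the wording of the sceptical role is my own invention, not Bradshaw's.

```python
def role_prompt(role_description: str, question: str) -> list:
    """Build a chat-style message list that pins the model to a role."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": question},
    ]

# A sceptical, stricter persona, as the article recommends
messages = role_prompt(
    "You are a sceptical reviewer. Challenge weak assumptions, ask for "
    "evidence, and do not simply agree with the user.",
    "Here's my course redesign plan - what's wrong with it?",
)
# Pass `messages` to whatever chat-completion client you use
print(messages)
```

Swapping the system message is all it takes to shift the AI from critic to facilitator to devil's advocate, which is what makes role-play so cheap to do with a chat model.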

Web: [Direct Link] [This Post][Share]


If You Want Skilled Workers, Stop Treating Them Like They’re Disposable
Ann Kowal Smith, Forbes, 2025/12/01



I rarely agree with Forbes but they're on point here: "Companies that want skilled workers must fund training, reward mentorship, and support education. You can't harvest what you refuse to plant... If companies want workers, they have to flip the script: stop acting like entitled beneficiaries of the education system and start behaving like investors in the future of work." There are examples, as the article points out. 

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.