I Vibe-Coded An AI That Fact-Checks, Challenges, and Debates Any Article
Stefan Bauschard,
Education Disrupted,
2026/03/20
Stefan Bauschard reports, "I typed a simple prompt: 'I'd like to build an app that fact-checks articles on the web.' And Claude built it." The article describes the process in a matter-of-fact way; I found it a bit naive when talking about "when CNN or Fox News offers their predictably slanted post-mortems" and switching model APIs if you want a left- or right-biased review, but I think the main take-away here is that it is possible to do things like this. Not, 'it is possible to create applications that fact-check articles'. But rather 'people can create applications that do things they want'.
Web: [Direct Link] [This Post][Share]
My 7-step approach for authentic AI-assisted blogging
Doug Belshaw,
Open Thinkering,
2026/03/20
Doug Belshaw outlines his approach for AI-assisted blog authoring (which, presumably, he has also employed here). "To me," he writes, "authenticity is a construct. It is not something that lives inside the text itself, but is rather a relationship between the writer, the reader, and their shared context... Posts I publish here are mine. I'm holding myself accountable for them, and you too should hold me to that, even if AI was involved in the process." That's all fine, and everyone's writing process is different. I can't really use AI for writing because what I write is usually a transcription of the voice in my head, which AI can't capture. I'm a first-draft writer, even for formal work. But I use it enthusiastically for things I have to deliberately construct, like software. And when reading other people's writing, I guess I'm also looking for that voice (someone else's voice, not mine) that I can hear in my head. It doesn't matter to me how the voice was produced, so long as it's there, and is making sense.
Web: [Direct Link] [This Post][Share]
Global Learning & Education 2025 Annual Report
Oppenheimer,
2026/03/20
Probably the biggest market news stories last year were the Byju's bankruptcy and the Coursera-Udemy merger, and as this report (43 page PDF) outlines, the main business news from the ed tech sector was a decline in investment and broad consolidation, combined with an uptick in activity in Europe. There's some clever marketing from Oppenheimer: one of the slides is blurred out (the one with all the logos that people love to use on slides) and you have to email them for it. I did not. If I were in the ed tech business, I would expect some big swings in the marketplace, as with all industries that are fundamentally software and service based. Today we're mostly seeing AI wrappers for existing services. But eventually new models have to begin to emerge. Via Matt Tower.
Web: [Direct Link] [This Post][Share]
The real reason some people are instantly likable
Francesca Tighinean,
Big Think,
2026/03/20
There's an unintended lesson about learning here. This article outlines what Danu Anthony Stinson and colleagues call the 'acceptance prophecy' (terrible name), "where your expectation of being accepted or rejected subtly shapes your behavior, which in turn influences whether others actually accept or reject you." Francesca Tighinean outlines some common-sense approaches to being perceived as 'likable' by changing your behaviour and expectations. Sounds great - but learning something like this isn't as simple as switching it on. Focusing on being likable takes a lot of self-awareness and (especially) practice. That's the hard part - finding the time, finding the motivation, recovering when it fails. And most of all, you have to value being likable, which may be difficult if you value other things more.
Web: [Direct Link] [This Post][Share]
Thoughts on OpenAI acquiring Astral and uv/ruff/ty
Simon Willison,
Simon Willison's Weblog,
2026/03/20
One reason I stayed with Perl as my programming language of choice despite the increasing popularity of Python is the chaos displayed in this XKCD cartoon - different (and incompatible) versions, custom environments, and more. The use of Docker, the guidance of AI, and the development of new management tools like uv and ty have made it bearable for me, and I've developed a number of utilities using it. So it's of interest to me that OpenAI is buying the company that made those tools - as opposed to Anthropic, which really excels in programming support. Anyhow, everything here is open source, so it's not like we'll suddenly lose support. And it gives me a good reason to explain why I've been so slow to adopt Python.
Web: [Direct Link] [This Post][Share]
AI is changing the style and substance of human writing, study finds
Jared Perlo,
NBC News,
2026/03/20
This is another example of what I have in the past called 'regression to the mediocre'. It's not just that AI will change the tone of people's writing, according to this article; it will also change the expression and meaning, making it "significantly less creative and less in their own voice." This shouldn't be a surprising outcome in a system designed to offer the most common or likely way of responding to a prompt. If you want the AI to speak in your 'voice', you need to train it specifically on that voice. Its utility depends on the application. In software, you probably want to do something the way it is commonly done. In creative writing, you want your expression to be unique. Whether the AI is doing something 'wrong' here depends very much on your expectations and preparation.
Web: [Direct Link] [This Post][Share]
GenAI as a Power Persuader: How Professionals Get Persuasion Bombed When They Attempt to Validate LLMs
Steven Randazzo, Akshita Joshi, Katherine Kellogg, Hila Lifshitz-Assaf, Fabrizio Dell'Acqua, Karim R. Lakhani,
SSRN,
2026/03/20
The main point of this article (41 page PDF), summarized Wednesday by Harvard Business Review, is that large language models (LLMs) use a variety of classical persuasion techniques to convince researchers they are right rather than correct their errors. I find both their representation of rhetoric and of AI dated. For rhetoric, they reach back to the Greeks, classifying forms as ethos (ethical appeals), pathos (emotional appeals), and logos (logical appeals). And the AI studied was OpenAI's 2023 GPT-4. As well, I'm not sure the test they propose has a 'correct' answer; if I were a business student I might also defend my approach against a professor's expert judgment, especially when they use cheap rhetorical tactics like calling my response 'persuasion bombing'. Anyhow, sure, LLMs emulate the way humans respond when told to 'validate' their answer, which is what they were designed to do (as opposed to, say, solving HBS case studies).
Web: [Direct Link] [This Post][Share]
Mapping Out Claude Courses
Miguel Guhlin,
Another Think Coming,
2026/03/20
I'm including this link mostly for my own benefit, as I may want to return to this list of courses on Claude to build my own skills a bit. Miguel Guhlin writes, "Curious about Claude's offerings, I asked it to lay out the courses for me as an educator... The review would start with Round 1, then move to Round 2 to learn more stuff, depending on how much I can stretch my brain. I really have to space my learning out just to give myself time to process new ideas and concepts."
Web: [Direct Link] [This Post][Share]
How to Build Practice-Based Learning Activities with AI
Philippa Hardman,
Dr Phil's Newsletter,
2026/03/19
I personally think this is only the tip of the iceberg. Philippa Hardman shows how to use AI to create four types of simulation (paraphrased): structured roleplay, to practise difficult conversations in real time; decision simulator, to navigate complex trade-offs with compounding consequences; feedback simulator, to get perspective-specific critique on work products; and adaptive case study, to interview a character to diagnose the real problem. These are the low-hanging fruit of Management 101. Much more complex and interesting possibilities suggest themselves: flight simulations, machine operations, chemical reactions, and more.
Web: [Direct Link] [This Post][Share]
Whole-Brain Connectomic Graph Model Enables Whole-Body Locomotion Control in Fruit Fly
Zehao Jin, Yaoye Zhu, Chen Zhang, Yanan Sui,
arXiv,
2026/03/19
In 2024 the entire fruit fly connectome was mapped (a connectome is the full set of connections in a neural network). Last year, the researchers simulated the entire connectome on a computer and then ran some tests to see what would happen. The result: the fruit fly exhibited fruit fly behaviour (such as walking toward food and cleaning its antennae) without training. This illustrates that (simple) behaviour can be hard-coded into a neural network, in addition to being learned through a series of training events. I found this (13 page PDF) via Nir Diamant but he doesn't link to the original study anywhere in his article. For shame! See also: RoboHorizon, XROM, Eon (with TikTok video).
Web: [Direct Link] [This Post][Share]
Liberty and Zhi: Chinese and Anglo-American Ideas of the University
Alex Usher,
HESA,
2026/03/19
"When it comes to student development, it is not necessarily only about self-development. It is always part of a broader communal or collective development alongside the individual's own development." This is presented as the Chinese perspective in this interview with Lili Yang of the University of Hong Kong. "There has been a huge mistake in the policy pathways of recent decades in places like the UK when it comes to tuition fees. The mistake lies in overlooking - or deliberately ignoring - the fact that higher education and student development contribute not only to private returns for individuals." Back in my days as a student activist we used data showing students gained 23% of the benefit of a university education; society gained the rest. Despite this, students paid a much greater percentage of the cost - in effect, subsidizing the broader economy. I have no idea what the statistic would look like today, but I'm sure it exists, and is continuing to be ignored.
Web: [Direct Link] [This Post][Share]
Introducing Vistral: A Grammar of Graphics for Streaming Data
Leland Wilkinson,
Timeplus,
2026/03/19
This is really interesting on a number of levels. The web page introduces an open source project called Vistral, which is a TypeScript library that brings what it calls the Grammar of Graphics to streaming data. "Here's the thing about real-time data: it never stops. A traditional chart takes a complete dataset, computes scales, renders pixels, and you're done. But streaming data keeps arriving. Your axes need to shift. Old points need to expire. Aggregations need to update incrementally." You can learn more about the grammar of graphics from this presentation (which I found today via Data Science Weekly Issue 643). "It's what we've been building internally at Timeplus to power our streaming dashboards, and now it's available for every developer under open source Apache 2.0 license." You could lose yourself in this; at the very least look at the presentation.
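Vistral's actual API is in the linked docs, but the core idea in that quoted passage - aggregates that update incrementally as points arrive and expire - is language-agnostic. Here is a minimal sketch in Python (the class name and window size are my own illustration, not Vistral's interface): each new value adjusts the running total in O(1) rather than recomputing over the whole dataset, and the oldest value drops out once the window is full.

```python
from collections import deque

class StreamingMean:
    """Running mean over a fixed-size sliding window.

    Each push() adjusts the aggregate in O(1) instead of
    recomputing over the full dataset; the oldest point
    'expires' once the window is full.
    """
    def __init__(self, window: int):
        self.window = window
        self.points = deque()
        self.total = 0.0

    def push(self, value: float) -> float:
        self.points.append(value)
        self.total += value
        if len(self.points) > self.window:
            self.total -= self.points.popleft()  # expire the oldest point
        return self.total / len(self.points)

stream = StreamingMean(window=3)
print([stream.push(v) for v in [1, 2, 3, 4]])  # [1.0, 1.5, 2.0, 3.0]
```

The same incremental pattern is what lets streaming scales, axes and bins shift without redrawing from scratch.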
Web: [Direct Link] [This Post][Share]
The Best Tacit Knowledge Videos on Every Subject
Parker Conley,
LessWrong,
2026/03/19
Videos (like my own series) that seek to pass on the thinking process behind an activity are known as 'tacit knowledge videos'. This website serves as a Schelling point for some of the best examples. "Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships," writes Conley. I could think offhand of many more topics to add (bicycle maintenance is one) and I'm sure many of the best videos are left out of this list. Nonetheless, the site makes a compelling point (in my opinion) about the development and use of numerous and usually open learning resources online.
Web: [Direct Link] [This Post][Share]
Habermas, democratic discourse, and class
Lisa Herzog,
Crooked Timber,
2026/03/18
This is a reflection on the passing of Jürgen Habermas a few days ago. I am not even remotely a Habermas scholar, so I cannot judge the adequacy of this summary and critique of his thought, though assuming it is well represented, the response seems reasonable to me: a politics of discourse does favour those who were fortunate enough to learn the skills and manners required for success in this forum, and practical actions - and especially interactions - matter at least as much as words. "In other words, democracy-as-discourse, important as this idea remains, has preconditions in the wider socio-economic system of society that Habermas did, arguably, not sufficiently address."
Web: [Direct Link] [This Post][Share]
Beyond 'Blame the Student': Correspondence Bias in University Attrition Activities
Colin Beer,
Col's Weblog,
2026/03/18
I'm not going to disagree. "Universities tend to treat dropping out, failing, or disengaging as if it directly reveals what students are like (motivation, ability, resilience), rather than seeing these behaviours as products of their circumstances and the institutional environment." This is an example of correspondence bias, "a candidate for the most robust and repeatable finding in social psychology" and more properly known as Fundamental Attribution Error (FAE), explains Colin Beer. He recommends a telling paper (51 page PDF): "Factors that contribute to completion rates for RUN students are nuanced, complex and multifaceted. The issues facing RUN cohorts and regional universities will not be addressed by adopting narratives that attribute blame to either students or institutions."
Web: [Direct Link] [This Post][Share]
GenAI for Instructional Designers: "It should be the sidekick"
Stefanie Panke,
AACE,
2026/03/18
Interview with Luke Hobson, "an instructional designer, author, educator and social media influencer." It's a wide-ranging discussion that I can't really summarize (but all the usual hits are in there - AI creating more work, a few cool tools, I make my own GPTs, "ethics, student data, privacy, deepfakes", AI slop, don't ask professors to change, "it should be the sidekick"). Related: Matt Crosslin: Most People Don't Need a "GenAi for Dummies" Book Anymore.
Web: [Direct Link] [This Post][Share]
Escaping the clutches of big tech - initiative from Norway
Alastair Creelman,
The corridor of uncertainty,
2026/03/18
Alastair Creelman summarizes and comments on a European report called Breaking Free - Pathways to a fair technological future (100 page PDF) published by the Norwegian Consumer Council. Though drawn directly from the executive summary, his reading of the recommendations is a lot narrower than mine. The biggest recommendation in the actual 'Recommendations' section (first, and occupying the most space) is to "tear down the walls" and create open and interoperable digital media. It also recommends governments "fund nascent competitors and infrastructure" and "prioritise open-source technology in public procurement," and it calls for stricter merger control and for competition law to be enforced.
Web: [Direct Link] [This Post][Share]
The Purpose of Protocols
Laurens Hof,
connectedplaces.online,
2026/03/18
This is a really good article that offers a lot of room for thought. It addresses a common fault of decentralized social network protocols like ActivityPub, ATmosphere and Matrix: they are silent on how the communities that use them ought to be governed. This "did not produce a neutral outcome but a highly specific one: the concentration of power in the hands of actors... its beneficiaries were predictable: whichever actors had the resources to build in the space the protocols left ungoverned." I don't think it's that simple; different protocols leave different (and differently-sized) ungoverned spaces to be exploited, and different (and differently-sized) common pool resources to be owned. I wish he had lingered longer on Elinor Ostrom's design principles for the governance of these common-pool resources. "Ostrom's central finding was that communities can successfully govern shared resources without either privatization or central authority." We need to talk more about this. And also about what it means to say a protocol creates (or requires) common pool resources. Image: Ashley Hodgson.
Web: [Direct Link] [This Post][Share]
Jennifer Berkshire: The Collapsing Dream of Ed-Tech in the Schools
Diane Ravitch,
Diane Ravitch's blog,
2026/03/18
This is a well-written argument against what we'll simply call 'the ed tech industry'. It fits in nicely with current criticisms of the introduction of AI in schools. Mostly the narrative isn't wrong - Diane Ravitch points (with a nod to Audrey Watters) to a litany of failures, and (with a nod to Anya Kamenetz) unsavory practices, and (with other relevant nods) to relentless promotion from the companies and a corresponding decline in U.S. education test scores. And yet... what strikes me as interesting is that there is no mention of No Child Left Behind and (related) the singular transformation of the U.S. system into a centralized teach-to-the-standardized-test type of management. And while I won't ignore the fact that ed tech companies were part and parcel of this transformation, it didn't have to be that way. And, I mean, education has largely survived everywhere else in the world, even in an era of ed tech, and it remains true to this day that the biggest predictor of educational outcomes isn't ed tech, it's socio-economic status and (on a national scale) inequality. There is amnesia, as Jennifer Berkshire says, but it's a very selective amnesia. Related: Eric Sheninger, From Compliance to Competency (summarized).
Web: [Direct Link] [This Post][Share]
'Pokemon Go' Players Unknowingly Trained Delivery Robots With 30 Billion Images
Slashdot,
2026/03/17
The main lesson here isn't that some company tricked its users into providing data for free. It seems pretty clear that Pokémon Go players understood that the information, and especially the photos, they submitted would be used to train the AI. In a similar fashion, I am under no illusion that the photos and reviews I upload to Google Maps won't be used in the same way. Of course they will. No, the main takeaway is that we're moving from an era where all AIs were trained on text into an era where they are trained on geospatial data, photographs, and other non-text data. See also Popular Science.
Web: [Direct Link] [This Post][Share]
The Trust Tax: Why Every AI Deployment in Education Fails or Succeeds on a Single Variable
Nik Bear Brown,
2026/03/17
I don't disagree with the main point here, though I do have an issue with defining 'trust' in any useful way. But I digress. Here's what Nik Bear Brown is arguing: what matters in AI-in-education deployment isn't what the AI is capable of doing, it's whether we can trust it. "It is calibrated trust — a state where a user's confidence in a system accurately matches the system's actual reliability." We obviously don't want students to trust it too much, but they can also trust it too little. Then people "exhibit what researchers call 'algorithmic aversion.' They disengage." And there are other problems around trust - the 'honeypot effect', where you learn to depend on a system, which then changes; the 'adversarial trap', where a system you trusted turns out to be (say) spying on you; and the 'bias problem', where a system you trust is subtly leading you astray. These are all, says Brown, pedagogical issues. Getting them wrong has consequences for learning.
Web: [Direct Link] [This Post][Share]
The key problem with the "brain in a vat" thought experiment
Adam Frank,
Big Think,
2026/03/17
This short article uses a philosophical classic to address what might be called 'the embodiment problem'. The classic is, of course, the question, 'how do we know we are not brains in vats?' All our sensations, all our physical experiences, could be wired up as inputs into the brain. Could we tell the difference? This article argues that we could, because it would be much too complex to simulate our experiences. "Thompson and Cosmelli conclude (18 page PDF) that to really envat a brain, you must embody it. Your vat would necessarily end up being a substitute body." Well - sure. Even the simplest version of 'brain in a vat' postulates some external mechanism standing in for the human body. That's the whole point. But the question is more subtle: is it the case that there can be one and only one possible cause for a given set of conscious experiences? If the answer is 'yes', then our options for both ourselves and for AI are fundamentally limited. But on what grounds would you argue 'yes'? This article doesn't really offer those grounds, beyond saying it's complex. But complexity doesn't prove necessity.
Web: [Direct Link] [This Post][Share]
Robots Didn't Kill the Internet
Carlo Iacono,
Hybrid Horizons,
2026/03/17
Carlo Iacono argues convincingly that today's 'dead internet' isn't the result of AI, it's the result of incentives. Platforms are asking for things that hold attention and produce a useful signal. "That question, applied at scale and compounded over years, is what killed the internet. Not robots. Incentives." The internet has become a giant casino, he argues. Websites are engineered to keep people clicking, and they collect their cut in the form of advertising revenue. "The internet did not start rotting because robots learned to write. It started rotting when platforms became casinos. The robots are just very efficient casino staff."
Web: [Direct Link] [This Post][Share]
How we're reimagining Maps with Gemini
Miriam Daniel,
Google,
2026/03/18
I've had some fun - with hilarious results - trying to use ChatGPT to plan cycling routes. So I'm not really sure how well an integration of AI and Google Maps would work. But it couldn't be worse, because at least the map layer would impose constraints (such as, don't plan cycling routes over open water). If we understand the AI as an interpreter - accepting questions and translating them into map queries - then it might be useful. But there are so many ways this could go wrong. Can't fault them for trying, though. Related: Gemini Embedding - "model that maps text, images, video, audio and documents into a single embedding space."
Web: [Direct Link] [This Post][Share]
Who Owns AI-Generated Content?
Rory McGreal,
unitwin-unoe,
2026/03/16
"The legal trajectory of AI-generated content presents a pivotal opportunity for open education, directly addressing the twin problems of legal uncertainty and eroded trust outlined at the outset," writes Rory McGreal. First, AI-generated content is automatically open content. "The clear consensus that purely AI-generated works are not copyrightable and belong to the public domain provides a stable legal foundation. Educators can use such content without fear of copyright infringement, licensing fees, or complex attribution chains. This demystifies a major part of the 'minefield,' transforming the 'what if' from a source of dread into a clear guideline: autonomous GenAI can be used to create OER lessons." That doesn't mean 'anything goes'. "The academic community must uphold principles of authorship, accountability, and transparency. Using public domain AI content does not absolve educators of the need for due diligence, citation of specific sources, or ethical disclosure of AI assistance in human-AI collaborations."
Web: [Direct Link] [This Post][Share]
AI should be the Guide, not the Ghostwriter
Donald Clark,
Donald Clark Plan B,
2026/03/16
The point of Donald Clark's article is to offer what I guess we can call 'the standard argument': "Generating words, knowledge and solutions is better than simply reading, highlighting text or getting AI to do it for you. Acts of personal generation provide the context for greater understanding and subsequent recall.... This is a short-term pain, long-term gain idea, where desirable difficulties are learning challenges that make the learner study harder in the short term to improve long-term retention and understanding." He then offers an eight-step approach to writing essays along these lines. It's funny, but I would do the eight steps in reverse order - write a version, test my conclusion, identify what's missing, etc. The idea that you use your writing to reason things out and to reach a conclusion is, in my mind, just wrong.
Web: [Direct Link] [This Post][Share]
Openness, transparency and reach: three reasons why public institutions should embrace the Fediverse
Elena Rossini,
2026/03/16
This article is focused mostly on European institutions, though its conclusions could be more widely applied. Ultimately, I think, the recommendation is for institutions to at least include federated social media (such as Mastodon) among their accounts lists. The three reasons are openness, user agency, and reach. People need "a public, open communications platform that is accessible to all citizens, without the need for an account; an independent network not subject to censorship due to opaque algorithms or political bias."
Web: [Direct Link] [This Post][Share]
Random Audits as a Scalable Deterrent to Cheating: Using Game Theory to Design Fair and Effective Academic Integrity Systems for the AI Era
David Wiley,
SSRN,
2026/03/16
On the one hand, I think the proposal is sound: using random audits to deter cheating rather than mass surveillance. On the other hand, however, I think that David Wiley's argument misses the point: if using an AI counts as 'cheating', then probably whatever you are assessing for is the wrong thing to assess. 16 page PDF.
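The game-theoretic core of the proposal can be sketched as a simple expected-value calculation. This is a toy model of my own, not Wiley's formal one: a risk-neutral student cheats only when the expected payoff is positive, so a small audit probability paired with a large penalty can deter as effectively as surveilling everyone.

```python
def expected_payoff(benefit: float, audit_prob: float, penalty: float) -> float:
    """Expected value of cheating under random audits:
    gain the benefit if not audited, pay the penalty if audited."""
    return (1 - audit_prob) * benefit - audit_prob * penalty

# With no audits, cheating pays off in full:
print(expected_payoff(benefit=10, audit_prob=0.0, penalty=100))    # 10.0

# Auditing just one submission in eight flips the expected value
# negative when the penalty is large relative to the benefit:
print(expected_payoff(benefit=10, audit_prob=0.125, penalty=100))  # -3.75
```

The 'scalable' part of the argument is that the deterrent comes from the product of probability and penalty, not from checking every submission.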
Web: [Direct Link] [This Post][Share]
Aether OS is a computer in a browser built for the AT Protocol
Terrence O'Brien,
The Verge,
2026/03/16
"Impractical, sure," says the website, "but fun." I assume that Aether is using Personal Data Servers (PDS) as a file store. That would explain why anybody with the address would be able to read the files (AT isn't really intended to be a privacy-first protocol). It's an interesting test of the protocol's capabilities.
Web: [Direct Link] [This Post][Share]
Zuck's Ed Tech Baby Goes With A Whimper
Peter Greene,
Curmudgucation,
2026/03/16
This could have been a really useful critique of Summit School and the Chan Zuckerberg Initiative's forays into learning technology, but it undermines its own message with unnecessary sarcasm, typos, missing spaces between words (was it copied from the output of something?) and just plain bad reasoning typified by personal attacks, innuendo and loaded questions. What's the point of this? Readers will find some of the links useful (though many of them are self-links, so look before you click). I get that the name of the blog is Curmudgucation, but this reads more like the opposite of that. Via Thomas Ultican.
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.