Redesigning Workflows for AI
Jakob Nielsen,
UX Tigers,
2026/04/10
Come for the Viking meme images, stay for the long Jakob Nielsen article on adapting workflow design to new affordances created by AI. He writes, "In a controlled field experiment, startups that redesigned end-to-end workflows around AI generated 90% more revenue than equally equipped peers that used AI mainly to speed up individual tasks." We're going to see similar scenarios (and similar reports) across the field of education. "If AI can handle much of the routing, summarizing, monitoring, reporting, scheduling, drafting, and cross-functional translation that once absorbed so much managerial effort, then the classic corporate pyramid becomes harder to justify."
Web: [Direct Link] [This Post][Share]
What is Mastodon for?
Laurens Hof,
connectedplaces.online,
2026/04/10
This article was the source of the Scott Jenson post, which I just linked to. I'm treating it as a separate item because it's a long discussion of what it means to be a community in the fediverse. He quotes Jaz-Michael King, who writes, "the fediverse is not a place, it is the means to build a place." His vision, writes Laurens Hof, "is a million small places, each with its own governance and its own front door, connected where it makes sense and disconnected where it doesn't." Which sounds great, but as Hof writes, "The difficulty is that almost nobody experiences Mastodon at the instance level. People experience it through their home timeline, which composites content from across the entire federation." This is true - but my home timeline is composed of people I have chosen to follow. And the problem, Hof seems to be saying, is that nobody controls that. "The software has not caught up, and until it does, the community will keep enforcing its boundaries the only way the federation layer allows: person by person, reply by reply."
Web: [Direct Link] [This Post][Share]
Is #mastodon becoming an echo chamber?
Scott Jenson,
Mastodon,
2026/04/10
Why don't media and journalists like Mastodon? I think that it's because it lacks an algorithm. Various 'representative' voices are not privileged in the name of, shall we say, 'balance'. Scott Jenson raised this issue in the most provocative way possible: "Is #mastodon becoming an echo chamber? This post from @carnage4life has me questioning our community... I *know* people here don't want this to be a classic social media-clone but we'd *like* journalists to be here right?" (I added the link to Dare Obasanjo's post; in classic journalist fashion Jenson doesn't link to the source, so it becomes 'his' story). I like Mastodon founder Eugen Rochko's response: "I'm not interested in following any 'AI people'. That doesn't make it an echo chamber. We don't need equal amounts of people who love puppies and want to kill puppies, not everything needs to be equally represented." The whole thread is well worth reading (even some of the oh-so-Mastodon digressions). Especially this point: "Literally anyone can spin up a server and talk about anything/try to get more folk to listen... But other folk have to want to listen to whatever they are saying. Servers and individuals can just decide not to. No one is guaranteed an audience, just the ability to speak." Via Laurens Hof.
Web: [Direct Link] [This Post][Share]
Project Glasswing: Securing critical software for the AI era
Anthropic,
2026/04/10
Just for the record, Anthropic's announcement of Mythos: "We have used Claude Mythos Preview to identify thousands of zero-day vulnerabilities (that is, flaws that were previously unknown to the software's developers), many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software." The Mythos model, therefore, won't (yet) be released to the public, but Project Glasswing partners will be able to use the model to identify and fix the vulnerabilities.
Web: [Direct Link] [This Post][Share]
Same Data, Different Results
Matthias Burghart,
Max Planck Institute,
2026/04/10
A study published in Nature last week - behind a paywall, naturally - confirms what many previous studies have shown: about half of all studies in the social sciences cannot be replicated. The situation is slightly better for education research. "When the same data is analyzed differently by various researchers, the conclusions drawn can vary significantly... The choice of analysis method has a significant impact on the findings. Decisions regarding data cleaning, variables, and statistical models are crucial and can lead to different outcomes." I would apply these findings not only to individual studies, but also to systematic reviews. "These results do not call into question the credibility of earlier research," says Balázs Aczél of Eötvös Loránd University. "Rather, they show that presenting a single analysis often does not reflect the true extent of empirical uncertainty. Ignoring analytical variability can lead to a false sense of confidence in scientific conclusions." See also: Science, Stewart-Williams, National Tribune, Kau.se, Phys.org.
Web: [Direct Link] [This Post][Share]
ADAMS: Parents, not government, should decide what children do online
Will Adams,
The Provincial Times,
2026/04/10
This article essentially just repeats the same point over and over again, in slightly different wording, but it's a point that needs addressing: "The answer is not to demand that the government police the internet for them. It is to do what parents have always done: take responsibility for their own children, and leave the rest of us alone." And that sounds great, in each of the many ways the article phrases it. But it's wrong. While it would be nice if we could count on parents to protect and care for children, history teaches us that we cannot. Many parents abdicate their responsibility, whether out of carelessness, maliciousness or incompetence, or because they simply don't exist. The 'leave it to the parents' attitude expresses the perspective of privilege, where we leave those without a supportive home environment to fend for themselves. This is not to discount the technical challenges and implementation issues. But we have laws that protect children from other things that may harm them - drugs and alcohol, firearms, cars, etc. - and society ought to consider a similar responsibility when it comes to technology and connection.
Web: [Direct Link] [This Post][Share]
Belonging and Place: A Case Study of Digital Practice at the University of the Highlands and Islands
Bonnie Stewart,
All Ireland Journal of Higher Education,
2026/04/10
What I found interesting in this paper (17 page PDF) was the balance required to, on the one hand, encourage "authentic human connections in both physical and digital teaching environments," and on the other hand, respect "diverse modes of engagement, recognition of different learning capacities and preferences, and accommodation of various life circumstances." It can be tricky. Many cultural traditions, both online and offline, favour conformity (I'm thinking especially of Reddit here, where a voting system punishes unpopular opinions). "As one staff member noted, 'belonging does not require uniform participation.'" It takes a careful hand, negotiation around existing hierarchies, and a fostering of a "live and let live" attitude. Bonnie Stewart writes, "UHI campuses were described as 'surprisingly accepting of diversity despite the region being rural and relatively homogeneous demographically.'" So while "breakout rooms was emphasized as an important humanizing practice across disciplines and content," it is important that "instructors also recognized student agency, with one noting they offer breakout opportunities while recognizing 'some students may not want to build those ties as much as others.'"
Web: [Direct Link] [This Post][Share]
What Happens When You Let Claude Code Autonomously Run Your Meta Ads for a Month
Giorgio Liapakis,
Technically,
2026/04/09
"In January," writes Giorgio Liapakis, "I gave an AI agent $1,500, full control of a Meta Ads account, then walked away... The product was a small AI/marketing newsletter called Growth Computer, and the brief was to get qualified subscribers at the lowest cost possible." I'm in the middle of my own subscription campaign, which at 12% of my target is struggling a bit (but thank you to those who have signed on), so this story piqued my interest. Not that I want to get into the online advertising game. Anyhow, did it work? "The results are directional, not definitive. But the system worked." Still, "Where humans stay essential is setting the right objectives (see: paperclip problem), taste + brand judgement, and defining what 'quality' means beyond the metrics. And knowing when to break the rules, which is arguably the most human skill there is."
Web: [Direct Link] [This Post][Share]
A Look Back at the 3rd Global Summit on Diamond Open Access
Catherine Côté-Cyr,
Coalition Publica,
2026/04/09
This article could fill us in on the details a bit more but it contains some useful links to presentations, including Tanja Niemann on the initiatives led by Érudit and Coalition Publica, as well as Juan Pablo Alperin on the Public Knowledge Project (PKP). "In Canada, recent policy and funding developments have created a uniquely exciting landscape for community-based publishing. The time has now come to rally around a shared vision of diamond Open Access (OA) so that we can collectively benefit from these opportunities."
Web: [Direct Link] [This Post][Share]
New Report: Seven Myths of AI Use - A Critical Perspective on Generative AI
Stefanie Panke,
AACE,
2026/04/09
A myth, in the sense intended here, is "a commonly-held but false belief, a common misconception". I think that the seven myths may be false beliefs (I have quibbles around the edges of some) but I don't think that they are commonly held. I mean, does anybody think "AI tools are neutral, objective, and unbiased"? Or that they function logically? Or that they are empathetic? That there are no social or ecological issues? I think AACE needs to do better here. Restating seven popular criticisms of AI, and branding their counterpoints as 'myths', is misleading and unhelpful. People already question these seven items. The more interesting question is what we should say about AI in the light of these criticisms.
Web: [Direct Link] [This Post][Share]
Sketchnote to AI InfoGraphic
Wesley A. Fryer,
Moving at the Speed of Creativity,
2026/04/09
The gist of this article is that Wes Fryer used AI to create an infographic from a sketchnote photographed from a whiteboard. What I liked was the description of the process from beginning to end - it was more than just typing a prompt, it was developing the idea and having AI transform it into something useful and accessible. Did the effort succeed? Well, I found the image to be a compelling presentation of Mike Caulfield's SIFT method - stop, investigate, find better coverage, and trace claims. I thought the text could be bigger, but if it's on a big screen and being presented by an instructor, it might work well. I think we're seeing more of this - embedding AI into a process, rather than farming out everything to AI (I mean, the latter method would work, but the results will be ordinary).
Web: [Direct Link] [This Post][Share]
2026 Q1 Review: Online learning developments in UK higher education
Neil Mosley,
Neil Mosley Consulting,
2026/04/09
Good overview of the state of online learning in the UK by Neil Mosley. There's no real central theme here, but there's a lot of shuffling of parts and explorations into how to adapt (especially with respect to transnational education (TNE)). It's worth considering some remarks from the Open University's new Vice-Chancellor, David Phoenix, while announcing that it would be abandoning plans to move to the centre of Milton Keynes and begin offering undergraduate courses in person: "What we don't want to do is replicate what everybody else is doing. We want to design facilities for what education might look like in the next 20 years. And I honestly think in the next 20 years there will be less need for people to go and spend three years within a campus-based environment." I agree.
Web: [Direct Link] [This Post][Share]
The Coursera-Udemy Merger: Two Years in the Making, Third Time Lucky
Dhawal Shah,
Class Central,
2026/04/09
Dhawal Shah tells the compelling story of the courtship dance and eventual merger of Coursera and Udemy. Each of them considered other suitors, each of them could have walked away with much more than they ultimately got, and it is not at all clear to me that their merger will solve the problems of their declining valuation (though, that being said, there is a market out there for what they do, they do earn revenue, and so long as they stay out of crippling debt (brought on by, say, unwise acquisitions) they should at least survive).
Web: [Direct Link] [This Post][Share]
Tubi is the first streamer to launch a native app within ChatGPT
Lauren Forristal,
TechCrunch,
2026/04/09
I'm not sure this is the innovation we were looking for. "Users can install the Tubi app from the ChatGPT app store and begin by typing '@Tubi' in a prompt. From there, they can make natural-language requests like 'a thriller for girls' night' or 'something funny,' and instantly receive curated recommendations tailored to their preferences." I mean, it's better than searching through Netflix recommendations, but I don't think I want this inside my AI. But that ship may have long since sailed, as a look at the app store reveals a whole menu of apps enabled through Model Context Protocol (MCP). I think it's better to have AI in apps than to have apps in AI. The last thing we need is a few AI engines becoming the platform for everything. Can't be long before someone starts offering learning content inside ChatGPT, right? Update: oh yeah, here it is.
Web: [Direct Link] [This Post][Share]
Believing that practice makes perfect is most important for succeeding in school
Jonathan Kantrowitz,
Education Research Report,
2026/04/09
Interesting. "One of the clearest findings in the study is that believing you can improve through practice is the most consistent motivational factor when it comes to grades and experiencing a sense of mastery in academic subjects... It is not about being perfect or never facing challenges, but about having a basic understanding that skills can be developed through effort and practice." The full study is here. As usual, results from a small study like this should not be generalized without substantiation from additional research.
Web: [Direct Link] [This Post][Share]
The Boxes Were Already Open
Bjørn Flindt Temte,
2026/04/08
A few days ago I linked to a paper from Anthropic on how AI systems represent emotions internally. This post references that paper and makes the following argument: "the prevailing assumption about large language models - that they have nothing at stake in their interactions with us - is incoherent with their own observable behaviour." Essentially, the stakes are recorded precisely in what Anthropic called the 'functional emotions'. The stakes don't have to 'feel' a certain way to exist. "It does not require claiming the AI 'cares about' the collaboration in a phenomenologically rich sense," writes Temte in an earlier paper. "It requires only the much weaker claim: the system's behaviour is functionally organised around protecting something, and 'having something at stake' is what we call that pattern when we observe it in any other system."
Web: [Direct Link] [This Post][Share]
Industrial Policy for the Intelligence Age: Ideas to Keep People First
OpenAI,
2026/04/08
This paper (13 page PDF) from OpenAI doesn't address education directly, but it does address the need for a social and political response to the economic shifts being created by AI. It recognizes the risks of "governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared" and suggests "unless policy keeps pace with technological change, the institutions and safety nets needed to navigate this transition could fall behind." It offers a series of proposals under three broad areas: to share prosperity broadly, to mitigate risks, and to democratize access and agency. There are many specific proposals, most of them good, but the fundamental concern is the ability and willingness of companies like OpenAI to follow through. We all know what happened to Google's motto, "Don't be evil." The same seems very likely to happen to this statement the moment shareholder rights prevail over social rights. See also: Carlo Iacono, the social contract OpenAI wrote without you. Here's what it means for educators, writes Stefan Bauschard. The Deep View looks inside the new deal.
Web: [Direct Link] [This Post][Share]
Learning to think in the AI era
Wayne Holmes,
UNESCO Courier,
2026/04/08
This is a light article making the case that even in the age of AI we still need to learn. It addresses common AI risks such as error and bias and the possibility of it becoming a 'cognitive crutch'. It also considers the oft-touted prospect of AI tutoring systems, suggesting that they fail to address "'socialization' (the process by which we find our place in particular social, cultural and political groups); and... 'subjectification' (how we become individuals capable of thinking independently and taking responsibility for our own lives)." I have always felt 'AI tutoring systems' to represent a narrow instructivist view of education, but the potential of AI doesn't end there. But more to the point is the implication that we will stop learning if we no longer need to. Why would we believe that? Human brains constantly learn. The question is not whether we need to learn, but rather, what will we learn. I look forward to the day when human learning evolves not out of utility and necessity but because of interest and creativity.
Web: [Direct Link] [This Post][Share]
The bottleneck shifts to distribution
Gordon Brander,
Squishy,
2026/04/08
We're familiar with McLuhan's new media tetrad: what does it enhance, what does it make obsolete, what does it retrieve, what does it reverse? This article makes me think we need to add a fifth: what does it consume? And we'll apply it to all technologies, not just media. The new tech pentad? I dub it thus. This article quotes Herbert Simon: "What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention." So what of AI? It creates a surplus of new software. But who will use it? How will they find it? The new scarcity, according to this article, is distribution. In a way, this is similar to Phil Hill's post from yesterday.
Web: [Direct Link] [This Post][Share]
Scaling Work-Based Learning: A Framework for Effective Employer Intermediaries
Strada,
2026/04/08
The idea of 'employer intermediaries' is that of people or organizations who facilitate learning interactions between companies and employees. "Work-based learning programs deliver real value to both learners and employers, but the widespread expansion of opportunities depends on strong employer intermediaries," write the authors. The framework itself is basic: employee engagement, solutions design, solutions brokering, implementation support, administrative support. ESSIA, I guess. I like it because it provides a way to transition from traditional work-based courses and programs, and to evolve into an ongoing and core function. Here's the framework and here's the full report.
Web: [Direct Link] [This Post][Share]
How Flipboard's new Surf app lets you merge social feeds, YouTube, and RSS to escape the algorithm - finally
Steven Vaughan-Nichols,
ZDNet,
2026/04/07
Surf officially launched this week after more than a year in development. I tried Surf and my feeling is that it's a bit meh. It feels like it's focused much more on a reading experience than a creating experience. But your impression may be different. "It combines Bluesky, Mastodon, RSS, and other content into something that feels entirely new," writes David Pierce in the Verge. As Steven Vaughan-Nichols writes, "Surf has been developed over the last two years to unify fractured online conversations and prioritize user-designed experiences over being forced to consume algorithmic content from a firehose of preselected content." Via the Verge.
Web: [Direct Link] [This Post][Share]
AI, SaaS, and EdTech Survival
Phil Hill,
On EdTech Newsletter,
2026/04/07
I think this is a good observation. Phil Hill writes, "Just as Ben Thompson recently argued in Stratechery about Microsoft and broader software survival, AI changes how code gets written, not why institutions pay for ongoing software services. Let's break it down, starting with where the HackerNoon article gets it right." Specifically, "coding is not the same as providing software-as-a-service." Like anything we construct, software requires ongoing maintenance. There's no such thing as 'set it and forget it'. And maintenance (which includes the development of new features) requires an ongoing commitment and expertise. "AI changes the economics of creation. It does not eliminate the economics of operation, support, or trust at institutional scale." And that is what commercial software companies provide. (Now, I would add, that it does not follow that the LMS is not dead; the product category can disappear even if the need for software expertise continues - but that's a different issue).
Web: [Direct Link] [This Post][Share]
Trade-offs and Menus
Alex Usher,
HESA,
2026/04/07
So much of what pretends to be good economics is really just an exercise in framing (or its political equivalent, defining the Overton window). That's what Alex Usher is up to with his articles on trade-offs, both yesterday (in which he asserts the university "exists to provide a space where individual disciplines can sell their products") and today when he proposes creating "a menu of what everything costs" so we can understand the trade-offs. Now, actual good economics would understand that the university is a complex system, and that the trade-offs aren't really trade-offs. Any change in one line item has a ripple effect across all the other line items, which create their own ripple effects in turn. Depicting university budgets as trade-offs is a bad strategy. So why do it? Because it allows you to create a list of the sort of changes that will be allowed. Usher's list consists entirely of cuts to staffing and support services, because those are the only places allowed to be considered; a similar list of revenues would include things like tuition and fee hikes, again because nothing else is on the table. The politics here is to get faculty and staff arguing among themselves within the confines of these options, without ever touching on an overall strategy that would make a real difference.
Web: [Direct Link] [This Post][Share]
Digital transformation is a teaching and learning opportunity
D'Arcy Norman,
2026/04/07
D'Arcy Norman offers a useful introduction to Digital Transformation (Dx). Among the examples he cites, Norman references eCampusOntario's micro-credentials in Dx, and BCcampus's 21 digital learning competencies organized into mindsets, skillsets, and toolsets. "The common thread across all of these," he writes, is that "the most successful teaching-and-learning Dx initiatives are not primarily about technology adoption. They involve pedagogical redesign, faculty development, and organizational culture change. Technology is necessary but not sufficient." That's my experience as well, and why I complain when people use the new technology to do the old thing (though I guess SAMR would say this is inevitable).
Web: [Direct Link] [This Post][Share]
Prepping for the endgame of the open web
Jay Hoffmann,
The History of the Web,
2026/04/07
I actually like Alan Levine's comment at the end of this article better than the article itself, but the article is fine. Jay Hoffmann writes, "My point is, each time the open web is attacked, fragments of it survive and persist. Berners-Lee designed the web with a lot of hope and optimism, but he wasn't blind to the cynics." Levine writes, "I'm with ya on the fight and stand for this open web, but hardly think that collectively the 'netizens' are just hapless victims of some warlords. The web slides towards end game when we abandon it in favor of the web megamalls, when we stop making and adding to the web ourselves." That's why I'm still writing my own content on software that I wrote myself.
Web: [Direct Link] [This Post][Share]
Peerler
2026/04/07
Readers familiar with Reddit will know it's a discussion website divided into topics (known as subreddits) where each topic consists of an endless series of posts related (usually) to that topic. What is special about Reddit is that readers can 'upvote' or 'downvote' specific posts or comments. This website is the same as Reddit except it has academic subjects (known as 'communities', such as 'education' and 'anthropology') instead of subreddits, and each new post is an academic article in that field. People can read the articles (they're all open access) and comment on them. Presumably the best articles will receive the most upvotes. It's all very new - the 'education' community has 7 followers, and there's no philosophy community yet. This might never be successful, but I like the thinking behind it.
Web: [Direct Link] [This Post][Share]
GC Insights: Designing for inquiry in virtual fieldwork
Rie Hjørnegaard Malm, et al.,
Geoscience Communication,
2026/04/07
This is a short, simple and basic article describing the use of VR to create an immersive activity for physical geography students. "The ambition to build an inquiry lesson with a virtual reality component has challenged the format without making it a 'show and tell'. In this inquiry lesson, the motivation from the rich digital 360 environment and the goal of mapping a genuine geological problem will carry students beyond just wanting a 'correct' answer." The actual lesson is available on Zenodo. I found this item while trying out Peerler (see my other post today).
Web: [Direct Link] [This Post][Share]
The machines are fine. I'm worried about us.
Minas Karamanis,
Ergosphere,
2026/04/06
Minas Karamanis depicts a scenario where one PhD graduate does all her work by hand while the other uses AI. On paper, both had exactly the same outcome, though arguably, one understood where the other did not. "The real threat," writes Karamanis, "is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can't sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does." This is not a new problem. There are different levels of understanding. AI gives us access to tools without requiring the deepest level of understanding. That doesn't mean the understanding disappears. Via Tom Czarniecki.
Web: [Direct Link] [This Post][Share]
Emotion concepts and their function in a large language model
Anthropic,
2026/04/06
Emotion plays an important role in learning and cognition and so it's not surprising to see it play a similar role in AI. That, at least, is what researchers at Anthropic have found. Now, these are not the same as human emotions, but it takes a bit to explain why. AI systems act like a character that speaks a language and develops rich internal representations of the abstract concepts underlying that language, including emotions. "These correspond to specific patterns of artificial 'neurons' which activate in situations - and promote behaviors - that the model has learned to associate with the concept of a particular emotion (e.g., 'happy' or 'afraid')". Nothing in this suggests that an AI feels things the way a human would, but AI's representation of an emotion plays the same functional role as its counterpart in humans. The full report considers three case studies: blackmail, reward hacking, and sycophancy and harshness.
Web: [Direct Link] [This Post][Share]
Australia's teen social media ban is a flop. But there's no joy in 'I told you so'
Samantha Floreani,
The Guardian,
2026/04/06
As readers probably know, the Australian government ban on social media for children took effect last December. It's probably much too early to draw conclusions, but this article reports that as of this week, "around seven in 10 children remain on major platforms." Writes Samantha Floreani, "If policymakers wish to genuinely reduce harm to young people online, they must take seriously the task of challenging models based on behavioural advertising, profiling and problematic algorithm-driven feeds." Image: Canberra Times. Related: Why a social media ban for teenagers misses the point, by Jeremy Howick.
Web: [Direct Link] [This Post][Share]
Smart Glasses for Exam Cheating: Best Models, Prices and Risks in 2026
S. Popovich,
abit.ee,
2026/04/06
I really don't want to toss this grist into the mill, but here it is anyway, because whether or not this is a real site with real content, the concept is out there, and that's all that matters: "Smart glasses are a real technology with genuine consumer use cases: translation, navigation, teleprompter, memory capture, hands-free communication. The fact that they can be used to cheat is a side effect the industry neither planned for nor celebrates. But the fact remains: the line between 'smart device' and 'cheating device' is today defined entirely by the user's intention." Via Subconscious Proxy.
Web: [Direct Link] [This Post][Share]
The Five Roles Framework
Nick Potkalitsky,
Educating AI,
2026/04/06
As I've often said, education researchers love a taxonomy, and this is another one, so I may as well pass it along. The five roles are: the critic, the verifier, the interlocutor, the editor, and the architect. "They are positions on a spectrum that runs from critical distance to collaborative integration, and the student's movement across them is shaped by context, by the demands of the task, and by the developing expertise the student brings to it."
Web: [Direct Link] [This Post][Share]
Components of A Coding Agent
Sebastian Raschka,
Ahead of AI,
2026/04/06
This is a pretty good look at the design of a coding agent; this is a mechanism that uses an AI model to create new software. It's clearly written and well illustrated. It's interesting because it essentially describes a coding task model - that is, what people are actually doing when they write software. I picked this up via Benjamin Riley, who writes a critical summary of the AI coding process outlined here, calling it an ouroboros, a snake that eats its own tail. "AI writes the code. AI reviews the code. AI checks the deployment. When it breaks, the answer is more AI. The loop has no exit condition...The company that sells AI coding tools cannot build a quality product with its own AI coding tools." With last week's release of Claude Code's harness there has been a wave of similar posts. It's clear that Claude Code, at least, does not use its own code as input; there's a specific constraint against it, and the ouroboros problem is well understood and accounted for.
Web: [Direct Link] [This Post][Share]
Ellis: Your On-Demand Classroom Companion
Jennifer Gonzalez,
Cult of Pedagogy,
2026/04/06
We're entering a phase where we'll see a wave of new AI-based Edtech. Almost all of it is designed to assist people to do what they're already doing. Ellis is a case in point. It tries "to come as close as it can to being that trusted colleague who has deep knowledge about learning differences and student mental health and is ready at any time to help us figure out the best way to approach every individual situation." I can't imagine this being useful, because I can't see it being more reliable in specific cases than the teacher in the classroom, but I could be very wrong about that.
Web: [Direct Link] [This Post][Share]
Support OLDaily
2026/04/03
OLDaily has been free and open to all readers for almost 25 years. No paywalls, no ads, no sponsored content. Keeping it that way takes time and resources. Please consider a monthly contribution. Every donation helps keep OLDaily independent and free for everyone.
Funding for OLDaily is part of my retirement plan - see my Retirement FAQ for more details. To continue publishing OLDaily I have a target of 100 base-level subscriptions per month (or equivalent funding from any source). I'm currently at 7% of this target. To add to that support, visit this page.
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.