Judge orders Anna’s Archive to delete scraped data; no one thinks it will comply
Jon Brodkin,
Ars Technica,
2026/01/19
The story here is that a judge has ordered Anna's Archive to delete OCLC's WorldCat data, which it scraped from the website in 2023. WorldCat is "the world's largest library metadata collection." In a blog post, Anna's Archive explained that it needed the data in order to develop a comprehensive list of all the books in the world, so it can preserve them. Other sources were inadequate. "We were very surprised by how little overlap there was between ISBNdb and Open Library, both of which liberally include data from various sources, such as web scrapes and library records." OCLC noted the cost it bore as the scraping occurred. "Beginning in the fall of 2022, OCLC began experiencing cyberattacks on WorldCat.org and OCLC's servers that significantly affected the speed and operations of WorldCat.org." Having fought off scrapers on my own site, I can understand OCLC's frustration. But I have to ask: why is this data locked down in the first place? I know, I know, 'business models'. But it seems to me that if anything should be publicly available, it's library metadata.
Web: [Direct Link] [This Post][Share]
Spot the Difference
Audrey Watters,
Second Breakfast,
2026/01/16
I mostly like this post because of the beautiful photo of a loon at the top, which I would show here, but it's owned by (and presumably licensed from) Getty Images, so nobody's really pure, I guess. I'll use one of my own photos to illustrate this post, which anybody can copy. Anyhow, Audrey Watters argues that far from leaving schools unchanged, "these technologies have changed education. They have reshaped how we think about thinking." And that's not wrong. "They have shaped the expectations of what students and teachers believe they can do -- not just the 'everyone should learn to code' stuff and the twisting of the purpose of education to be solely about job training and 'career and future readiness,' but about how students understand their own abilities, how they see (or don't see) their own agency, how they control (or don't control) their own inquiry, curiosity, attention." I think, if you look at the world a certain way, that's all you see. But when I look at the world, I see people who do look to more than just jobs and career readiness. I see agency, creativity, community, and society. Maybe we can't expect schools to help everyone aspire to this - but we should.
Curatorial Silence and Its Impact on Pedagogy
Ann,
All Things Pedagogical,
2026/01/16
This blog post makes the important point that we should keep speaking even when things get difficult. "Curating outward silence does a certain kind of teaching and learning work, and sadly it is work that the big S systems want more of... the systems have a way of taking advantage of that curated hush zone vacuum to fill it with distraction and falsehoods." It almost doesn't matter what you talk about. It doesn't have to be the big issues or topics of the day. What matters is that we keep on talking to each other, just to remind ourselves that we exist, and that the interactions matter more than the message. (Take-home exercise: is this the same as, or different from, 'belonging'?)
Digest #183: The Importance of Belonging for Student Success
Carolina Kuepper-Tetzel,
The Learning Scientists,
2026/01/16
I have mixed feelings about the whole concept of belonging. I speak here from the perspective of always feeling like the outsider (probably not actually an outsider, but I digress...). And it's that sort of feeling that motivated my work on 'groups versus networks', which argues that building 'belonging' in groups can create real harms. And I question in what sense "the feeling of belonging has been identified as an important factor in education." Is it 'those who belong do the best'? Or do the feelings - of being safe, of being valued - influence learning outcomes? Is it true that (as one of these papers says) "the need to belong is a basic human motivator?" Or is it just a theoretical construct with no real empirical support outside the theory? Maybe 'belonging' is just shorthand for 'socially advantaged'.
Anthropic Economic Index: new building blocks for understanding AI use
Anthropic,
2026/01/16
Anthropic (which makes the Claude AI system) has released a substantive report (55 page PDF) and some supporting blog posts (one, two) reporting on trends in AI usage across disciplines and in different places. Though the focus is the new set of 'economic primitives' suggested in the report (chapter 2) there's a lot for educators to consider. The new primitives are: task complexity, human and AI skills, use cases (i.e., work, learning, home), AI autonomy, and task success. The data shows relations among them. For example, AI is used more for tasks where the expected level of education is higher, but its rate of success on these tasks declines. The use of Claude for augmentation is more important than the use of Claude to replace humans altogether. There is a "very high correlation between human and AI education... this highlights the importance of skills and suggests that how humans prompt the AI determines how effective it can be." All of that said - the data in this report is complex and the relationships are often subtle, and I would resist articles that try to boil it all down to one or two simple relationships.
The Refraction Principle: How AI Bends (But Doesn't Break) Human Purpose
Nick Potkalitsky,
Educating AI,
2026/01/15
I'd put this post under the heading of 'human-AI teaming' (or 'human-autonomy teaming' (HAT)): the exploration of how humans and AI work together, as opposed to the more usual approach where a human 'supervises' an AI. The model specifically describes a "Seminal Intention (the original human impulse), AI as Refractive Medium (bending and focusing without generating new intentions), and Hybrid Intention (the evolved form that remains fully human-owned)." They also distinguish between centrifugal (exploratory, divergent) and centripetal (focused, convergent) approaches. I guess the question is, is this what AI actually does? Do we use it to parse complexity, surface assumptions, offer diverse perspectives, or provide logical scaffolding? I think a lot of people would say that these are things we as humans bring to the table. But a lot of people could be wrong...
The AI Ad Gap Widens
Caroline Giegerich, Jack Koch, Debra Aho Williamson,
IAB,
2026/01/15
The use of AI in advertisements doesn't concern me, but only because my own opinion of advertising in general couldn't be lower than it is now. If (as I often say) advertising is the original fake news, AI-generated advertising is the original fake fake news. This article reports on a couple of trends, the most notable of which is the continued use of AI to produce advertising (and where advertising and that other industry lead, content production in general is sure to follow). For me, I think the key distinction is this (and reading this story made me think of this, which is why it appears in the newsletter): AI-generated content is fine by me if I ask for it and know that's what I'm getting. But if I don't ask for it, then if it's artificially generated it doesn't feel like there's any genuine reason why I should be paying attention to it.
Guardex.ai
2026/01/15
With the arrival of multi-modal AI our lives will get more complicated, not simpler. Multi-modal AI uses more than just text, and uses what are called 'world models' to interpret data from cameras and sensors. The experimentation in the classroom has been underway for a while now. Illustrated here is an experiment "to use the AI monitoring application inside our classroom. Just for fun, honestly." The equipment is provided by a company called Guardex.ai, which also promotes the use of cameras and AI-based analysis to monitor areas for unauthorized intrusion, idle equipment, safety violations, and employees not hard at work. The classroom equivalent of this is what we are told 'only teachers can do'. But if it becomes so questionable when done by a machine (and make no mistake, people won't like it), it's worth asking whether we should be doing it at all.
A new direction for students in an AI world: Prosper, prepare, protect
Emma Venetis, Mary Burns, Natasha Luther, Rebecca Winthrop,
Brookings,
2026/01/15
This is a lengthy new report from Brookings on the use of AI in learning. "We find that at this point in its trajectory, the risks of utilizing AI in education overshadow its benefits. This is largely because the risks of AI differ in nature from its benefits - that is, these risks undermine children's foundational development." It risks eroding trust between teacher and student, for example. Nonetheless, they recommend a strategy based on the principles of prosper ("students can prosper through carefully titrated AI use"), prepare (AI literacy and professional development "and systemic planning and access"), and protect ("safeguards on AI for student privacy, safety, emotional well-being, and cognitive and social development"). "A narrow focus on developing effective AI-supported teaching approaches could obscure how students' very ability to learn is being undermined by AI overuse, inappropriate use, and non-productive use, both in and, increasingly, outside the classroom."
Royal Roads University: A Canadian University Without Tenure or Senate
Alex Usher,
HESA,
2026/01/15
This is quite an interesting interview with Phillip Steenkamp, who has been President of Royal Roads University since 2018, on the current state of the small west coast institution here in Canada. Royal Roads was established to do what so many are recommending: there's no Senate, no tenure, and "a very focused mandate to serve the needs of the labour market and only offer applied and professional programs." So how has it fared? "We have to be very adaptive, very responsive, very nimble." I think the article tries to put the best light on it, but the university has remained small and is struggling. It has retrenched with voluntary retirements and a 10 percent workforce cut. "We've moved from seven schools to three. We've merged the two professional program faculties into a single Faculty of Interdisciplinary Studies." They're making more money from research, and they are launching satellite campuses in places like the Emirates. "We've had a Netflix series filmed here. We do about 60 weddings a year."
Debunking the AI food delivery hoax that fooled Reddit
Casey Newton,
Platformer,
2026/01/15
Casey Newton shares this item in Platformer (often closed access, but this item appears to be open) describing how he caught a scammer trying to promote a story about a corporation squeezing its delivery employees. As he notes, this is the sort of information he would have accepted without question in the past, but which now requires extra scrutiny because it has become so easy to create. He was able to catch it this time - but what about next time? "That future was worrisome enough when it was a looming cloud on the horizon. It feels differently now that real people are messaging it to me over Signal." Yeah. There's no short-term fix for this. "If there's anything that gives me comfort here, it's that old journalism-school maxims can still help us see through the scams. If it seems too good to be true, it probably is. If your mother says she loves you, check it out. Always get a second source."
The Mythology Of Conscious AI
Anil Seth,
NOEMA,
2026/01/14
This is quite a good article and more than does the job of setting the tone for today's OLDaily. What we're offered here is an excellent statement of the idea that human consciousness is fundamentally distinct from artificial intelligence. There's a lot going on in this article, but this captures the flavour of the argumentation: "Unlike computers, even computers running neural network algorithms, brains are the kinds of things for which it is difficult, and likely impossible, to separate what they do from what they are." The article hits on a number of subthemes: the idea of autopoiesis, from the Greek for 'self-production'; the way they differ in how they relate to time; John Searle's biological naturalism; the simulation hypothesis; "and even the basal feeling of being alive". All in all, "these arguments make the case that consciousness is very unlikely to simply come along for the ride as AI gets smarter, and that achieving it may well be impossible for AI systems in general, at least for the silicon-based digital computers we are familiar with." Yeah - but as Anil Seth admits, "all theories of consciousness are fraught with uncertainty."
The Problem with AI "Artists"
Anjali Ramakrishnan,
O'Reilly,
2026/01/14
OK, how do I express this? Here's the conclusion of this long O'Reilly article on humans, art and creativity: "The fundamental risk of AI 'artists' is that they will become so commonplace that it will feel pointless to pursue art, and that much of the art we consume will lose its fundamentally human qualities." Now, we humans have always made art, long before anyone thought of paying for it - long before there was even money. Why? What makes Taylor Swift better than an AI-generated singer-songwriter? My take is that it's not the content of the art, but instead, it's the provenance. I've written before about the human experience behind her work. Similarly, what's the difference between my videos and (somewhat better) photosets from Iceland and something a machine might create? It's that I was there and I'm reporting on the lived experience. There's nothing in the media itself that distinguishes AI-generated from human-generated work; the difference lies in why it was made and why we're interested. If you want to get at why any of this matters, you have to look past the economics of it, and ask why it was ever made at all.
Algorithms and authors: How generative AI is transforming news production
Alexander Wasdahl, Ramesh Srinivasan,
First Monday,
2026/01/14
There are some interesting bits in this article (22 page PDF) even if, in my view, the research basis doesn't allow us to generalize meaningfully. The first is the proposition that news reporting by humans is fundamentally different from that produced by machines. "Journalists engage in selective representation, deciding which events in the world are noteworthy or relevant to their audience, thus shaping public discourse. They accordingly choose words based on what they deem best captures what they wish to report or analyze... While human text represents ideas and can typically provide reasoning behind the choice of words and constructions, algorithmically generated texts merely render outputs without such explanations." Second, and as a result, "the instrumental, efficiency-oriented purposes served by LLMs exist in tension with the values expressed by the individuals interviewed in this study, particularly around accuracy, transparency, editorial autonomy, and accountability." My scepticism exists along two fronts: first, whether the reporter's art is based as much on reason as averred in the article, and second, whether machines are not in fact capable of exercising the same mechanisms themselves.
HoP 484 You Bet Your Life: Pascal's Wager
Peter Adamson,
History of Philosophy Without Any Gaps,
2026/01/14
Peter Adamson's monumental 'History of Philosophy Without Any Gaps' podcast series has made it to the mid-1600s and Pascal's Wager. Here it is: "Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing." By contrast, if you wager that God doesn't exist, you risk losing all, while gaining only a finite amount if you win. Arguably all of choice, game and decision theory follows from this single challenge (let alone a whole school of theological argument). For me, the significance is that it marks the transition to thinking of life in terms of 'value', that is, something that can be counted, weighed and measured. Pascal's wager falls in the middle of the Cartesian revolution I've written about elsewhere, where we transition from sensing to calculating. We are at the end of this stage (Jeff Jarvis describes this in the Gutenberg Parenthesis while John Ralston Saul offers his take on the same phenomenon in Voltaire's Bastards). Can we imagine a future where we are no longer weighed, measured and found wanting?
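The wager can be put in expected-value terms (a standard decision-theoretic reconstruction, not Pascal's own notation and not taken from the podcast): let $p > 0$ be the probability that God exists, $c$ the finite cost of belief, and $f$ the finite worldly gain of not believing. Then

$$
\mathbb{E}[\text{wager for}] = p\cdot\infty - (1-p)\,c = \infty,
\qquad
\mathbb{E}[\text{wager against}] = -p\cdot\infty + (1-p)\,f = -\infty,
$$

so for any nonzero $p$, however small, wagering for God dominates. This is exactly the move to treating life as something "that can be counted, weighed and measured."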
The Human Advantage: Nine Skills We Can’t Afford to Lose in an AI-Powered World
John Spencer,
Spencer Education,
2026/01/14
This seems to be a day for focusing on human skills in an AI world, and yet I find the descriptions of them to be so lacking. This article is a case in point. John Spencer begins by criticizing efficiency as a value, which is fine, but we need to look at what the alternatives are, and why we prefer them. Here are the sorts of human skills Spencer references: confusion, productive struggle, slower learning, divergent thinking, one's own voice, empathy, contextual understanding, wisdom, and extended focus. Sure, these are all human traits. Some of them could probably be accomplished by an AI, while others we probably wouldn't bother with (for example, it's probably hokum that slower learning produces 'lasting knowledge'). I don't think humans are unique, or especially excel, in any part of the cognitive domain. Rather, what we bring to the table is embodied human experience. But we don't see any of the 'how to adapt to AI' literature talking about 'how to have experiences'.
Be Redemptive
Josh Brake,
The Absent-Minded Professor,
2026/01/14
I think there are some good points to be made in this longish post ruminating on how to decide what needs to be made and what needs to be done in the world. The main advice is in the title, where 'redemptive' is defined as "I sacrifice, we win" and contrasted with 'exploitive' ("I win, you lose") and 'ethical' ("I win, you win"). This is more than just 'catering to the desires of your users' but instead "seeking to understand their deepest needs and to seek their good, even if that means that we cannot maximize our returns or profit margin." This is hard because 'their good' is often not seen as also 'my good'. The same post also references Kurt Vonnegut Jr.'s novel Player Piano, which is probably my favourite of all the Vonnegut novels.
Don't let AI change what it means to teach
Allison Littlejohn,
National Institute of Education (NIE),
2026/01/14
"If we want to know where AI belongs in schools, we have to be honest about what teaching is," writes Allison Littlejohn in this Singapore publication. "Teaching isn't a bundle of tasks. It's a demanding set of cognitive, emotional and social practices that machines can assist with but not replicate." The article looks at a number of things she argues only teachers can do: interpreting "subtle cues such as shifts in attention, hesitation, confusion or sudden insight"; sequencing "concepts and ideas, anticipate misconceptions, frame productive questions and construct sequences"; and shaping "the emotional climate in which learning happens." There's also a plug for Navigo Game, developed to teach children learning English as a foreign language. This tool "demonstrates that teachers, students and their parents are important stakeholders who must be co-creators if the technology is to address their needs." Well, it actually does no such thing, and as important as the three sets of things she describes are, there isn't a good reason to believe that non-teachers, or even non-humans, can't or won't be able to perform these functions. Image: Wikipedia.
Canadian framework for microcredential meta-data (Technical Specification)
CSA Group,
2026/01/13
The CSA Group (formerly the Canadian Standards Association; CSA) has released for comment the Canadian framework for microcredential meta-data. For no good reason you have to enter your email address in order to access it; there's no other charge. The proposed framework "applies to all entities involved in developing microcredential offerings, issuing microcredentials, and conducting associated management or governance." I don't know whether this is an open standard; David Porter has asked. He also asked whether there's anything new or changed in the CSA proposal "given it largely replicates the 20 data fields developed in the Australian National Microcredentials Framework (2021)." The comment period is open until February 1. Via Lena Patterson.
CA AI Periodic Table Explained: Mapping LLMs, RAG & AI Agent Frameworks
Martin Keen,
2026/01/13
This is a video from IBM presenting a list of terms and concepts related to AI in the form of a periodic table. "Once you understand it you can basically decode any AI architecture." The 16 minute Khan-style video moves along at a good clip and is easy to follow. Via David Wiley. Also, in this post, Suraj Bhardwaj offers a slightly expanded version of the table and a longish (18 page PDF) explainer document. It's a good addition, though it probably defeats the purpose of the video, which is to be simple and accessible. Related: Ian O'Byrne, why we need better language for AI.
Why Learner Wallets Will Fail (And How to Make Sure They Don’t)
Mason Pashia, Beth Ardner,
Getting Smart,
2026/01/14
This is quite an interesting post about learner wallets (which I take to be what we used to call 'personal learning environments' (PLEs), now called 'learner employment records' (LERs)). It says, in essence, that developers are focusing on what industry needs by treating them as summaries of accomplishments (i.e., 'summarizing identity') rather than what individual learners need, which is a way to build accomplishments (i.e., 'building identity'). Successful apps don't sell themselves as "preparation for your professional future." Instead, "they work because they tap into something deeper: the fundamental human need to understand, craft, and articulate who we are, all while being within full control of the user. This means radical control over privacy settings, data sovereignty and more." The article builds on this idea with a scenario describing "Leo: The Storm Chaser". I could quibble about the details of this, but not with the core idea. There's some good discussion of key principles and a list of sample applications. This is probably two articles combined into one, as two separate authors are listed; I'll credit both.
Levers for Living the Portrait of a Graduate (a 7-part series)
Abby Benedetto, Kimberly Erickson,
Getting Smart,
2026/01/13
This article is the first in a series of seven looking at the concept of Portrait of a Graduate (PoG), a construct that defines "a specific set of skills that will equip their learners with what they need to thrive in their lives beyond the walls of school." The focus is on an implementation by Norwalk Public Schools (NPS) in Norwalk, Connecticut. "Based on the rollout of this first competency, Norwalk adopted a Portrait of a Graduate Competency Launch Framework that provided the system with a blueprint for the rollout of future competencies." If you look at various portraits across different systems (and I looked at a bunch) there are a lot of similarities - graduates should be 'critical thinkers', 'world citizens', etc. Honestly, they look almost exactly like the numerous definitions of "21st century skills" that were popular a couple of decades ago.
In The Beginning There Was Slop
Jim Nielsen,
2026/01/13
This is a really short post with an important message: Many blogs were slop. Many Geocities sites were slop. A lot of pop art was slop. Advertising is slop. B-movies were slop. Many pulp novels were slop. Pamphlets were slop. "You don't need AI to produce slop because slop isn't made by AI. It's made by humans - AI is just the popular tool of choice for making it right now. Slop existed long before LLMs came onto the scene." Image: Wikipedia.
AI and the Next Economy
Tim O'Reilly,
O'Reilly Media,
2026/01/13
Again, from the perspective of the role of educational institutions, consider the following: "We may be building the engines of extraordinary productivity, but we are not yet building the social machinery that will make that productivity broadly usable and broadly beneficial. We are just hoping that they somehow evolve." What do we (as educators) need to do to redefine ourselves to address this? Tim O'Reilly argues, "decentralized architectures are more innovative and more competitive than those that are centralized. Decentralization creates value; centralization captures it." As money becomes tighter, institutions are beginning to specialize and centralize. That strikes me as exactly the wrong response in a world in which centralization is pushing us increasingly toward precarity.
How to know if that job will crush your soul
Anil Dash,
2026/01/13
Education is a dance where you navigate from where you are to some sort of idea of a better opportunity. But sometimes it feels more like walking off a cliff. If you're looking at being employed after education (and I suspect there will be fewer and fewer of those) Anil Dash has recommendations to help you find the right landing place. Some of them are practical: does it pay enough, and can I continue to grow? Some of them are more global: how does it earn money, and will it do more good than harm? And others speak to working conditions, which can be the hardest to judge. In the end, you can't really know unless you know someone on the inside, and if you know someone on the inside you probably don't need this list.
Working Towards Ethical Engagement of GenAI in Higher Education: Insights and Recommendations for Post-Secondary Educators
Ki Wight, Leah Burns, Mia Portelance,
BCcampus,
2026/01/12
This isn't a bad article as these articles go, so I hope the authors forgive me for using it as a stalking horse for some of my ongoing complaints about articles dealing with ethics and AI. First of these is the near-universal focus on 'issues' and 'concerns' and the general attitude of making sure Nothing Bad Happens. That's not what ethics is; the lawyers have robbed us of our spirit. Second is the generic nature of the recommendations. You could substitute almost anything for 'AI' and get the same recommendations. Eg., "Provide clear instructions and methods for using emojis as part of the creative or communication process." Or "Integrate critical emoji and media literacy skills into course design." An article on 'AI ethics' should focus on what's unique about AI, and not just be a stand-in for all our previous thoughts about ethics.
The University, the Chatbot, and a Call for a New Mission for Higher Education
Tanya Gamby, David Kil, Rachel Koblic, Paul LeBlanc, Mihnea Moldoveanu, George Siemens,
EDUCAUSE Review,
2026/01/12
The authors write, "In an age when information is overwhelmingly abundant and AI can perform most of the cognitive heavy lifting—from computation to writing—more efficiently than any person, the transmission of knowledge is no longer the defining advantage of colleges and universities; it is their greatest vulnerability if it remains the central mission of their work." It's hard to dispute this. But is this the right response? "Higher education institutions must move beyond simply transmitting knowledge and instead prioritize holistic human development, integrating mental health, social-emotional learning, and ethical reasoning into academic structures to prepare students for meaningful lives and responsible citizenship." Just switching from one kind of content to another kind of content doesn't really respond to the crisis.
Frankie Egan: Brains Blog precis of Deflating Mental Representation
Frances Egan,
The Brains Blog,
2026/01/12
This is a precis of Deflating Mental Representation, which is unfortunately behind a paywall on MIT Press. That said, this short article gives us a good sense of what I think is really the right way to think of (what people call) mental representations. There are three major elements: first, we should not suppose there is a special relationship between the brain (or mental) state and what the representation is about; second, the same mental state may have different 'content', or none at all; and third, we attribute 'content' to mental states for purely pragmatic purposes. In other words, we can talk about mental states as though they represent the real world, but we should not make the mistake of actually asserting that. It's just a gloss (see the comment on the meanings of 'gloss'). Though credited to Dan Burnston on the blog post, I'm going to assume this is written by Frances (Frankie) Egan.
Evolving Education: A Manifesto to Reimagine Higher Education
Juliana Gonçalves, Trivik Verma, Jing Spaaij,
TU Delft,
2026/01/12
This is a book (354 page PDF) of manifestos written individually or collectively by participants involved in a workshop held at TU Delft in which academics, educators, students, and alumni reflected on the concept of "open education". The authors are writing from an engineering, architectural and urbanist perspective and produced a wide range of visions. But there are themes that frequently recur. For example, Mehmet Ali Gasseloglu and S. Zeynep Yılmaz Kılıç argue "Education should... be structured as a dynamic, reciprocal process of inquiry and engagement, one that empowers students as co-producers of knowledge." Juri Mets and Erna Engelbrecht argue that "Educational innovation must move beyond traditional, linear models of learning and assessment to prioritize deep intellectual engagement and complex problem-solving." And Wander M. van Baalen, Thijs Heijmeskamp, and Steven Flipse argue "as transdisciplinary discourse continuously teaches us, this work is not something that should, or even can be done in isolation. It is about a recognition of the interdependencies of various issues in the world." Image: Juliette Cortes-Arevalo and Camilo Benitez-Avila. Via Olga Ioannou.
I saw someone gatekeep their "SEO Blog System" behind a paywall… so I built my own (and it’s better)
AI With Sohail, Reddit,
2026/01/12
As usual with Reddit, ignore the comments and focus on the main post. As for the main post, it's less important to understand in detail than to think about the sort of thing it entails. What it is is a description of a search-engine optimized (SEO) content development and publication system to generate what amounts to an AI-authored blog. It uses an application called n8n to connect various services that handle things like keyword suggestions (for content ideas), Perplexity and Wikipedia research (for content), content authoring, image generation (with Nano Banana) and publication. According to the author, the total cost is roughly $50/month for 12-20 articles. Sure, we can (and should) criticize the pollution of the blogosphere, and it points to the importance of cultivating a curated list of sources, but what if it's generating better content than you are? What then?
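The sort of staged pipeline described can be sketched in a few lines of Python (a minimal illustration only: every function here is a hypothetical stub standing in for the external services the post names - keyword suggestion, Perplexity/Wikipedia research, LLM drafting, image generation, publication - and none of it reproduces the author's actual n8n workflow):

```python
# Sketch of a staged content pipeline. Each stage is a stub that a real
# system would replace with a call to an external service.

def suggest_keywords(topic):
    # stand-in for a keyword-suggestion service
    return [f"{topic} basics", f"{topic} tips"]

def research(keywords):
    # stand-in for Perplexity / Wikipedia lookups
    return {kw: f"notes on {kw}" for kw in keywords}

def draft_article(notes):
    # stand-in for an LLM authoring step
    return "\n".join(f"## {kw}\n{text}" for kw, text in notes.items())

def generate_image(article):
    # stand-in for an image-generation step
    return "hero.png"

def publish(article, image):
    # stand-in for the blog-publication step
    return {"article": article, "image": image, "status": "published"}

def run_pipeline(topic):
    # chain the stages: ideas -> research -> draft -> image -> publish
    keywords = suggest_keywords(topic)
    notes = research(keywords)
    article = draft_article(notes)
    image = generate_image(article)
    return publish(article, image)

result = run_pipeline("gardening")
```

The point of the sketch is only that the whole thing is a linear chain of commodity services; once each stage is an API call, running it on a schedule is trivial, which is exactly why the blogosphere-pollution worry is real.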
Under the Hood: Universal Commerce Protocol
Amit Handa, Ashish Gupta,
Google for Developers,
2026/01/12
I can see the benefit of the Universal Commerce Protocol (UCP), but the smarmy tone of this announcement turns me off. "By establishing a common language and functional primitives, UCP enables seamless commerce journeys between consumer surfaces, businesses, and payment providers." Ick. Anyhow, the intent of the standard is to support AI agents through a process of discovery, shopping cart, authentication (euphemistically called 'identity linking'), checkout (ie., credit card transactions), and order (which presumably includes fulfillment). It was developed by Shopify, Etsy, Wayfair, Target, and Walmart and endorsed by companies like Adyen, American Express, Best Buy, Flipkart, Macy's Inc, Mastercard, Stripe, The Home Depot, Visa, Zalando and many more.
Killed, Not Starved: Deliberate Neglect of the OERF a Failure of Institutional Duty to Open Education
Wayne Mackintosh,
Open Education NZ,
2026/01/12
Wayne Mackintosh talks in detail about the events that led to the shuttering of the Open Educational Resources Foundation by 'the shareholder', specifically, Te Pūkenga (New Zealand Institute of Skills and Technology). Mackintosh is remarkably restrained throughout, focusing on the facts of the matter, and not speculating at all about why Te Pūkenga would have rescinded the 'letter of comfort' or failed to renew board memberships. Though the OERF was in fact self-sustaining, it was prevented from raising money to support itself. I'm hoping the global community can find a way to reconstitute the OERF. In the meantime, as New Zealand enjoys its summer, I hope the foundation's staff can enjoy a well-earned vacation.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.