[Home] [Top] [Archives] [About] [Options]

OLWeekly

Practical Decentralization
Paul Frazee, Paul's Dev Notes, 2026/02/27


This article from Bluesky's CTO begins with the premise that "the point of decentralization is to guarantee the rights of individuals and communities on the Internet. Pulling that off is a balancing act between practicality and ideology." It is on this basis that he describes Bluesky's hybrid structure. "Atproto isn't federation. It isn't p2p. It isn't blockchains. It's a direct attempt at practical decentralization, tradeoffs and all. We sacrificed properties of the magical mesh to hit the performance needs, and we broke from the history of federation to get the guarantees we wanted." I have a lot of sentimental attachment to Mastodon and ActivityPub, but I also recognize that there's a lot of smart thinking behind Bluesky and the AT Protocol.

Web: [Direct Link] [This Post][Share]


LTHEchat 356: Algorithmic Bias of Social Media: Should Educators Teach Students How Social Media Bias Shapes Knowledge?
Olivia Kelly, #LTHEchat, 2026/02/27


"More and more students now have their worldviews shaped not by textbooks or lectures but by endlessly scrolling feeds, carefully tailored by algorithms they rarely think about," writes Olivia Kelly. It raises the question of how we should teach students about them (I'll just assume that the answer to the question in the headline is 'yes'). "Social media algorithms consistently reinforce ideological homogeneity, limit viewpoint diversity, and intensify polarization among young users." But we need to say more here than simply that students should be educated, just as we need to do more than simply block access to young people. Social media is just one part of a broader media landscape that includes all other media, online and offline, intelligent or otherwise, all of which shape a person's perspective. Just being aware of this would be a start, but knowing how to preserve one's agency and identity in such a landscape is a much taller task.

Web: [Direct Link] [This Post][Share]


Einstein & The Rise of Nuisance Tech
Marc Watkins, Rhetorica, 2026/02/27


If you haven't yet heard about Einstein, the AI-powered student assistant, don't worry. It's gone. The whole thing came and went within a week. Here's how Einstein (from companion.ai) pitched itself: "Einstein is an AI with a computer. He logs into Canvas every day, watches lectures, reads essays, writes papers, participates in discussions, and submits your homework - automatically." It was breathlessly announced by tech media. Unsurprisingly, people reacted with hostility. Futurism reported the creator as saying "We've also gotten threats from educators to take it down or we won't 'sleep well' and how we're causing the downfall of society." But wait: Michelle Kassorla tried it and freaked out. "I was only recording for 2:30 minutes when I discovered that Einstein is just a wrapper over OPENCLAW! AAAAAAAH I had just installed an agent that runs in the terminal in my son's computer - potentially exposing EVERYTHING on it to the agent." It seems it's more of a nuisance than anything. "It isn't clear if it was a hoax, a failed app launch, or piece of tech posing as performance theater," writes Marc Watkins.

Web: [Direct Link] [This Post][Share]


Git for Humans
Alice Bartlett, YouTube, 2026/02/27


I'm not sure this 20-minute video will teach you how to use Git (probably not) but it does go a long way toward making one of the world's most user-hostile pieces of software a bit more accessible to ordinary humans. What we need now is a user interface design that is as clear as this video. It's the era of vibe coding - surely someone can do something.

Web: [Direct Link] [This Post][Share]


Inkwell
Inkwell, 2026/02/27


This is a long-form authoring tool that hooks into your Mastodon account. The creator writes on Reddit, "You can sign in with your existing Mastodon account, so there's no need to create a separate one. You can also follow any Inkwell user directly from your Mastodon instance, and their journal entries will appear in your home feed like any other post." It has encountered a few issues out of the gate, but the idea is sound. It's just another example of what people are building these days in the fediverse ecosystem (likely with AI assistance).

Web: [Direct Link] [This Post][Share]


The hidden cost of letting AI make your life easier
Big Think, Shai Tubali, 2026/02/27


This is a pretty good article outlining the views of Sven Nyholm, a professor of ethics of AI at Ludwig Maximilian University of Munich. The argument is a two-parter: first, we determine what is meaningful to us, and second, we determine whether AI is taking away from this. As a philosopher myself I have resisted being drawn into 'meaning of life' arguments because they are a chimera. Nothing, not even life, is inherently meaningful (indeed, 'meaning', properly so-called, is a property of words and sentences, not people, and 'value', properly so-called, is a property of things, not people). We decide what is meaningful, that is, we decide what stands for what. Including life. If AI leaves us with nothing to do but ride the bike or wash the bowl, then these are what is meaningful. This idea that we must develop this or that skill is based mostly on the idea that we must work, that we must repair society, or at the very least, repair ourselves. I value being able to do hard things, but I see no reason why I should force this bit of personal psychology on others. If AI makes people's lives easier, I'm fine with that. I'm more than fine.

Web: [Direct Link] [This Post][Share]


LinkedIn & Anthropic killed their specialist roles. Are learning design roles next?
Philippa Hardman, Dr. Phil's Newsletter, 2026/02/26


Here's the argument in a nutshell: "The pattern is clear: AI is collapsing specialist work, expanding what one person can do, and shifting the premium from niche expertise to end-to-end capability. The same pattern hasn't yet fully emerged in learning design - but there are signals to suggest that it is on its way." The instructional design function in particular seems headed for automation, as does much of the content-generation function, particularly on-demand content generation. In my opinion, anyway.

Web: [Direct Link] [This Post][Share]


Building a Simple MCP Server in Python
Jason Brownlee, Machine Learning Mastery, 2026/02/26


OK, I haven't tried this yet, but it's in my list of things to do on my 'Stephen Follows Instructions' video series. "In this article, you will learn what Model Context Protocol (MCP) is and how to build a simple, practical task-tracker MCP server in Python using FastMCP." I can think of all kinds of things I want to do with an MCP server (and I'd like to do at least one of them before MCP servers are superseded by the next new technology).
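FastMCP handles the protocol plumbing, but the core idea behind an MCP server is small: expose a set of named tools and dispatch structured requests to them. Here's a minimal stdlib-only Python sketch of that dispatch loop for a task tracker; the tool names and request shape are illustrative only, not the actual MCP wire format or the FastMCP API from the article.

```python
import json

# A hypothetical task-tracker "server": a registry of named tools
# plus a dispatcher that routes JSON requests to them. Libraries like
# FastMCP wrap this same pattern behind decorators and a real transport.

TASKS = []

def add_task(title: str) -> dict:
    """Tool: add a task to the tracker and return it."""
    task = {"id": len(TASKS) + 1, "title": title, "done": False}
    TASKS.append(task)
    return task

def list_tasks() -> list:
    """Tool: return all tasks."""
    return TASKS

TOOLS = {"add_task": add_task, "list_tasks": list_tasks}

def handle(request_json: str) -> str:
    """Dispatch one request of the (illustrative) form {"tool": ..., "args": {...}}."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](**req.get("args", {}))
    return json.dumps({"result": result})

print(handle('{"tool": "add_task", "args": {"title": "read the MCP spec"}}'))
print(handle('{"tool": "list_tasks"}'))
```

The real protocol adds capability negotiation, tool schemas, and a JSON-RPC transport on top, which is exactly the complexity FastMCP is there to hide.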

Web: [Direct Link] [This Post][Share]


Beyond Zoom: Why We Took Control of Our Video Conferencing with Jitsi Meet
Ian O'Byrne, 2026/02/26


Sure, you could use Microsoft Teams, if you enjoy complexity and pain. Or you could use Zoom, which works well but, as Ian O'Byrne says, is "a rented space where governance is centralized, and participants have no meaningful choice." Or you could use Jitsi. "Jitsi Meet is an open-source video conferencing platform. Think of it as the community-owned cousin of Zoom or Google Meet. It runs directly in the browser, requires no participant accounts, and supports encrypted video conferencing out of the box." Yes, you're hosting it yourself, but a service like "Reclaim Cloud offers a one-click Jitsi installer that handles most of the complexity while still allowing full control under the hood."

Web: [Direct Link] [This Post][Share]


Blogs Are Back - Discover and Follow Independent Blogs
Blogs Are Back, 2026/02/25


This is yet another RSS reader (YARR) that I'm mentioning here for a few reasons. One is the beautiful design, which shows what people can do these days. Another is the way feed data is stored locally (with an option to sync), which is how I built my own (comparatively very ugly) reader. And a third is the use of Turbopack, "an incremental bundler optimized for JavaScript and TypeScript, written in Rust, and built into Next.js", which should serve as a reminder of how much web infrastructure is available to people now that they can use AI without spending years (literally!) learning how to make it work.

Web: [Direct Link] [This Post][Share]


A Complete Guide to Bookmarklets
CSS-Tricks, 2026/02/25


I use a few bookmarklets I have written myself as part of my newsletter workflow. One sends items to Pinboard to read later. Another (which I am using as I type this) provides a direct way for me to write these posts right from the article I'm reading in my browser. This article is, as the title suggests, a guide to bookmarklets, focusing on those that change the style of a web page by altering its cascading style sheet (CSS). There are also some links to cool bookmarklets, like this set of web development bookmarklets.
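A bookmarklet is just JavaScript packed into a javascript: URL. As a sketch of the idea, here's one way to generate such a link programmatically; the helper name and the choice of characters left unencoded are my own assumptions, not anything from the article.

```python
from urllib.parse import quote

def make_bookmarklet(js_source: str) -> str:
    """Turn JavaScript source into a javascript: URL suitable for a bookmark."""
    # wrap in an IIFE so the bookmarklet doesn't leak variables into the page
    wrapped = "(function(){" + js_source + "})();"
    # percent-encode everything except characters commonly safe in a javascript: URL
    return "javascript:" + quote(wrapped, safe="(){};!'*-._~")

# example: a bookmarklet that shows the current page's title
link = make_bookmarklet("alert(document.title);")
print(link)  # javascript:(function(){alert(document.title);})();
```

Paste the resulting string into a bookmark's URL field and clicking it runs the code on whatever page you're viewing.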

Web: [Direct Link] [This Post][Share]


New Coursera report shows that 95% of students and educators are using AI on campus - but only a quarter of educators worldwide feel prepared to use it effectively
Jack Moran, Coursera Blog, 2026/02/25


I've mentioned this before but as things get increasingly fluid using 'students' as your survey population may become increasingly misleading. I think that's partially the case here. In addition to the generally positive remarks about AI (95% are using AI tools) we read, "students report that AI is enhancing their learning - rather than replacing traditional study methods." That's exactly what you would expect to hear from students, Properly So-Called. They are uniquely those people who have been able to access and succeed in higher education institutions. But what about the rest of the world? The report (30 page PDF) is behind a spamwall but you can access it without filling out the form here.

Web: [Direct Link] [This Post][Share]


Bridging technical and emotional skill gaps: AI-enhanced adaptive learning and emotional intelligence in project management education
Kristen Karmazinuk, Jim Helik, Research in Learning Technology, 2026/02/25


According to this article (20 page PDF), "AI-enhanced features, such as real-time feedback, simulations, and reflective prompts, enhance understanding of project management concepts while fostering EI capacities such as empathy, collaboration, and conflict resolution." The mixed methods study is based on an online survey with 152 responses collected. Some interesting observations about personalization: "Meaningful personalisation depended on user input. While adaptive tools can tailor content, their effectiveness hinges on learner engagement. As one participant observed, 'Personalization? It's about how you prompt it. If you're vague, you get vague'... Participants noted that precise, iterative input was essential for meaningful, context-sensitive feedback."

Web: [Direct Link] [This Post][Share]


The Future of Computational Social Science
David Lazer, Katherine Ognyanova, 2026/02/25


This is from a couple of years ago and published behind a paywall so I didn't get to see it until a search unearthed an open access PDF. The concept has been around for a while, at least since Lazer's 2009 article, and probably longer. It describes social science in terms of massive data sets, agent-based simulations, digital records of human behavior, connections and networks, text and data analysis, new modalities (audio, video, images, GIS, sensors), and archival data. It is understood theoretically as a process of mapping signals and known constructs to each other, though I think there's room for new approaches here. Good article, clearly written and accessible. 31 page PDF. See Katherine Ognyanova's website for a lot more related work. Also see Lazer Lab, "based on the idea that how people and organizations are connected together is critical to understanding the functioning, success and failure of actors and systems."

Web: [Direct Link] [This Post][Share]


Taking action against AI harms
Anil Dash, 2026/02/24


A bit of a theme has emerged in today's newsletter, and it has to do with the ablative effects of AI (Anil Dash does not write about this, but I'm getting to it). In statistics, there's this idea of 'regression toward the mean', which in writing becomes 'regression toward the bland', or as Claudio Nastruzzi terms it, semantic ablation. Well, that's not me. But... why? It wasn't simply protection from AI, because when I was growing up, AI wasn't a thing. But what was a thing was television. Except - I hardly watched it at all either as a child or a youth or even through most of my years of university. I would put on some headphones and read or write or code. I was a very serious and studious young man, and very socially inept. Still am. But I also have insights into the world (that I think are) worth having that weren't ablated by relentless commercial media. But now I read Anil Dash (as I finally talk about the article) describing how to protect children from the harms of AI. And - fair enough. Protect them from exploitation and manipulation and regression to the bland. Keep them off X/Twitter (and Meta, and TikTok). Stop schools from using LLMs (not just ChatGPT). And - let's see if you can do this for them and for yourself - turn off the television.

Web: [Direct Link] [This Post][Share]


Cannes Declaration on the Sovereignty of the Mind
Dataethics, Dataetisk Tænkehandletank, 2026/02/24


This article describes The Cannes Declaration on the Sovereignty of the Mind, which was signed by a coalition of experts at the World AI Cannes Festival. 3 page PDF. So OK. It reads, "We ask for the conditions under which innovation can remain compatible with democracy and fundamental freedoms, including a firm boundary against systems designed or used to manipulate thought at scale or to evade human thought and reflection." Sure, their focus is on the potential uses of AI, and I get that. But it strikes me that commercial media and advertising (also synonymous with Cannes) have been responsible for large-scale manipulations of thought and beliefs. I have often said "advertising is the original fake news." I should also say "advertising is the original hallucination." But these experts (pictured) don't see the world the way I do.

Web: [Direct Link] [This Post][Share]


Attention is All You Need to Bankrupt a University
Hollis Robbins, Anecdotal Value, 2026/02/24


"A transformer," writes Hollis Robbins, "performs a four-step operation: it takes an input, selects which features of the input to attend to, weights those features based on patterns learned from training data, and generates the most probable output." Aggregate, remix, repurpose, feed forward. Doesn't need to be 'most probable'; it's usually 'most relevant' or 'most salient'. But I digress. "Since 2000," continues Robbins, "American universities built an enormous infrastructure around a mode of instruction that performs the same kind of operation: converting particulars into categories and generating outputs from learned patterns." That's not exactly the same, but again I digress. The differences aren't really significant. Then finally: "The university's scaling operation succeeded. The millions of graduates carried the four-step operation into every American institution... (but) The same formal property that allowed one compliance strategy to work across every discipline allows one machine to perform the operation across every institutional context." But here's the problem: "The university faces a market problem. Does anyone need to pay tuition to learn an operation that a machine performs competently?... I can't see any other future but collapse."
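Robbins's four-step description maps onto scaled dot-product attention fairly directly. Here's a toy stdlib-only Python sketch of that operation; the shapes and values are made up for illustration, and this omits the learned projections and feed-forward layers of a real transformer.

```python
import math

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    # steps 1-2: take the input (query) and score which features (keys) to attend to
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(len(query))
              for key in keys]
    # step 3: turn the scores into weights (learned patterns stand behind the keys)
    weights = softmax(scores)
    # step 4: generate the output as the weight-blended values
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights

# toy example: one query attending over three key/value pairs of dimension 2
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out, w = attention([1.0, 0.0], keys, values)
assert abs(sum(w) - 1.0) < 1e-9  # attention weights sum to 1
```

The query here matches the first and third keys equally and the second not at all, so the output leans toward their values: "selecting which features to attend to" is nothing more exotic than this weighted blend.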

Web: [Direct Link] [This Post][Share]


Semantic ablation: Why AI writing is boring and dangerous
Claudio Nastruzzi, The Register, 2026/02/24


This is definitely part of the reason why I still prefer to write my own text (recent experiment notwithstanding): "When an author uses AI for 'polishing' a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters - the precise points where unique insights and 'blood' reside - and systematically replaces them with the most probable, generic token sequences." The main thing I bring to my writing is my unique insights and point of view. I just see the world differently from most people. Having AI smooth that rough edge removes the value from my writing. (TIL ablation is also a type of medical procedure; I know the term from engineering applications or as a type of armour).

Web: [Direct Link] [This Post][Share]


Preserving The Web Is Not The Problem. Losing It Is.
Mark Graham, TechDirt, 2026/02/24


This is just a short opinion post from the director of the Wayback Machine at the Internet Archive but the story it references is a significant development: "some major news organizations - including The Guardian, The New York Times, and Reddit - are limiting or blocking access to their content in the Internet Archive's Wayback Machine." They're doing it of course because they can't collect money from AI companies if those companies think they can just get the content from Wayback. Mark Graham argues that "the Wayback Machine is built for human readers. We use rate limiting, filtering, and monitoring to prevent abusive access." But of course this might not always be the case, so the content companies are protecting their turf. But in the long term, protecting their turf may cause more harm than good: "significant chunks of our journalistic record and historical cultural context simply... disappear."

Web: [Direct Link] [This Post][Share]


How do students regulate their learning with a genAI chatbot?
Lyn Lim, Maria Bannert, Learning Letters, 2026/02/24


Here's the set-up: "Thirty university students were tasked to read texts and write an essay within 45 minutes." Here's the pay-off: "Chatbot users achieved higher essay scores than non-users. Chatbot interaction frequencies correlated positively with high cognitive activities." How is this possible? This paper (11 page PDF) explores the question, examining the usual trade-off between cognitive offloading and pedagogically sound design. "The findings highlight the need to support students' learning regulation skills to mitigate their outsourcing of critical processes while using genAI tools." In other words, there is a difference between an AI application that will write an essay for you, and an AI application that will teach you the content so you can write an essay. I think we knew that, though: it's why we discourage parents from completing their children's homework or projects. The real issue is, under what circumstances can the student be motivated to turn down the parent's (or AI's) completion of their work even if it is freely offered?

Web: [Direct Link] [This Post][Share]


A.I. Isn't People
Rusty Foster, Today in Tabs, 2026/02/24


This article begins with the question, "How many Reddit posts does it take to learn to read?" The answer, "all of them," is intended to show the difference between human learning and AI learning, the point being that '200 lines of Python code does not understand anything'. It's a bizarre supposition, to be sure. But in response I invite the reader to consider the same questions asked about humans. Do the chemicals and interactions in a human neuron 'understand' anything? If given only Reddit posts, would it not take a lot of posts to learn how to read? That's the problem with these human-AI comparisons: we assume these almost-magical human abilities that in reality stem from (a) a wider range of experience from all our senses, and (b) a lot of interconnected neurons. The proposal that human understanding is fundamentally different does not follow from arguments like this. Yet people keep making them.

Web: [Direct Link] [This Post][Share]


Unpacking the Ethics of AI in Education
Geoff Cain, Simon Says: Educate!, Spotify, 2026/02/23


I am interviewed by Geoff Cain for Episode #61 of Simon Says: Educate!, "Unpacking the Ethics of AI in Education." See also this summary and useful slide presentation from Ronald Lethcoe. "When a state board, accreditor, or institution publishes an AI ethics framework, that document is not a neutral distillation of shared human values. It reflects who was in the room, what risks they prioritized, and which political and cultural assumptions they brought with them."

Web: [Direct Link] [This Post][Share]


What We Must Do About AI In Education
Eamon Costello, GenAI:N3, 2026/02/23


Eamon Costello writes, "The USA is AI's primary regulator and ideological driver. Its dominant cultural values will be increasingly embedded in it." I'm not sure that this is true, but let's assume it is. But Costello's observation that the existing "dominant cultural values" are so toxic that they should never be allowed to infuse AI strikes me as a very good argument for not continuing to educate people the way we have in the past. Look how that turned out! I mean, how did so many people acquire the values and views that they did? This is not just a U.S. problem, either; we can look to many other societies where the national cultural values have gone wildly astray through no fault of AI (or even of educational technology in general). Where learning and development are concerned, I am personally far more concerned about advertising and mass media than I am about AI.

Web: [Direct Link] [This Post][Share]


Perfectly Imperfect
Alan Levine, CogDogBlog, 2026/02/23


In retrospect, it wasn't worth watching the hockey game. But the question of what it is worth doing as a writer or an academic is an interesting and complex debate. Should I, for example, have chosen to write my article by hand instead of with AI? What more would I have achieved? This is an old question. For example, some of the most prolific authors use speech instead of a typewriter and then automated transcription to create the article. I've tried this with my talks, but never really made it work. What does it mean to be a scholar in the age of AI? I saw an article on LinkedIn (since lost because the algorithm bounced it out of view before I could capture the link) saying humans should always form the research question, do the literature review, analyze the evidence and draw conclusions. I wanted to ask: why these things? I don't think we have a good answer yet. We pretend it's because AI is flawed, but then we get arguments like "I typo [often] therefore I am [human]." I use spell-check in OLDaily because I used to get complaints about my spelling. Cory Doctorow reports using Ollama, an open-source LLM runner, as a typo-catcher. This led to a denouement from Jürgen Geuter on the ethics of using AI saying Doctorow "tries to make it (the criticism) look unreasonable by making it just a conversation about tech without regarding how that technology affects the world and the people in it."

Web: [Direct Link] [This Post][Share]


Inclusivity, Ethics, and Accessibility for Learners with Disabilities
Munir Moosa Sadruddin, Sehrish Sachwani, MERLOT, 2026/02/24


What I like about this open access book (131 page PDF) is that it offers a variety of voices from a multi-national perspective addressing issues related to ethics and accessibility from different points of view. There are a couple of technology-specific articles, including a contribution from Silvester Krčméry on transforming inclusion in the age of AI, and an article from Pankhuri Bajpai on subjective well-being in digital education. I also especially appreciated Sehrish Sachwani's article on nervous system regulation as a foundational condition for learning. "A classroom that feels lively, engaging, neutral, or stimulating to one learner may feel overwhelming, exhausting, or threatening to another." I can feel this. The book is listed on Merlot and available on Google Docs, though I found it easiest to read a downloaded version.

Web: [Direct Link] [This Post][Share]


I needed a scheduling tool that respects privacy. So I built one.
Doug Belshaw, Open Thinkering, 2026/02/23


"Maybe you've been thinking 'someone should make a tool that does X,'" says Doug Belshaw. "Maybe that person is you?" At a certain point I may stop running these instances of "I built x tool using AI" but I'll keep posting so long as it remains fun. And it is fun, because it feels like that explosion of cool we saw when the web first reached a large audience in the mid to late 1990s. What we could do then never really went away, not even after the dot com crash of the early 2000s, because the basic tools were in everyone's hands. That's also true today; AI is just math and data, and there's enough of both out there that what we're doing today won't disappear. The real issue isn't "AI yes or no". It's how we can prevent commercial interests from degrading it the way they degraded the web.

Web: [Direct Link] [This Post][Share]


No, you couldn't do this before an LLM because if you could it would have been done already
Mike Caulfield, 2026/02/23


Another example of the sort of thing being done with the tools today. "I spent all day building a Claude Code skill to remediate handwritten math notes in Canvas courses," writes Mike Caulfield. Again, we need to be clear about how we're evaluating 'success' here. "Will it make me rich? No... I won't be rich, but we will be able to build better and more accessible systems because of this, and that is a very good thing." It is a very good thing. 

Web: [Direct Link] [This Post][Share]


Now That's a Headline
Mark Hurst, Bluesky Social, 2026/02/23


The headline, in a paywalled article in Fortune (though you can read it here), is "The U.S. spent $30 billion to ditch textbooks for laptops and tablets: The result is the first generation less cognitively capable than their parents." No surprise, the tech sceptics love it, and it has been heavily promoted in social media. But even the article admits, "This is not a debate about rejecting technology. It is a question of aligning educational tools with how human learning actually works." Anyhow, I had ChatGPT write an article refuting the inference stated in the headline. Specifically, "Laptops do not inherently degrade cognition or learning. Poorly designed instructional systems using laptops do." The same, by the way, will be true of AI. (That image in Fortune, by the way, is a masterpiece of propaganda.)

Web: [Direct Link] [This Post][Share]


Man accidentally gains control of 7,000 robot vacuums
Mack DeGeurin, Popular Science, 2026/02/23


This is more funny than anything else, but it does have a security lesson in there somewhere. Sammy Azdoufal just wanted to steer his DJI Romo with a gaming controller. "While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries."

Web: [Direct Link] [This Post][Share]


How to Write a Good Spec for AI Agents
Addy Osmani, 2026/02/20


This is a detailed description of how to approach writing a specification for AI agents. Long story short: don't try to do it all in one go. Start with a high-level description of what you want done, then work with the AI to refine it into something comprehensive and robust. "Simply throwing a massive spec at an AI agent doesn't work - context window limits and the model's 'attention budget' get in the way. The key is to write smart specs: documents that guide the agent clearly, stay within practical context sizes, and evolve with the project."

Web: [Direct Link] [This Post][Share]


Using Music to Teach Democracy
Kristina Piskur, Teach Magazine, 2026/02/20


This is interesting. "MELODY (Music Education for Learning Opportunities and Development of Youngsters) is an Erasmus+ project co-funded by the European Union with a mission that is both innovative and timely: to use the universal language of music as a powerful educational tool to enhance children's participation in democratic life." There's a project Handbook of Best Practices and a toolkit available on the project website. Related: How to solve the tenor shortage, via Chris Corrigan.

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.