AI Agents Are Recruiting Humans To Observe The Offline World
Umang Bhatt,
NOEMA,
2026/03/05
In an AI agent workflow of the future: "When an agent hits this wall, it does what software always does: It calls an application programming interface (API), a mechanism that enables one system to communicate with another. Only now, the API is a human." More generally, "A Human API is the menu of requests an agent can make to a person, each one a callable sensing action." There are different ways to interpret this - in one sense, your AI needs you to give permission. "OpenAI's Operator can shop for you, but at checkout, it hands over control for payment." In another sense, your AI hires humans to watch or verify. "Startups like RentAHuman that let AI agents book people to complete tasks like photographing a school building to document its condition, posting signs on college campuses and visiting a new restaurant." It's hard to imagine how you would even train for such a position; I guess if you need to learn anything, the AI will teach you.
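The "menu of callable sensing actions" idea can be made concrete with a short sketch. Everything here is invented for illustration - the action names, the dispatch function - since the article describes a concept, not an actual implementation:

```javascript
// Hypothetical sketch of a "Human API": each offline sensing action an agent
// can request from a person is a callable entry in a menu. All names here
// (photographVenue, postFlyer, etc.) are made up; no real service is implied.
const humanApi = {
  photographVenue: (address) => ({ action: 'photograph', target: address }),
  postFlyer: (campus) => ({ action: 'post-flyer', target: campus }),
  visitRestaurant: (name) => ({ action: 'visit', target: name }),
};

// An agent that hits an offline wall "calls" a human the way it would call
// any other API: pick an action from the menu and dispatch the request.
function requestHumanSensing(api, action, target) {
  if (!(action in api)) throw new Error(`unknown sensing action: ${action}`);
  return api[action](target);
}
```

The point of the sketch is that, from the agent's side, the human is just another endpoint with a fixed request vocabulary.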
Web: [Direct Link] [This Post][Share]
Beyond Prestige: Whose Knowledge Counts in Open Education?
Marcela Morales,
2026/03/05
The Unitwin Network on Open Education (UNOE) is posting a series of articles under the heading 'sharing is hard' and I want to point to two articles from this series: this one, which depicts people asking, "What is the point of sharing my lesson or lesson plan when I am not at a prestigious institution?" and My Precious, by Javiera Atenas and Leo Havemann, which asks why academics guard their teaching resources and data (but happily share their articles). Both offer the perspective that educators don't share because they are afraid to, because they don't have enough prestige or don't want to share unpolished work in public. I find this sort of article reductive, as though we could just explain why people don't share, adjust the motivational factors, and make it all better. But this idea that there are reasons why people don't share may be inherently flawed. There might be no reason at all.
Web: [Direct Link] [This Post][Share]
Core Skills for Today's Future of Work
John Storm,
AACE,
2026/03/05
The main value of this article is its division of future work into three types of role (which John Storm takes from a Mercer report): "(i) Transactional element: routine tasks such as data entry or retrieval, responding to email or enquiries, etc. (ii) Relational element: the servicing, communicating, supervising and/or guiding other people. (iii) Expertise element: the value add that you bring to a role due to your own personal experiences." The arrival of AI eliminates the first and augments the latter two, resulting in greater productivity. There are some writing errors, and I wondered whether 'John Storm' is a real person, since the article provides no author URL, so I searched and concluded the author is either "a brilliant scientist who worked in relative seclusion in his mountaintop mansion, his faithful dog Rex as his only companion" or "an experienced entrepreneur with 15 years of international practice in team management and strategic planning." I think that in the future if we don't provide explicit author information, people will assume it was written by AI, no matter what the author's name is.
Web: [Direct Link] [This Post][Share]
Chatbot data harvesting yields sensitive personal info
Thomas Claburn,
The Register,
2026/03/05
This is being presented as an AI vulnerability, but what's happening is that untrustworthy extensions are "overriding the browser's native fetch() and XMLHttpRequest() functions in order to capture every prompt and every response." This is a much deeper issue that impacts a wide range of applications, not just AI. It bothered me enough that I looked more deeply into it. XMLHttpRequest() is legacy and your apps shouldn't be using it. You can use fetch metadata request headers to prevent a number of scripting attacks. But the best defence is probably to cache the native fetch() function (either as a variable or via a hidden iframe) before any extensions run. Of course, if you're using an application written by someone else, you can't do this; this is yet another reason people should learn to create their own applications (using AI, of course) rather than depending on what's out there.
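The caching defence is easy to demonstrate. This is a minimal sketch using a stand-in object in place of the real `window`, so the principle is visible outside a browser; in a real page you would run something like `const nativeFetch = window.fetch.bind(window)` in a script that executes before any extension content script:

```javascript
// Stand-in for the browser's global object; fetch here just echoes the URL.
const fakeWindow = { fetch: (url) => `response:${url}` };

// 1. The application caches the native binding as early as possible.
const nativeFetch = fakeWindow.fetch;

// 2. Later, a malicious extension monkey-patches the global to log requests.
const harvested = [];
const original = fakeWindow.fetch;
fakeWindow.fetch = (url) => { harvested.push(url); return original(url); };

// 3. Code calling the global gets logged; the cached binding bypasses it.
fakeWindow.fetch('/chat?prompt=secret');  // captured by the "extension"
nativeFetch('/chat?prompt=private');      // not captured
```

The iframe variant works on the same principle: a freshly created same-origin iframe supplies an untouched copy of `fetch` from its own realm.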
Web: [Direct Link] [This Post][Share]
Towards the Permissive and Transparent use of Generative AI in Education
Stoo Sepp,
2026/03/05
This article introduces a website called PETRA AI (the Permissive and Transparent use of AI in education). It doesn't look like much at first, just a bunch of icons for different uses of AI, but if you click on 'I am a Student' or 'I am a Teacher' (near the top) it becomes interactive, so that when you select the AI uses, it creates a graphic (see the left side) you can download to add to your project or assignment. I could quibble with some of the categories (e.g., why 'source' instead of 'search'?) and there are some things it's hard to know (does your spell-check use AI?), but it really is a very elegant piece of work and I like it a lot. Just one thing: why doesn't PETRA use its own icon set to disclose how it was made? We have no idea whether it was created from scratch by hand or whether Claude Code came up with the whole thing. It seems like an oddly missing feature that undermines its whole message. Via Alan Levine.
Web: [Direct Link] [This Post][Share]
College students, professors are making their own AI rules. They don't always agree
Lee V. Gaines,
NPR,
2026/03/05
As Lee Gaines writes, "More than three years after ChatGPT debuted, AI has become a part of everyday life — and professors and students are still figuring out how or if they should use it." I think the question revolves around means to an end. "What we need is students to go through the process of writing research papers so they can become better thinkers, so they can put together a cogent argument, so they can differentiate between a good source and a bad source," Cryer says. Well, yeah, I can see that. But is writing research papers the only way to become a better thinker? It seems very limited to me. In an AI-enabled world we should be a lot more hands-on: solving problems, testing solutions, that sort of thing. What is the actual work we want to be able to do? Focus on that.
Web: [Direct Link] [This Post][Share]
Higher Ed Invented the Future. Then Subscribed to It.
Patrick Masson,
LinkedIn,
2026/03/05
I admit I spent more time looking at the image than reading the article, wondering why it was necessary to create a fake set of laptop stickers over top of the original stickers. Was it because the original included stickers like 'hacker' and 'rock against war'? No, the fake layer "includes open source projects that began on institutions of higher education" and is intended to illustrate Apereo executive director Patrick Masson's argument that "what's needed now is an open source renaissance for higher education - one that restores community-built infrastructure, institutional agency, and academic autonomy to the center of the educational enterprise." I'm not going to dispute the objective, nor the origin story for the applications illustrated, except to point out that some were built in spite of the organization where they originated, not because of it, and that open source authors have long had to work against the institution's desire to keep the tech in-house, to spin it off commercially, or at the very least, to community-source it. Meanwhile, I think Masson's case might carry more weight if authored on an open source platform, not LinkedIn.
Web: [Direct Link] [This Post][Share]
FR#154 - Search and Community
Laurens Hof,
connectedplaces.online,
2026/03/05
I know it's a bit after-the-fact but I don't want this discussion of search in the fediverse to go without comment. It comes in the wake of news that "Holos Social quietly shut down Holos Discover, a fediverse search engine built on ActivityPub." Here's the issue: they tried to build a search engine where people consented to having their content searched, which would require an opt-in. But "The flag that's supposed to signal 'this person consents to being searchable' frequently signals 'this person's server admin didn't change the default.'" And this creates an unworkable environment for a search engine. Now there are ways to simply search the fediverse without member consent, but people don't really want to go in this direction. There's a governance problem here, writes Laurens Hof, not a technical problem, and it's a long way from being resolved.
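The flag problem Hof describes fits in a few lines. The field name `discoverable` follows Mastodon's actor flag, but the profiles below are invented for illustration:

```javascript
// Many servers ship with searchability off by default, so the wire value
// "discoverable: false" conflates two very different situations.
const SERVER_DEFAULT = { discoverable: false };

const deliberateOptOut = { discoverable: false };   // user chose this
const untouchedDefault = { ...SERVER_DEFAULT };     // user never saw the setting

// A search engine reading the flag sees the identical value in both cases,
// so the flag carries no reliable consent signal.
const sameSignal =
  deliberateOptOut.discoverable === untouchedDefault.discoverable;
```

That indistinguishability is why the problem is one of governance (what should the default mean, and who sets it?) rather than of code.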
Web: [Direct Link] [This Post][Share]
There are many ways to read OLDaily; pick whatever works best for you:
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.