
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

OpenAI Academy
OpenAI Academy, 2025/03/28



OpenAI has launched something called the OpenAI Academy, which Siya Raj Purohit writes is "our community-first approach to making AI literacy accessible, inclusive, and global." Sign-up is free and open to all (within, I assume, the context of U.S. restrictions on people from certain countries) though you don't appear to need to register to access the content, which consists of fairly basic introductory videos. Some events are also on offer. OpenAI may be good at AI, but I don't think they've mastered online learning yet.

Web: [Direct Link] [This Post][Share]


Learning Design in the Era of Agentic AI
Philippa Hardman, Dr Phil's Newsletter, Powered by DOMS AI, 2025/03/28



According to Philippa Hardman, "The rapid emergence of agentic AI has forced the learning and development field to confront a long-standing truth: most asynchronous online learning is not well designed." This article identifies three major shifts that will be required to adapt: first, from organizational to learner-centered learning goals; second, from passive to active information consumption; and third, from measuring learning activity (like clicks) to measuring learning outcomes. It's worth noting that all of these were needed long before we got agentic AI. It's only now that we're seeing how urgent they are.

Web: [Direct Link] [This Post][Share]


AI is transforming university teaching, but are we ready for it? - University Affairs
Loleen Berdahl, University Affairs, 2025/03/28



The questions here are what's interesting and significant: "Are faculty going to be leaders – or, barring that, engaged participants – in determining how AI shapes the future of higher education? Or will they leave this to those outside the sector?" Also, "AI is already changing how we teach... But will it also change what we teach?" 

Web: [Direct Link] [This Post][Share]


Model context protocol (MCP) - OpenAI Agents SDK
OpenAI, 2025/03/28



The model context protocol (MCP) was introduced last November by Anthropic and has spread across the large language model community. This page is OpenAI's documentation describing how it too now supports MCP, which essentially cements the importance of the protocol for developers. The protocol allows an LLM to access a 'context' in the form of information and services from local or remote systems; an MCP server could allow ChatGPT to access a database, for example. Image: Norah Sakal.
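The documentation above is aimed at developers, but the basic shape of the protocol is easy to see in the messages themselves. MCP is built on JSON-RPC 2.0, and the method names below ('tools/list', 'tools/call') follow the MCP specification; the 'query_db' tool and its SQL argument are invented here purely for illustration:

```python
import json

# A client first asks an MCP server what tools it offers.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# It can then invoke one of those tools on the model's behalf --
# e.g. letting the LLM run a database query through the server.
# ("query_db" and its arguments are hypothetical examples.)
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_db",
        "arguments": {"sql": "SELECT count(*) FROM students"},
    },
}

# Messages travel over stdio or HTTP as serialized JSON.
print(json.dumps(call_request, indent=2))
```

The point of the design is that any LLM client speaking these messages can use any MCP server, which is why OpenAI's adoption of Anthropic's protocol matters.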

Web: [Direct Link] [This Post][Share]


Artificial neurons organize themselves
Abdullah Makkeh, Marcel Graetz, Andreas C. Schneider, Michael Wibral, IDW, 2025/03/28



Interesting discussion of new types of artificial neural networks, which would build on the simpler networks in use today to produce such things as ChatGPT. "The new artificial neurons, known as infomorphic neurons, are capable of learning independently and self-organized among their neighboring neurons. This means that the smallest unit in the network has to be controlled no longer from the outside, but decides itself which input is relevant and which is not." Here's the full paper on PNAS. The neurons in these networks pursue more narrowly defined learning objectives, which is what makes the resulting networks "infomorphic."

Web: [Direct Link] [This Post][Share]


On the Biology of a Large Language Model
Jack Lindsey, et al., Transformer Circuits Thread, 2025/03/28



You may have seen a recent MIT Technology Review article (archive) on 'circuit tracing'. This (and a companion paper) is the actual research, and unlike the MIT article, it's not behind a paywall. The idea is to reverse engineer the model created and used by Anthropic to find collections of 'circuits' that correspond to specific functions or processes. You might think of it as cognitive psychology for machines. To be clear: these 'circuits' aren't programmed into the model; they emerge as a result of the training. And they're not just a specific set of neural connections. You have to extract or 'trace' the circuit because "model neurons are often polysemantic - representing a mixture of many unrelated concepts," as illustrated in the image.

Web: [Direct Link] [This Post][Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.