
OLWeekly

OpenAI Academy
OpenAI Academy, 2025/03/28


Icon

OpenAI has launched something called the OpenAI Academy, which Siya Raj Purohit writes is "our community-first approach to making AI literacy accessible, inclusive, and global." Sign-up is free and open to all (within, I assume, the context of U.S. restrictions on people from certain countries) though you don't appear to need to register to access the content, which consists of fairly basic introductory videos. Some events are also on offer. OpenAI may be good at AI, but I don't think they've mastered online learning yet.

Web: [Direct Link] [This Post][Share]


Learning Design in the Era of Agentic AI
Philippa Hardman, Dr Phil's Newsletter, Powered by DOMS AI, 2025/03/28


Icon

According to Philippa Hardman, "The rapid emergence of agentic AI has forced the learning and development field to confront a long-standing truth: most asynchronous online learning is not well designed." This article identifies three major shifts that will be required to adapt: first, from organizational to learner-centered learning goals; second, from passive to active information consumption; and third, from measuring learning activity (like clicks) to learning outcomes. It's worth noting that all of these were needed long before we got agentic AI. It's only now that we're seeing how urgent they are.

Web: [Direct Link] [This Post][Share]


AI is transforming university teaching, but are we ready for it? - University Affairs
Loleen Berdahl, University Affairs, 2025/03/28


Icon

The questions here are what's interesting and significant: "Are faculty going to be leaders – or, barring that, engaged participants – in determining how AI shapes the future of higher education? Or will they leave this to those outside the sector?" Also, "AI is already changing how we teach... But will it also change what we teach?" 

Web: [Direct Link] [This Post][Share]


Model context protocol (MCP) - OpenAI Agents SDK
OpenAI, 2025/03/28


Icon

The model context protocol (MCP) was introduced last November by Anthropic and has spread across the large language model community. This page is OpenAI's documentation describing how it too now supports MCP, which essentially cements the importance of the protocol for developers. What it does is allow an LLM to access a 'context' in the form of information and services from local or remote systems; an MCP server would allow chatGPT to access a database, for example. Image: Norah Sakal.
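Under the hood, MCP messages use JSON-RPC 2.0 framing; a client asks a server to run a tool with a 'tools/call' request. Here's a minimal sketch of what such a request looks like - the 'query_database' tool name and its argument are hypothetical, just to illustrate the database example above:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request (MCP uses JSON-RPC 2.0 framing)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool exposed by an MCP server wrapping a database:
msg = mcp_tool_call(1, "query_database", {"sql": "SELECT count(*) FROM users"})
print(msg)
```

A real client would send this over the server's transport (stdio or HTTP) and would first negotiate capabilities with an 'initialize' request; the official SDKs handle that handshake for you.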

Web: [Direct Link] [This Post][Share]


Artificial neurons organize themselves
Abdullah Makkeh, Marcel Graetz, Andreas C. Schneider, Michael Wibral, IDW, 2025/03/28


Icon

Interesting discussion of new types of artificial neural networks, which would build on the simpler networks in use today to produce such things as chatGPT. "The new artificial neurons, known as infomorphic neurons, are capable of learning independently and self-organized among their neighboring neurons. This means that the smallest unit in the network has to be controlled no longer from the outside, but decides itself which input is relevant and which is not." Here's the full paper on PNAS. The neurons in these new networks pursue more specifically defined learning objectives, resulting in "infomorphic" neural networks.

Web: [Direct Link] [This Post][Share]


On the Biology of a Large Language Model
Jack Lindsey, et al., Transformer Circuits Thread, 2025/03/28


Icon

You may have seen a recent MIT Technology Review article (archive) on 'circuit tracing'. This (and a companion paper) is the actual research, and unlike the MIT article, it's not behind a paywall. The idea is to reverse engineer the model created and used by Anthropic to find collections of 'circuits' that correspond to specific functions or processes. You might think of it as cognitive psychology for machines. To be clear: these 'circuits' aren't programmed into the model; they emerge as a result of the training. And they're not just a specific set of neural connections. You have to extract or 'trace' the circuit because "model neurons are often polysemantic - representing a mixture of many unrelated concepts," as illustrated in the image.

Web: [Direct Link] [This Post][Share]


AI for Educators
University of Sydney, 2025/03/27


Icon

From the website: "This site is a resource designed for educators by educators, created to support the productive and responsible use of generative artificial intelligence in teaching and learning at the University of Sydney and beyond. This site complements our 'AI in Education' site which is designed for students by students, and contains resources and advice." The content all appears to be open access (and licensed CC BY-NC) but you'll need a university ID to log in to any of the course services. Via Danny Liu.

Web: [Direct Link] [This Post][Share]


Rediscovering Education's Purpose in the AI Age
Carlo Iacono, 2025/03/27


Icon

"While we're busy worrying about AI detection," writes Carlo Iacono, "we're forgetting what education is actually for." That's fair enough, but I'm not sure there's exactly widespread agreement on this. Iacono writes, "Education at its best has always been about transformation." Things like "learning how to learn", "frameworks for ethical decision-making", "understanding oneself," and so on. But do the people looking for graduates to fill jobs really think this? Do students simply seeking a better life believe this?

Web: [Direct Link] [This Post][Share]


41 Startup Ideas Blueprinting the Future of Learning and Work
Sarah Morin, Ben Kornell, Alex Sarlin, Jen Lapaz, Edtech Insiders, 2025/03/27


Icon

OK, it's a nice list, neatly summarized in the image, but it's generally just 'AI this' and 'AI that'. And sure, some of these will be successful, and many will fail. Where the real innovation will happen in the future, though, is where the startups aren't based essentially on AI. That's partially because of practical economics - most of the money in AI will be made by big AI companies - and partially because the best opportunities are always 'blue ocean' opportunities - fields that are not crowded with competitors.

Web: [Direct Link] [This Post][Share]


CommonsDB
CommonsDB, 2025/03/27


Icon

Received via email: "The CommonsDB initiative is building a public registry for Public Domain and openly licensed works, and its website is now live. The site outlines how the registry will be developed over the next 18 months." This is all future work; the registry doesn't exist yet, but there is a blog. I'm a bit curious to learn how a registry for these works improves upon simply licensing them, and how works will be selected for inclusion (I can't imagine them taking my 43,493 openly licensed photos, for example).

Web: [Direct Link] [This Post][Share]


From My to Me
Olia Lialina, INTERFACECRITIQUE, 2025/03/27


Icon

This is a long reflection on how websites have evolved from their 'golden age' in 1996. The transition in the title refers to a change from when websites were about something - about Star Trek, about pets, about writing Javascript - to when the subject of a website is 'me' (as reflected in the rise to prominence of the 'About Me' link over the years). So the call to action is two-fold: first, leave the platforms that won't let you post links and turn your browsing experience into an algorithmic timeline; and second, "get away from Me, from the idea that you are the centre of your online presence," by linking to other people and talking about other things. Via Jim Groom.

Web: [Direct Link] [This Post][Share]


forall x: Calgary. A Free and Open Introduction to Formal Logic
Richard Zach, Tim Button, P. D. Magnus, Aaron Thomas-Bolduc, Open Logic Project, 2025/03/26


Icon

It's a bit of a weird title but this open textbook (438 page PDF) is a good introduction to formal logic (so far as I can determine by reading through the first few chapters and looking randomly at other sections). I'm passing it along because if there is a secret manual to academic success, this is it (and so, if you're working with some really promising young students, you should pass this on to them and whisper "this isn't for everyone"). More seriously, though there are some well-known limitations to formal logic, there is no discipline that does not rely on it, and facility with formal logic will make less stringent tasks - writing, for example - a lot easier to do well. Via Open Education Global, and hence, Alan Levine.

Web: [Direct Link] [This Post][Share]


Gabriele Scheler reflects on the interplay between language, thought and AI
Paul Middlebrooks, The Transmitter, Brain Inspired, 2025/03/26


Icon

This is a wide-ranging podcast interview (read the transcript) that fits really well with today's topics of discussion. Gabriele Scheler talks about the relation between language and thought and the shortcomings of AI in this regard. The flow is informal and there are insights scattered throughout. Like this, for example: "The idea was the neuron has its connections to other neurons, and there's calculations going on in a network. You could call it a horizontal network for calculating information. Then on the other hand, the neuron has, as I said, 15,000 or up to 70,000 different proteins. They interact with each other in an internal signal network. There is a lot of-- There is the metabolic network, of course, which overlaps with it." When AI uses complex neurons like humans do, I wonder what the result will be.

Web: [Direct Link] [This Post][Share]


Connected Belonging: A relational and identity-based approach to schools' role in promoting child wellbeing
Ceri Brown, Alison Douthwaite, Michael Donnelly, Marnee Shay, British Educational Research Journal, 2025/03/26


Icon

Interesting article that finds a thread woven through several different studies on student wellbeing and community that leads to this: "Allen et al. (2021) identify four components to the notion of belonging: competencies, opportunities, motivations and perceptions. In line with this framework, our Connected Belonging approach includes these four key aspects, including the applied (competencies), resources (opportunities), psychological (motivation) and experiential (perceptions) in emphasising that while belonging might be felt individualistically (i.e., through perceptions and motivations), it is nonetheless experienced relationally (i.e., through opportunities and the resources generated), which can provide tangible skills and benefits for the individual (e.g., competencies)." 

Web: [Direct Link] [This Post][Share]


Video catches microglia in the act of synaptic pruning
RJ Mackenzie, The Transmitter: Neuroscience News and Perspectives, 2025/03/26


Icon

This is really cool, but some background helps. There has been a longstanding discussion about how neural networks form: do we start with a densely connected brain and then prune connections that are not being used, or do we start with a loosely connected brain and grow new connections as we gain experience? There's a (very loose) alignment between cognitivists (such as Fodor and Chomsky) with the former, and connectionists (such as Hebb and Rosenblatt) with the latter. This article describes actual video recording the former: an immune cell called a microglia actually pruning a synapse between two neural cells. "A microglia cell expressing a green fluorescent protein clearly reaches out a ghostly green tentacle to a budding presynapse on a neuron and lifts it away, leaving the neighboring blue axon untouched." 

Web: [Direct Link] [This Post][Share]


‘How can I know what I think till I see what I say?’: How AI is changing education and writing
Nick Hillman, HEPI, 2025/03/26


Icon

This is a review of John Warner's More Than Words: How to Think about Writing in the Age of AI. I haven't read the book because, well, books cost money, and I'm still working on Mark Carney's Value(s) (which is really good, by the way). Anyhow, Warner's thesis is that "ChatGPT cannot write" because "writing is thinking". I get the concept (which probably required less than 300 pages to explain). "Writing, Warner says, is a process in which 'the idea may change based on our attempts to capture it.'" This happens with me less often with writing (except for these 100 word tidbits I write in OLDaily) and more often with speaking (which is why I consider my talks my primary academic output). What chatGPT doesn't do when writing, that I and other humans do, is to interpret experience. That, though, may change in the future when chatGPT has more than just human text as input.

Web: [Direct Link] [This Post][Share]


For this unsung philosopher, metaphors make life an adventure
Sue Curry Jansen, Jeff Pooley, Psyche, 2025/03/26


Icon

This is an overview of Susanne K Langer's contribution to our understanding of language. It's one with which I am largely in agreement. "Language is built up bit by bit, word by word, through slow, metaphoric accretion... Thus, to perceive, for our ancestors, was to conceive: to see one thing (the sensory onrush) in another (the perceptual shape given by the mind and primordial cultures). That is, our predecessors were already abstracting... By abstraction she means the process of recognising something in common, something shared, in more than one thing." 

Web: [Direct Link] [This Post][Share]


AI bots are destroying Open Access
Eric Hellman, Go To Hellman, 2025/03/25


Icon

"There's a war going on on the Internet," writes Eric Hellman. "AI companies with billions to burn are hard at work destroying the websites of libraries, archives, non-profit organizations, and scholarly publishers, anyone who is working to make quality information universally available on the internet." I personally have had my issues keeping my sites running while being hit by these AI bots. "The current generation of bots is mindless. They use as many connections as you have room for. If you add capacity, they just ramp up their requests. They use randomly generated user-agent strings. They come from large blocks of IP addresses. They get trapped in endless hallways." 

Web: [Direct Link] [This Post][Share]


Higher education grapples with AI
Bryan Alexander, 2025/03/25


Icon

Bryan Alexander offers examples of the debate around and adoption of artificial intelligence through a series of examples drawn mostly from the U.S. (there's one reference to a Chinese example). He cites one trend, "Large majorities of these leaders cite specific hindrances to GenAI adoption and integration at their schools. The challenges most often mentioned include faculty unfamiliarity with or resistance to GenAI, distrust of GenAI tools and their outputs, and concerns about diminished student learning outcomes." 

Web: [Direct Link] [This Post][Share]


EDI and the Measurement of Merit
Alex Usher, HESA, 2025/03/25


Icon

I'm often critical of Alex Usher but there's a lot to like in this article on equity, diversity and inclusion (EDI) in academia. What he explicitly recognizes here is that "our definition of merit, which disproportionately rewards people of certain backgrounds, is a deficient one." Where I would push back is twofold: first, he argues that EDI addresses this deficiency "by proxy" and that it's employed only because it's cheaper and easier than assessing each person individually. Not exactly. EDI addresses the "immutable factors" of race, gender, etc. because there are systemic barriers based precisely on such immutable factors. If the sign says "no girls" you're not responding "by proxy" by removing the sign; you're directly addressing a cause. Second, there seems to be a 'zero sum' assumption in the article that suggests some sort of competition (and hence, that measurement of merit and disadvantage is necessary). Maybe it's not 100% inescapable, but in a world of abundance (as Dave Cormier would say) we can certainly open up learning to far far more people than we do today. Image: McMaster's EDI Action Plan.

Web: [Direct Link] [This Post][Share]


22 Lessons from the GenAI Shadows
Carlo Iacono, Hybrid Horizons: Exploring Human-AI Collaboration, 2025/03/25


Icon

There are too many lessons here to summarize in a single post, but by and large I agree with them, as they parallel my own experience and what I've been seeing elsewhere. For example, "Students develop AI literacy through direct experimentation with its capabilities and limitations, not through theoretical instruction." And, "Measuring individual knowledge acquisition makes little sense in an era of AI-human cognitive partnership." The funny thing is that I think these 22 observations have always been true, even if our formal systems of education have not recognized them. Image: Zhai, et al. (a bit dated but worth a read in its own right).

Web: [Direct Link] [This Post][Share]


Counteract #AI Cynicism with Philosophy-Inspired CARES Model #EduSky
Miguel Guhlin, Another Think Coming, 2025/03/25


Icon

Miguel Guhlin finds himself "fascinated by Xunzi, whom I never read or heard about in school and only found thanks to AI analysis of a draft of my writing, and its relevance to our AI moment." Based on this, he comes up with "a 'Xunzian' approach to AI in education created with Perplexity." It's the CARES model (because educators can't resist a mnemonic):

"Xunzi... focused on humanity's part in creating the roles and practices of an orderly society, and gave a much smaller role to Heaven or Nature as a source of order or morality than most other thinkers of the time."

Web: [Direct Link] [This Post][Share]


Quickly prototyping a Career Discovery Tool
Doug Belshaw, Open Thinkering, 2025/03/25


Icon

I'm including this mostly for my own reference, as I live in hope that my official workload will ease up enough to allow me to continue working on CList and including stuff like this: "Recently, I 'vibe coded' a career discovery tool which asks you some questions, suggests some jobs, and tags them based on how likely to be automated in the future. It uses the Perplexity and Lightcast APIs. You can try the tool for yourself here."

Web: [Direct Link] [This Post][Share]


Observations and Suggestions about Boards of Governors
Alex Usher, HESA, 2025/03/25


Icon

I spent some time on university Boards of Governors (two years on the University of Alberta's, and one on Athabasca University's Governing Council). What I recognize in this article is the 'by the book' description of Canadian board members' duties and challenges, a description members would endorse, even while knowing it's not exactly accurate. For example, Alex Usher writes "Boards are expected to act as a conduit of information from the community to the university." Well, yeah, except that 'community' members do this by governing the institution (that's why they're always in the majority), and by 'community' we mean government, because that's who selects the members. And yes, Board members are volunteers, but they are typically from a (business) demographic that expects some return for their volunteerism, generally nothing overt like outright graft, but, you know, friendly relations between the business community and the government. Image: CAUT

Web: [Direct Link] [This Post][Share]


Making AI a More Effective Teacher: Lessons from TPACK
David Wiley, improving learning, 2025/03/24


Icon

The interesting part of this post isn't the TPACK model at all but the "TRaining AI to be a Teacher" (TRAIT) hypothesis introduced later in the post. It "is something like this: the effectiveness with which a generative AI model supports student learning will be proportional to the extent to which it has the skills and knowledge of an appropriately trained human teacher." Now it's a nice starting point, and we can see the relevance of Technological Pedagogical Content Knowledge (TPACK) to this. But really, the skill sets are very different for AI and human teachers. The AI is teaching one-to-one, the human is teaching a cohort. The AI is following the student's lead, the human is teaching a specific curriculum. The AI doesn't care about behaviour, the human manages behaviour and a host of other social factors over and above learning. But Wiley's main point holds: an AI teacher will require specific competencies we might not expect an AI to have out of the box.

Web: [Direct Link] [This Post][Share]


The New Analytics-Industrial Complex in Higher Education: Data, Governance, and Power
Chris R. Glass, Gerardo Blanco, International Higher Education, 2025/03/24


Icon

According to this article, "private data firms are quietly seizing control of global higher education, creating a power shift more profound than universities realize." I'm not sure how real these assertions are, but there's enough evidence out there to suggest that they might be. "The analytics-industrial complex has effectively achieved a form of regulatory capture over global higher education.... This represents a profound shift from public to private governance, where commercial entities increasingly determine educational standards and priorities without democratic accountability." I'd like to think that this isn't the case here in Canada, where universities are (mostly) public institutions, but in the U.S. and the U.K. these concerns may be more valid. I don't know.

Web: [Direct Link] [This Post][Share]


5 questions for Natali Helberger
Gabby Miller, Politico, 2025/03/24


Icon

This article is an interview with Natali Helberger, a law and digital technology professor at the University of Amsterdam, and I really like the "one underrated big idea" that leads the post: "AI is as much about society as it is about technology. AI systems are integrated into complex social, cultural, economic, political and institutional contexts that shape the data the systems are trained on, the prompts that drive them, the values tech is evaluated against, the way the technology changes workflows and how its decisions affect the life of people." That is all.

Web: [Direct Link] [This Post][Share]


2025 EDUCAUSE Horizon Action Plan: Supporting Agency, Trust, Transparency, and Involvement
Jenay Robert, EDUCAUSE Library, 2025/03/24


Icon

This report (11 page PDF) begins by describing "their preferred state of higher education 10 years from now" and then outlines actions to help educators prepare for it. It's hard to imagine such a world. Every trend in the U.S. is pointing in the opposite direction right now. That may seem harsh, but consider what they expect: increased enrollment, increased technology budgets, distributed and collaborative data governance, ethical and responsible AI use, federally supported identity system architecture, a national standard for cybersecurity and privacy, and more.

Web: [Direct Link] [This Post][Share]


Product Market Fit Collapse: Why Your Company Could Be Next
Brian Balfour, et al., Reforge, 2025/03/24


Icon

Product Market Fit (PMF) is the idea that a given product serves an identifiable (and profitable) market. This market is defined by a window of consumer expectations and needs, and it varies over time as these increase and as external events - such as the introduction of new technology - change them. PMF collapse happens when this change is sudden and complete; the market moves in such a way that PMF is lost and unrecoverable. Arguably, AI has triggered PMF collapse in a number of cases: Chegg, for example, is seeing the value of its homework tutor service reduced to zero as AI takes over; Stack Overflow is in decline for similar reasons (and also, Google basically dropping them in favour of Reddit didn't help). What will change in the future? This article hazards some guesses (auto-creation, custom products, task automation) but I don't think it will be so clear-cut. Anyhow, good article. Via Mark Oehlert.

Web: [Direct Link] [This Post][Share]


Stop Trying to Make AGi Happen, It’s Not Going to Happen!
Matt Crosslin, EduGeek Journal, 2025/03/21


Icon

Matt Crosslin responds to George Siemens. He first criticizes one of the core points: "a system made up of many individuals has more intellectual capacity than individuals? Well… duh. Hasn't it been that way since the beginning of human intellect? Adding more people gives you more capacity." I don't think this captures the nuance between simply stacking capacity (ie, adding volume) as opposed to combining capacity (ie, adding complexity). And when Siemens writes, "Personal knowledge is comprised of a network, which feeds into organizations and institutions, which in turn feed back into the network, and then continue to provide learning to individual. This cycle of knowledge development (personal to network to organization) allows learners to remain current in their field through the connections they have formed," Crosslin responds, "None of this is artificial." But I don't think we've seen a naturally occurring organization or institution through all of history. Still, Crosslin's main point is in the title: "I am pretty skeptical that AGi will ever happen – but people keep trying to see it where it doesn't exist." I'd defer to his deep knowledge here, but given these basic errors, I think it might just be an opinion based on an intuition, and nothing more.

Web: [Direct Link] [This Post][Share]


I've seen amazing mixed reality experiences, but a new update will unlock the next generation
Nicholas Sutrich, Android Central, 2025/03/21


Icon

The gist: "this is one of the most convincing uses of mixed reality to date. Most people don't have access to an empty airplane cockpit in their garage, but being able to turn your favorite chair into a convincing airplane cockpit - complete with its texture accurately positioned inside the virtual cockpit - would make an incredibly convincing game of Flight Simulator." Maybe. But it's still a device that will only be used occasionally, and never in combination with other things (like, you won't be watching a movie while playing mixed reality games, though I do that a lot with regular video games).

Web: [Direct Link] [This Post][Share]


Equation Editor for online mathematics - create, integrate and download
CodeCogs, Zyba Ltd, 2025/03/21


Icon

This is pretty interesting: a "LaTeX equation editor that creates lightweight graphical equations (gif, png, swf, pdf, emf) and produces code to quickly and directly embedding equations into HTML websites, forums or blogs." Via Alan Levine.
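The embed code the editor produces is essentially just an image URL with the URL-encoded LaTeX in the query string. As a sketch - the 'svg.image' endpoint here follows the pattern CodeCogs publishes, but check the editor's own embed output to confirm - you could generate such links programmatically:

```python
from urllib.parse import quote

def codecogs_img_url(latex: str, fmt: str = "svg") -> str:
    """Build an embeddable CodeCogs equation-image URL from a LaTeX string."""
    return f"https://latex.codecogs.com/{fmt}.image?{quote(latex)}"

# The rendered equation can then be dropped into a page as <img src="...">
url = codecogs_img_url(r"\frac{d}{dx} e^{x} = e^{x}")
print(url)
```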

Web: [Direct Link] [This Post][Share]


What do we mean when we talk about ‘openness’ in (generative) AI?
Doug Belshaw, Open Thinkering, 2025/03/21


Icon

The main value of this post is its introduction of the Open Weight Definition, which is more of a technical definition of openness than a cultural one (to paraphrase Belshaw's own phrasing). I think the article could use a third section talking about open data - is it important? (I get the sense Belshaw thinks not) And what role does it play in open AI?

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.