
OLWeekly

Where Open Education Meets Generative AI: OELMs
David Wiley, improving learning, 2024/12/13


David Wiley has advanced his thinking on open education and artificial intelligence beyond 'open prompts' to the broader concept of Open Educational Language Models (OELMs), which "bring together a collection of openly licensed components that allow an openly licensed language model to be used easily and effectively in support of teaching and learning." He doesn't go deeply enough, however. He includes in his proposed stack 'open weights', which are the so-called 'open LLMs' offered by Meta, Mistral and IBM, followed by 'fine-tuning with OER' (whatever that might mean exactly) and retrieval augmented generation (RAG) with OER. But there are (at least) two more crucial layers, and so I've modified his diagram: open data, which is the content used to train the AI (a recent example is the ); and open algorithms, the part nobody (not even OSI) includes as part of 'open', which is the design of the core AI architecture. And because we must must must have a new acronym, I call it Open Educational Artificial Intelligence (OEAI), pronounced 'aye'.
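
To make the RAG-with-OER layer concrete, here is a minimal sketch in Python (my own toy illustration, not Wiley's; the passages and the word-overlap scoring are invented for the example, and a real system would use embeddings and an actual openly licensed model):

    # Toy sketch of retrieval augmented generation (RAG) over OER content.
    # Hypothetical example: retrieval here is simple word overlap, standing
    # in for the embedding search a real OELM stack would use.
    oer_passages = [
        "Photosynthesis converts light energy into chemical energy in plants.",
        "The water cycle moves water through evaporation and precipitation.",
        "Newton's first law says objects in motion stay in motion.",
    ]

    def retrieve(question, passages, k=1):
        """Return the k passages sharing the most words with the question."""
        q_words = set(question.lower().split())
        ranked = sorted(passages,
                        key=lambda p: len(q_words & set(p.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(question, passages):
        """Assemble the augmented prompt the language model would receive."""
        context = "\n".join(retrieve(question, passages))
        return f"Answer using this OER content:\n{context}\n\nQuestion: {question}"

    print(build_prompt("How does photosynthesis work?", oer_passages))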

Web: [Direct Link] [This Post][Share]


Surveillance and Datafication in Higher Education: Documentation of the Human
Leslie Gourlay, Postdigital Science and Education, 2024/12/13


Interesting commentary comparing two 'datafying technologies' in higher education and academia: learning analytics and the h-index metric. The author argues that "despite important differences, both of these technologies share features discussed in the literature." Specifically, they create what might be called an 'imaginarium' of the subject - a picture of a characteristic successful model of the person in question, a model that can be seen as discriminating against identifiable groups. The h-index, for example, "favours a large number of papers which have garnered roughly equal numbers of citations, arising relatively quickly after a paper has been published," and posits "an implicitly privileged scholar working full-time in a position of high professional security and prestige, shielded from the pressures and blocks to success listed above," a caricature that the DALL-E image captures perfectly. Via Linda Castañeda.

Web: [Direct Link] [This Post][Share]


I analyzed 15 years of testimonials from users of 750words.com to learn how their private journaling habits have helped them
Buster Benson, Medium, 2024/12/13


You can stop reading about a third of the way through. This is where the testimonials start, and it's nothing but testimonials through to the end. But the first part of the article is a fun story about using ChatGPT to sort through a large spreadsheet extracting the right testimonials for the author's purpose. "It was only after several hours of work that I realized ChatGPT was coming up with its own testimonials that matched what I was looking for." ChatGPT's short-term memory is not good (and its long-term memory is almost non-existent). It's easy to forget that when working with it. Via Laura Hilliger.

Web: [Direct Link] [This Post][Share]


Summary execution
Martin Weller, The Ed Techie, 2024/12/12


"Beyond the whole 'summary fetishisation' there is a more worrying trend," writes Martin Weller. "That is when you can only get access to the AI summary. Tom Scocca reports on how The Washington Post has (or is experimenting with) removing its archives, and instead giving access to AI trained on its archive. So much more efficient, right?" I don't think I can access the Washington Post archive anyways - I just don't have enough money for that - so this particular example doesn't worry me. Nor does losing access to carefully rewritten press releases published in much of the commercial press. But I do want access to original research, not for things like Weller's stellar wit, but for access to the actual facts and data (assuming they exist) being used to draw inferences.

Web: [Direct Link] [This Post][Share]


Virtual Reality in Elementary Education: A Scientometric Review
Ika Maryani, Amir Karimi, Kourosh Fathi, Journal of Learning for Development, 2024/12/12


I don't have a lot to say about this article, which reviews publications on the use of virtual reality in education since 2012. There's one interesting trend: "developed countries contribute a large number of publications and citations, mainly research and development themes... Developing countries contribute publications in the context of technology applications." I suppose that makes sense. I guess the main thing, though, is that even though everyone is talking about something else, interest in VR remains high.

Web: [Direct Link] [This Post][Share]


Roll Your Own Mastodon Starter Kits
Alan Levine, CogDogBlog, 2024/12/12


I've tested this myself and can confirm that the 'starter pack' method for Mastodon described here does indeed work. If you're on Mastodon and saw me follow you a few days ago, that was as a result of Alan Levine's EDTech list. I've seen a post from Mastodon saying lists will get a lot easier to manage shortly, so this is a way to get a bit of a head start on that.

Web: [Direct Link] [This Post][Share]


In Praise of Print: Why Reading Remains Essential in an Era of Epistemological Collapse
Ed Simon, Literary Hub, 2024/12/12


Some people may appreciate this defense of traditional reading in print books, especially science fiction. As someone who read thousands of science fiction books before the digital age arrived in force, I can attest to their value - but can I say that without reading print novels I wouldn't have arrived at 'interiority'? "I read novels to indulge in a concentrated and directed inner activity that parallels - and thereby tunes up, accentuates - my own inner life," writes Ed Simon. Novels do work - but, I think, so do the rich content experiences I have in a digital world. And I am very wary of denying that others have the sort of inner life I do, as it seems to me to be a way of dehumanizing them.

Web: [Direct Link] [This Post][Share]


Uniplay Unveils the World’s First Game-Based, AI-Powered LMS
CHECK.point eLearning, 2024/12/12


According to this article, "Uniplay's AI engine analyzes existing training content and matches it with game templates like quizzes, challenges, or timed scenarios. In the backend, AI then customizes each game for users based on data, learning goals, and past interactions." It doesn't really sound like any of the games I play; I think there should be a requirement that game-based LMSs be designed by actual game designers, or at least, gamers. Here's the original PR.

Web: [Direct Link] [This Post][Share]


I just saw the future of the web - Google's new Deep Research tool unleashes swarms of AI agents to do in-depth research for you
Ryan Morrison, Tom's Guide, 2024/12/12


Here's the gist: "Google Gemini is getting an incredible new feature that will allow it to create mini clones of itself and send them off around the web to find information for you based on a prompt. It can then come back and create a complex, detailed report with links to the information it found." We're just a hop, skip and a jump from having it draft the literature review, then design an experiment for 45 Midwest students, recruit them, run the survey, and send the results for publication. See also: The Verge.

Web: [Direct Link] [This Post][Share]


Inside Netflix’s Distributed Counter: Scalable, Accurate, and Real-Time Counting at Global Scale
Eran Stiller, InfoQ, 2024/12/12


This is a really interesting engineering challenge: how do you count when the people doing the counting are scattered around the world? For Netflix, it's a practical problem: each time someone views a Netflix video, Netflix wants to increment the 'views' counter by one. But how do you do that without the many flaws that might make the actual count inaccurate? This article describes Netflix's recently published "deep dive into their Distributed Counter Abstraction." Idempotency - the idea that making the same request multiple times has the same effect as making it once - plays a key role. It allows remote sites to retry failed requests, for example, without double counting. This may seem to some like a pretty trivial problem, but as we enter the era of distributed computing, answering questions like this will be crucial.
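
To illustrate the idempotency idea, here's a minimal sketch in Python (my own toy illustration, not Netflix's implementation; the class and key names are invented):

    import uuid

    class IdempotentCounter:
        """Toy counter where retrying the same request never double-counts."""

        def __init__(self):
            self.count = 0
            self.seen = set()  # idempotency keys already applied

        def increment(self, idempotency_key):
            if idempotency_key in self.seen:
                return self.count  # a retry of a request we already applied
            self.seen.add(idempotency_key)
            self.count += 1
            return self.count

    counter = IdempotentCounter()
    key = str(uuid.uuid4())   # the client generates one key per logical event
    counter.increment(key)    # first attempt: count becomes 1
    counter.increment(key)    # network retry: still 1, no double count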

Web: [Direct Link] [This Post][Share]


The Google Willow thing
Scott Aaronson, Shtetl-Optimized, 2024/12/11


The news here is Google's fault-tolerance milestone in its Willow quantum computing chip; the (paywalled) Nature publication puts an official stamp on a preprint that has been on arXiv since August. "Scientifically," writes Scott Aaronson, "the headline result is that, as they increase the size of their surface code, from 3×3 to 5×5 to 7×7, Google finds that their encoded logical qubit stays alive for longer rather than shorter." But the most interesting bit to me is this: "it would also take ~10^25 years for a classical computer to directly verify the quantum computer's results." Hence, "all validation of Google's new supremacy experiment is indirect."
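
For context, the standard below-threshold picture for surface codes (general theory, not a figure from the paper) is that each step up in code distance d suppresses the logical error rate by a roughly constant factor:

    \varepsilon_{d+2} \approx \frac{\varepsilon_d}{\Lambda}, \qquad \Lambda > 1 \text{ when physical error rates are below threshold}

so going 3×3 to 5×5 to 7×7 should make the logical qubit live longer at each step - which is exactly what Google reports.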

Web: [Direct Link] [This Post][Share]


The Meaning of 2025 | HESA
Alex Usher, HESA, 2024/12/11


There are many places where I disagree with Alex Usher, but I think we're on the same page on this one. First, "we've spent 80 years building a system of higher education that is simply more expensive to run than the public is willing to support." Second, "Think about the consequences of reducing those cross-subsidies within universities at the exact moment when advances in technology are opening up huge potential advances in energy, materials science, and health." The cost of not supporting the system is huge. Usher argues that government is not coming to save the system. Probably true. But I counter with the obvious: industry isn't going to save the system either. And so we agree that the higher education sector "is going to have to work out solutions on its own." I've spent a lifetime working on digital technologies for learning to try to help make that happen. But like a light bulb, the sector is going to have to want to change.

Web: [Direct Link] [This Post][Share]


These technologies are complex…. – Taccle AI
Graham Attwell, Taccle AI, 2024/12/11


I'm sort of over the whole 'education versus AI' debate that Graham Attwell describes here. There are only so many times people like Ben Williamson can make the same point, and only so much speed with which AI companies can roll out new models to counter the sceptics. I'm especially tired of the debate being framed as 'education versus giant corporate capitalism', partially because education has been part of giant corporate capitalism for as long as I can remember, and partially because AI was developed, in the first place, in educational institutions. None of the us-versus-them debates can be properly applied to either AI or educational institutions. And that's why I'm over it.

Web: [Direct Link] [This Post][Share]


Century-Scale Storage
Maxwell Neely-Cohen, Harvard Law School, 2024/12/11


This is a nice (though long) article by Maxwell Neely-Cohen asking the basic question, "If you had to store something for 100 years, how would you do it?" He runs through all the likely answers, including dispersal and decentralized storage, before reaching the inevitable conclusion that "the success of century-scale storage comes down to the same thing that storage and preservation of any duration does: maintenance." Neely-Cohen also warns that we might be entering a 'dark age' where most of what we produce is lost to the future. "On the internet, Alexandria burns daily." Via Molly White, who gives us a long thread of relevant quotes from the article.

Web: [Direct Link] [This Post][Share]


Striking a Balance: Navigating the Ethical Dilemmas of AI in Higher Education
Katalin Wargo, Brier Anderson, EDUCAUSE Review, 2024/12/11


According to this article, "Responsible AI integration in higher education requires striking a balance between riding the wave of AI advancements and upholding ethical principles." I don't think the idea of a 'balance' is at all the right way to think of this. Ethics and AI aren't some sort of opposites you have to 'balance'. And 'balance' itself is not a viable ethical principle; I can think of many things I would not like to 'balance' with something else. This higher-level criticism also applies to many of the individual points. For example, the very first suggestion is to "create generative AI training materials to support faculty, staff, and students aimed toward combatting the digital divide." Clearly, an effort to 'balance' is being made here. But the balance here makes no sense; how exactly are these 'generative AI training materials' supposed to 'combat the digital divide'?

Web: [Direct Link] [This Post][Share]


What Happens When SMEs are Building All the Courses?
Mark Oehlert, 2024/12/11


OK, let's leave aside that the prediction (that everyone will be a full stack engineer in 2025) is absurd. Real developers aren't using Vercel or Replit, and there are still serious limitations to what Claude or ChatGPT will code. What about Mark Oehlert's prediction in this short article? "What if the systems get smart enough so that we don't need anyone creating compliance training content anymore? I mean we will but it'll just be Brad in HR bringing up his AI assistant to create the new course on ethics, or sexual harassment, or information awareness." Take it one step further - why do we need Brad in HR to be doing anything? Why can't (potential) students just create their own courses? This will happen. Not by 2025. But soon.

Web: [Direct Link] [This Post][Share]


Beyond Pattern Matching: The Complex Journey Toward World Models, Critical AI Literacy, and Ethical Innovation
Carlo Iacono, Hybrid Horizons, 2024/12/11


This article takes some very different articles and weaves them into one coherent narrative. The fabric stretches a bit thin at points but the article overall succeeds. One thread is Yann LeCun's argument that after language models we will need 'world models' for reliable AI. "These systems would need to learn through direct interaction with the world, much like humans do." Quite right. Another is Tiera Tanksley's proposal for a critical race algorithmic literacy (CRAL); these lead Carlo Iacono to propose a framework for more ethical AI development that would combine the two. The article also mentions Steve Solomon on environmental concerns and Hartmut Neven on Google's 'Willow' quantum computing chip, all (I think) from the 2024 Hudson Forum. Image: Insta.

Web: [Direct Link] [This Post][Share]


Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance
Yizhou Fan, et al., British Journal of Educational Technology, 2024/12/10


This is what Wikipedia would call a stub - it introduces an article with enough content to make it register on search engines, but with the full text hidden away behind a paywall. It's also a very lazy article - "a randomised experimental study in the lab setting" - suggesting "ChatGPT can significantly improve short-term task performance, but it may not boost intrinsic motivation and knowledge gain and transfer." Sure, students do better - but what about these vague and unmeasurable concerns like 'motivation' and 'metacognitive laziness'? I'd hide the paper behind a paywall too, lest anyone actually see it. I passed over an equally lazy article earlier today warning that AI agents may result in 'cognitive laziness'. It was about as sound as this formal academic publication.

Web: [Direct Link] [This Post][Share]


I can now run a GPT-4 class model on my laptop
Simon Willison, 2024/12/10


As a rule, I don't run large language models on my laptop, even though I could. I also don't run web servers on my laptop, for the same reason. Not that either wouldn't be a nice feature to have, but we're not there yet. Still, as Simon Willison reports that "Meta's new Llama 3.3 70B is a genuinely GPT-4 class Large Language Model that runs on my laptop," we can certainly see the potential. And, he adds, as these powerful models get smaller and smaller, the worry that there may be an 'AI plateau' seems to be misplaced.
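
For readers who do want to try it, here's a minimal sketch in Python using the llama-cpp-python package (my illustration, not the setup Willison describes; the model file name below is a placeholder for whatever quantized GGUF file you download):

    # Minimal local-inference sketch with llama-cpp-python
    # (pip install llama-cpp-python). The model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./llama-3.3-70b-instruct.Q4_K_M.gguf", n_ctx=4096)
    output = llm("Q: Why might small models end talk of an AI plateau? A:",
                 max_tokens=128)
    print(output["choices"][0]["text"])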

Web: [Direct Link] [This Post][Share]


nnnnnnnn.co
brian crabtree, izzzzi, 2024/12/09


I'm not sure whether this post will be here tomorrow, so I'll quote it at some length: "izzzzi is an experiment which might be called 'slow social media' where we are exploring a multitude of constraints imposed on the standard mechanism of people making posts:

it's become a sort of collaborative daily newspaper written by friends. on the surface the parameters feel antithetical: it's ephemeral, it only changes daily, hardly anyone sees it. this is precisely what makes it interesting." It's one of those things that may flare up for a bit and then disappear. Like its posts. Via Ed Summers.

Web: [Direct Link] [This Post][Share]


Dimensions of AI Literacies
Angela Gunder, Opened Culture, 2024/12/09


This is a presentation of multiple AI literacies, influenced by Doug Belshaw's work, describing them as cultural, cognitive, constructive, communicative, confident, creative, critical, and civic AI literacies. Each of these describes a specific aspect of a person's skills or competencies; for example, 'constructive AI literacies' describe "understanding what it means to construct, build, or make something within AI-enabled environments". Similarly, 'cognitive AI literacies' involve "developing the skills necessary to navigate various AI environments to build knowledge and understanding effectively". It's a useful way to look at AI literacies and makes me rethink again my approach to literacies generally.

Web: [Direct Link] [This Post][Share]


Providing performance information of peers as a management tool?
Lisa Wolf, IDW, 2024/12/09


It's not formative evaluation and it's not summative evaluation. We'll call it competitive evaluation. "More and more companies are gathering performance data on their employees and sharing that data among their staff. Their hope: that those employees who receive feedback that their own performance is lower than their colleagues' will be more motivated to change the way they work and are more motivated to explore." Here's the complete publication (20 page PDF). The authors found that the effect varies based on the data; learning about high-performing peers will encourage people to explore better ways to do their work, but learning about low-performers won't. Also, if the person receiving the data already has a high tendency to self-enhance, learning about high-performers won't help them at all; "Presumably, those with a high tendency to self-enhance did not increase their propensity to explore because they saw the information about their high-performing peer as a threat to their self-image and therefore, found ways to ignore or discount it."

Web: [Direct Link] [This Post][Share]


The Collapse of Self-Worth in the Digital Age
Thea Lim, The Walrus, 2024/12/06


This article laments the 'ludic loops' that define us in the algorithmic age; that is, the design of the internet stops us from ever switching it off: "It moves at the speed of light, with constantly changing metrics, fuelled by ... repeated cycles of uncertainty, anticipation and feedback" - in other words, it works exactly like a Jackpot 6000 slot machine. But instead of tokens or dollars, what we're playing for is "your sense of self". But external validation is as old as the concept of 'self-worth' itself. Michelangelo didn't paint for free. Da Vinci had to earn a living. Any time we depend on such externalized criteria, whether it's an author waiting for the reviews or the big prize shortlist, or the Reddit reply guy playing for 'likes', we run this risk. The cure, though, is to get over ourselves. Am I another Descartes? As popular as Wittgenstein? Why would it matter? If we want, we can play the game, but the only way to win is to stop playing.

Web: [Direct Link] [This Post][Share]


A New Infrastructure for Learning Credentials
Mike Flanagan, Getting Smart, 2024/12/06


This item is the latest in a series of posts that didn't really catch my attention until now. It's based on the idea of "Horizon Three" or "H3", the name for a "future-ready" educational system that arrives as a sort of third wave after 'traditional' (H1) and 'efficiency' (H2) learning. The concept is described more fully in this Google Drive PDF document (24 pages). What is it, exactly? It's a lot of stuff we've seen elsewhere, a mix of good and bad, including learner-driven school models, business and community partnerships, national competition for models, publicly funded supplemental and enrichment programs, mastery-based and embedded assessments, student ownership of credentials data, and more (see specifically pages 18 and 19 of the document). This approach is framed in terms of defining philanthropic support for education initiatives (which is why it flew beneath my radar; I usually just think of these as policy advocacy initiatives to find ways to support privatized education systems). Anyhow, worth knowing about.

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2024 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.