2025 may be the year AI bots take over Meta
Iain Thomson,
The Register,
2025/01/09
Although the big news this week has been Meta's decision to kill fact-checking, the more significant news may be the revival of a plan "to roll out interactive AI agents created by users that other folks can interact with." Or maybe other bots will interact with them; it doesn't really matter. "It's going to be interesting to see how convinced the financial community is on this – whether having people interact with bots counts as real engagement likely to prop up advertising spend." I think AI is great, but I think it's still a long way from having the sort of 'presence' that would make me care what it learns or feels or whatnot. Image: Facebook.
Web: [Direct Link] [This Post][Share]
Best AI research assistant
AnswerThis,
2025/01/09
There's a pretty bad typo on the main page (bonus points if you spot it) and that's why I would consider this to be of questionable utility. But for others, the main objection seems to be 'cognitive offloading' (which I guess is our new anti-AI buzzword of the week). The more pressing question is why we even have literature reviews in academic publications anyway - after all, we live in an era where we can (in theory) just link to the relevant research. If only the publishers would allow us. Anyhow, my all-round cynicism here is prompted by Louis-François Bouchard, who writes on LinkedIn, "I would've loved that in the PhD" (the post was 'suggested' by LinkedIn, which tells me it wasn't getting traction the proper way; Bouchard, like me, is an "ex-PhD Student" but not an actual PhD). I'm to the point where I don't really believe anyone on AI these days - not the people promoting it, not the people writing about it, nor the people criticizing it.
Web: [Direct Link] [This Post][Share]
Universities and Colleges in Crisis: 5. Reducing the costs of courses and programs
Tony Bates,
2025/01/09
It has taken Tony Bates four previous posts to get to this, the meat of what he has to say about the institutional response to the coming cost and enrollment crisis. This isn't the space for a detailed critique, and in any case, it looks like there will be more posts. But in a nutshell my response is essentially: to agree with his solutions you have to agree with his assessment of what a university (and a university professor) does. And I don't think I agree. For example, he proposes "a new classification of faculty, full-time teachers who have no research responsibilities" (this somehow to be done "within existing collective agreements", as specified in part 4). Similarly, "the integrated design of whole programs" might be less ideal as an objective than may be supposed. Learning "how to find, assess, and appropriately use knowledge, and in particular, how to develop core skills such as critical thinking, independent learning, knowledge management, and problem-solving" is much less a matter of being taught these things than of experiencing them.
Web: [Direct Link] [This Post][Share]
Google Home hubs can now work locally thanks to Matter
Jennifer Pattison Tuohy,
The Verge,
2025/01/09
This is actually a more important story than it seems. 'Matter' is the name of an Internet of Things (IoT) interoperability standard that has been in roll-out for years. It's important because it allows devices in your home to communicate directly, without going through a cloud interface. That's why this announcement is significant: "Google updated Google Nest hubs and speakers, Chromecasts, Google TV devices on Android 14, and some LG TVs so they can connect to and control Matter devices locally." And now, "if the internet is down and you ask Google Assistant to turn on the lights, it should actually be able to follow through." I mean, it's the least it could do, right?
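To make the local-versus-cloud distinction concrete, here's a minimal sketch in Python. This is not the actual Matter protocol - the host names, port, and message format are invented for illustration - it just shows why a command that travels over the LAN keeps working when a command that has to reach a vendor's server does not.

```python
# Hypothetical sketch of cloud vs. local device control; the hosts,
# port, and wire format are invented for illustration and are NOT
# the Matter protocol.
import socket

def turn_on_light_cloud(device_id: str) -> None:
    # Cloud path: the command goes to a vendor server first.
    # If the internet connection is down, this fails outright.
    with socket.create_connection(("cloud.example.com", 443), timeout=5) as s:
        s.sendall(f"ON {device_id}\n".encode())

def turn_on_light_local(device_ip: str) -> None:
    # Local path: the hub talks to the device directly over the LAN,
    # so no internet connectivity is required.
    with socket.create_connection((device_ip, 8888), timeout=5) as s:
        s.sendall(b"ON\n")

# A Matter-capable hub prefers the local path and never needs the cloud:
# turn_on_light_local("192.168.1.42")  # requires a real device on your LAN
```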
Web: [Direct Link] [This Post][Share]
The Teaching of Writing in the AI Era
Justin Weinberg,
Daily Nous,
2025/01/09
This article references a Times Literary Supplement article (you can read only the first five paragraphs before it hits you with a paywall) saying "Every professor I know wasted countless hours of 2024 in the prevention or detection of AI-powered cheating. It is a miserable war of attrition that seems doomed to defeat. Perhaps the time has come, then, to declare a strategic withdrawal from writing as pedagogy?" There is probably a point there - I mean, we used writing for millennia mostly because we had no alternative; it's not like there are medieval audio recordings or Roman empire videos available. On the other hand, there's the whole concept of thinking that seems to require reading and writing, that teachers try to discern in their students - "trying to get inside a student's head and guess why they used that inappropriate word or missed this obvious argumentative strategy" - but it raises the question: how much of cognition requires the form and structure of language, and how much of it is done that way just because we had no alternative?
Web: [Direct Link] [This Post][Share]
Conversing with chatbots: what influences trust?
IDW,
2025/01/09
According to the study referenced here, "integrity is a more important factor for trust than benevolence. For this reason, it is important to develop the technology to prioritize integrity above all else." In other words, it doesn't matter if you have good intentions; if you lie to me I'm not going to trust you. Also, "If an AI chatbot is too uncritical and simply agrees with everything a user says, it fails to provide reality checks and runs the risk of creating an echo chamber that, in the worst case, can isolate people from their social environment." The original study is unfortunately paywalled, which, I might add, undermines its integrity. See also: Machine Prejudice.
Web: [Direct Link] [This Post][Share]
From AI Tutor to AI Student
Philippa Hardman,
Dr Phil's Newsletter, Powered by DOMS AI,
2025/01/09
One lesson you learn reading John Stuart Mill's autobiography is how effective it can be when you learn by teaching others. As a sometime teacher, I can personally attest to its value. So it makes sense that using ChatGPT as a student, rather than as a teacher, might help people learn that much better. That's the proposal Philippa Hardman discusses in this article. I have just one issue with it: why would I, as a student, want to teach a computer? What's my motivation? If I'm teaching my little brother, as Mill did, I can see why I would need to do it. But ChatGPT doesn't need me to teach it, and wouldn't care whether I did or didn't.
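For what it's worth, the mechanics are simple enough. Here's a minimal sketch of a 'teach the bot' setup using the OpenAI chat API; the system prompt, model name, and lesson topic are my own illustrative assumptions, not Hardman's implementation.

```python
# A minimal sketch of an 'AI student': the model is instructed to play
# a novice that the human learner must teach. The prompt and model name
# are illustrative assumptions, not Hardman's design.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "system", "content": (
        "You are a curious novice student and the user is your teacher. "
        "Ask naive clarifying questions, make the occasional plausible "
        "mistake, and only show understanding once the explanation is clear."
    )},
    {"role": "user", "content": "Today I'm going to teach you how photosynthesis works."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```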
Web: [Direct Link] [This Post][Share]
Why World Foundation Models Will Be Key to Advancing Physical AI
Noah Kravitz,
NVIDIA Blog,
2025/01/09
A 'world foundation model' (WFM), as used in this article, is a "model that can accurately simulate and predict outcomes in physical, real-world environments to enable the next generation of physical AI systems." Think of a WFM as similar to a large language model (LLM), except that instead of modeling patterns in language, it models patterns in real-world environments. Unlike in language, causation matters in real-world environments, and so WFMs are expected to be able to predict real-world events much more accurately. "They can imagine many different environments and can simulate the future, so we can make good decisions based on this simulation." It's a pretty lightweight article for a pretty important concept, but I guess we have to start somewhere.
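The core loop is easy to sketch, even if the real models are anything but. Here's a toy Python version of 'simulating the future to make good decisions': a stand-in dynamics function plays the role of the learned world model, and two candidate plans are scored entirely in imagination before anything acts in the real world. The dynamics and the objective here are invented for illustration, not taken from the article.

```python
# Toy sketch of the world-model decision loop: predict the next state
# from (state, action), roll the model forward, and score imagined
# futures. The dynamics and objective are stand-ins, not a real WFM.
import numpy as np

def predict_next_state(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    # Stand-in dynamics; a real WFM would be a large learned model.
    return state + 0.1 * action

def rollout_return(state: np.ndarray, actions: list) -> float:
    # Simulate the future under the model and score the trajectory
    # (toy objective: end up as close to the origin as possible).
    for action in actions:
        state = predict_next_state(state, action)
    return -float(np.linalg.norm(state))

state = np.array([1.0, -2.0])
plan_a = [np.array([0.0, 1.0])] * 5
plan_b = [np.array([-1.0, 2.0])] * 5

# Choose between candidate plans purely 'in imagination':
best = max([plan_a, plan_b], key=lambda p: rollout_return(state, p))
print("chosen plan:", "A" if best is plan_a else "B")
```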
Web: [Direct Link] [This Post][Share]
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2025 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.