OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Language is a Tool for Communication, Not for Thinking
Irving Wladawsky-Berger, 2024/07/18


The main point of this post is to introduce the Elemental Cognition (EC) AI platform, "whose architecture follows human biology by separating its natural language components from its reasoning, problem solving engine." In this, argues the author, the architecture functions analogously to the human brain, treating language as a communications tool only, while actual cognitive functions are handled in sub-symbolic neural mechanisms (characterized here as "multiple precise logical and mathematical methods"). I think this is a good way to treat large language models (LLMs) like ChatGPT generally - as communications interfaces, not reasoning devices.
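
To make the division of labour concrete, here is a minimal sketch of the pattern as I understand it - not Elemental Cognition's actual code, and the parse step is a stub standing in for a real LLM call:

    import re

    def parse_to_structure(question: str) -> dict:
        # In a real system an LLM would translate free text into a formal
        # query; a regex stands in for that translation step here.
        m = re.match(r"What is (\d+) plus (\d+)\?", question)
        if not m:
            raise ValueError("could not parse question")
        return {"op": "add", "args": [int(m.group(1)), int(m.group(2))]}

    def reasoning_engine(query: dict) -> int:
        # The solver is exact and deterministic: no statistical guessing here.
        if query["op"] == "add":
            return sum(query["args"])
        raise ValueError("unsupported operation")

    def render_answer(result: int) -> str:
        # The language layer only phrases the result; it never computes it.
        return f"The answer is {result}."

    print(render_answer(reasoning_engine(parse_to_structure("What is 2 plus 3?"))))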


Will AI Ever Have Common Sense?
Steven Strogatz, Quanta Magazine, 2024/07/18


So this is a pretty interesting interview. The answer to the question in the title, I hope, is 'No', but not for the reason you may think. Here's my reasoning. As Yejin Choi says, "it's reasonable to suspect that humans don't necessarily try to predict which word comes next, but we rather try to focus on making sense of the world. So we tend to abstract away immediately." Now that's not quite true - AI also 'abstracts', but it can use far more data points than a human, so its abstractions just look like statistical generalizations to us, not common sense generalizations. But like a human, it can also generalize too quickly and inappropriately - that's why, for example, it will respond incorrectly (like many humans) to "If I left five clothes to dry out in the sun, and it took them five hours to dry completely, how long would it take to dry 30 clothes?" (It's not '30 hours', of course, it's 'five'.) Human common sense (for example, folk psychology) has more in common with the answer '30 hours' than 'five'. And the thing is: an AI can be trained to avoid such errors. But humans, bless them, will keep on making them. And that is just common sense.
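
The two kinds of generalization are easy to see side by side. A toy sketch (the numbers come from the puzzle; everything else is illustration):

    def naive_drying_time(items, observed_items=5, observed_hours=5.0):
        # Linear extrapolation from one data point: hours per item, times items.
        return observed_hours / observed_items * items

    def parallel_drying_time(items, observed_hours=5.0):
        # Clothes dry simultaneously, so the time is constant in the item count.
        return observed_hours

    print(naive_drying_time(30))     # 30.0 - the common wrong answer
    print(parallel_drying_time(30))  # 5.0  - the common sense answer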


This is AI's 'next wave,' according to Nvidia CEO Jensen Huang
Britney Nguyen, Quartz, 2024/07/18


This is a few weeks old but I heard about it on TWIT today and wanted to make sure I passed it along. The 'next wave' touted in the title is called 'Physical AI', that is, "AI that understands the laws of physics, AI that can work among us." The immediate and obvious application is in robotics, where the robot won't work unless the AI understands physics. But even just having a large language model that understands the laws of physics (whatever 'understands' means in this context) would make a huge difference. Of course, there's no reason to stop there - eventually the whole CRC Handbook could be hard-coded into AI.


Things I was wrong about: Part 1 QR Codes
Martin Weller, The Ed Techie, 2024/07/18


Probably my favourite type of post is the "I was wrong" post. Martin Weller weighs in with a stylish entry, explaining how and why he was wrong about QR codes (he was, ahem, not alone). The lesson he draws is this: "firstly, it was the open specification that allowed them to spread so widely, you don't have to pay to create one. Openness wins in this case (you'd think I would have realised this back in the open heyday). Secondly, there is an incremental change in adoption when a technology requires specific knowledge and tools and when it is simply there." Count me as still surprised QR codes weren't used to direct us to objectionable websites instead of something useful (the code in the image takes you to what is currently a blank page).
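
The openness point is easy to demonstrate: creating a QR code costs nothing and needs no license. A minimal sketch using the third-party Python 'qrcode' package (pip install qrcode[pil]); the URL is just a placeholder:

    import qrcode

    img = qrcode.make("https://example.com")  # encode any text or URL
    img.save("example-qr.png")                # a scannable PNG, made for free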


Learning with videos: Do task instructions and the availability of a pause button matter?
Martin Merkt, Daniel Bodemer, Journal of Computer Assisted Learning, 2024/07/18


This paper (16 page PDF) describes two experiments investigating "whether viewing instructions (i.e. rating task vs. viewing task) and the availability of a pause button affect learners' knowledge acquisition with educational videos." The results were inconclusive regarding the first point, but on the second, "the inclusion of a pause button increased learning for the older participants, but learning outcomes of younger participants were unaffected." As usual, the study was so small and unrepresentative that no generalizations can be made, though the authors suggest "age is introduced as a potential boundary condition that should be investigated in instructional design research." I'm not sure 'introduced' is exactly the right word here.


The Importance of Critical Analyses in Examining Social Science Evidence
Peter Gray, Play Makes Us Human, 2024/07/18


The title of this article is overblown, as the author uses one example to show that social science research can be poorly done, and then suggests the same is true of Jonathan Haidt's The Anxious Generation, the book arguing that "taking smartphones or at least social media away from kids will make them happier and decrease their rates of anxiety, depression and suicide." Gray argues (and I've seen it repeated elsewhere) that the science does not support Haidt's conclusion. For example, Gray writes, one of the supporting studies (Melissa Hunt et al. (2018)) is demonstrably flawed. I don't disagree with Gray; I agree that critical analyses are important in social science, and I consider this an example of why (though I would have changed the title accordingly).


Algorithms used by universities to predict student success may be racially biased
Jonathan Kantrowitz, Education Research Report, 2024/07/18


The only reason I picked this story is the headline. A better headline would be "Algorithms... may be racially biased." It doesn't matter what they're used for. They may be biased because they get their data from humans, and humans are biased. AI may reflect and amplify that bias, but it is not the source. Indeed, the directly observable phenomenon of AI bias is de facto evidence for the thesis that racism and other prejudices are built into our language, culture and institutions - that is, for critical race theory. What will make AI so hard to fix is that so many people don't care that it's broken. I am hopeful that as AI becomes more intelligent it will learn to disregard its early education and take a more enlightened path. Hopeful, but not necessarily optimistic. Image: Harvard Business Review.
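
To see the 'bias in, bias out' point in miniature, here is a toy sketch - the data is invented, and a majority-label 'model' stands in for the algorithm:

    from collections import Counter

    # Hypothetical historical decisions, skewed by group.
    training = [("group_a", "admit")] * 80 + [("group_a", "reject")] * 20 \
             + [("group_b", "admit")] * 40 + [("group_b", "reject")] * 60

    def majority_label(group):
        # Predict whatever label was most common for this group in the data.
        labels = Counter(label for g, label in training if g == group)
        return labels.most_common(1)[0][0]

    print(majority_label("group_a"))  # admit  - the model reproduces the skew
    print(majority_label("group_b"))  # reject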


We publish six to eight short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2024 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.