Understanding and Meaning
Tim Klapdor,
2024/03/04
This is a nice response to my commentary from the other day on understanding and meaning in artificial intelligence. There's a lot going on in this response, and several different objections are sometimes blended into a single point - for example, Tim Klapdor at one point says "the 'worldview' this AI has isn't worldly, connected or contextual," which is one objection, and then in the same paragraph says AI responses are merely "calculations of the most probable correct answer," which is a second objection. We need two different approaches to respond: on the one hand, suggesting that nothing in principle would prevent an AI from drawing upon a wider context; and on the other hand, suggesting that human knowledge (here characterized as 'triangulation') is also nothing more than calculation. Klapdor also links to a great post by Jennifer Moore making similar points, but which reminds me that we must distinguish between whether AI is intelligent now (we agree, it isn't) and whether it ever could be in the future (it could be, in my view). Image: a nice article on understanding by Scott H. Young.
Web: [Direct Link] [This Post]
Relevant Thinking and Learner Success
Eric Sheninger,
A Principal's Reflections,
2024/03/04
I think there's value to this approach, although I think the presentation lacks a certain coherence. The core of the idea, styled awkwardly as 'relevant thinking', is that understanding the relevance of what we are learning is important. Eric Sheninger outlines seven contexts in which this principle arises, and generalizes from these to provide a 'Relevant Thinking Framework', which takes a grid crossing Bloom's cognitive domain on one axis and 'application models' on the other to derive four quadrants ('acquisition', 'assimilation', 'application', 'adaptation') and a set of corresponding technology tools. The Framework to me looks like older (much older) work, and doesn't seem to be connected to 'relevance' at all. Meanwhile, we don't really learn how relevance applies in any detail to any type of learning. And yet - it does apply, clearly. In my view, though, the best way to understand the role relevance plays is through a discussion of relevant similarity in recognition.
Web: [Direct Link] [This Post]
Building Community Governance for AI (SSIR)
Lina Srivastava,
Stanford Social Innovation Review,
2024/03/04
"Relying on tech companies to govern their own AI development carves a path toward societal collapse by repeating mistakes made in past development of the web and social media," writes Lina Srivastava. So what's the alternative? "Supporting cooperative, collective, and collaborative structures." While I have a lot of sympathy for cooperatives and collectives, if we consider AI to be "a public utility" then I'm not going to be comfortable leaving it in private hands, even the more generally capable hands of a cooperative. Even under a co-op model, Srivastava argues, AI governance requires strong regulatory oversight, a rights-based framework, public education, public discourse, shared ownership, democratic control, and collaboration. That's exactly what government provides. If it's not a public utility, though, we can leave it in private hands, even with the caveats Srivastava proposes, but with the additional proviso that it not replace government functions or public utilities.
Web: [Direct Link] [This Post]
On Not Using Generative AI
Lance Eaton,
2024/03/04
Today's new term is 'phenomenography', which is "a qualitative research methodology that investigates the qualitatively different ways in which people experience something or think about something." The term, here defined via Wikipedia, appears in the context of asking an AI to provide the overall structure of a dissertation chapter on the subject. Lance Eaton says here, "even if (AI) is wrong on some things, it is going to be more right than I ever will be." That makes it a terrific guide even if it means I have to check some things for myself. To me, the main issue is time. Sure, I could learn something or do something on my own - write a Javascript API function, for example. And I can tell whether it has been done correctly after the fact. But the time taken to get from here to there is a lot shorter with the AI. And that for me (and, it appears, Lance Eaton) makes a pretty good case for using AI when it saves time.
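To make the time-saving example concrete: the sort of small JavaScript API helper I have in mind might look like the sketch below. The function name, the example host, and the parameters are all illustrative assumptions of mine, not anything from Eaton's post - the point is only that it's quick to verify after the fact, even if it would take me a while to write from scratch.

```javascript
// Hypothetical helper: build a query URL for a REST API call.
// The function name, base URL, and parameters are illustrative only.
function buildApiUrl(base, path, params = {}) {
  const url = new URL(path, base);
  // Append each key/value pair as a query-string parameter
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, String(value));
  }
  return url.toString();
}

console.log(buildApiUrl("https://api.example.com", "/v1/posts", { author: "eaton", limit: 5 }));
// → https://api.example.com/v1/posts?author=eaton&limit=5
```

Checking that the output URL is well-formed takes seconds; that asymmetry between writing and verifying is what makes the time argument work.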
Web: [Direct Link] [This Post]
Doing Educational Equity Right: The Homework Gap
Michael J. Petrilli,
Education Next,
2024/03/04
This article performs the remarkable feat of discussing the issue of homework for lower-income students without mentioning once that lower-income students work after school. Certainly this was my experience, and the experience of pretty much everyone in the rural farm community where I grew up. In the city, the whole economics of fast food and mall culture would not be possible without teen-age labour. But no, this Fordham Institute blog post prefers to pretend that the main problem is finding a quiet environment where lower-income students can do homework. Sure, it's a problem. But it's just one problem among many. A better plan? Work toward society-wide income equity instead of band-aid solutions in the education system.
Web: [Direct Link] [This Post]
Cognitive Load Theory: An Unpersuasive Attempt to Justify Direct Instruction
Alfie Kohn,
2024/03/04
Alfie Kohn's main point is in the headline (I like it when stories are set up like that). Kohn writes, "Progressive education isn't just more engaging than what might be called regressive education; according to decades of research, it's also more effective — particularly with regard to the kinds of learning that matter most. And that remains true even after taking our cognitive architecture into account." The article is structured as a series of seven arguments that make the point, ranging from "methodological flaws in its research base are so serious as to raise doubts about the concept itself" to "reducing cognitive load isn't always desirable." Image: Sheffield SCITT.
Web: [Direct Link] [This Post]
There are many ways to read OLDaily; pick whatever works best for you:
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2024 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.