AI ... and the rhinoceros in the room redux
Lorcan Dempsey,
LorcanDempsey.net,
2023/12/04
This is an interesting post, the gist of which is that if we want to understand AI, we should get hands-on experience with it. "It is important to build up some tacit knowledge of how these tools behave, and how they can surprise." The focus is mostly on generative AI, but I think it would be useful to aim for a broader range of experience. For example, I've spent a lot of time using AI: training Feedly's Leo to prioritize posts, using it to transcribe audio recordings, using it to improve the quality of my photos (not to alter them, but to do things like improve colour balance, sharpen the image, and de-speckle), and more recently, generating code snippets. I think this sort of learning provides a much better grounding than relying on stories. As it always has.
Web: [Direct Link] [This Post]
True voyage is return
Martin Weller,
The Ed Techie,
2023/12/04
The most interesting part of this post is the title, which I'll get to. In the post, Martin Weller talks about his PhD in expert systems back in the days of good old fashioned artificial intelligence (GOFAI), based on symbol systems and rule-based approaches. By contrast, says Weller, "Machine learning says sod all that, lets chuck vast amounts of data and let the system derive patterns." But recently we've seen weaknesses in that approach, suggesting "adding in a representation of a field can tweak or filter the results of these data driven models, for example a mixing of symbolic AI with the power of LLM (large language models)." Would that work? Maybe - but it may create a mess if we simply assume the representation is the knowledge. Which brings us back to the title. The phrase 'True voyage is return' is from Ursula K. Le Guin, and it expresses a Taoist idea: "at the end of a journey when we come back to where we began, it's only then that we can see how much we've changed." Have the proponents of GOFAI learned the lessons of their years outside the mainstream, or do they believe they are triumphantly unchanged?
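To make the hybrid idea concrete, here's a minimal sketch (all names are hypothetical, and the 'LLM' is just a stub) of what "adding in a representation of a field" to filter a data-driven model's output might look like: the statistical model proposes candidate answers, and a small symbolic knowledge base vets them before anything reaches the user.

```python
def fake_llm_candidates(question):
    """Stand-in for a real language model: returns ranked candidate answers,
    some of which are plausible-sounding but wrong."""
    return [
        "Paris is the capital of France.",
        "France has no capital.",
        "Lyon is the capital of France.",
    ]

# The 'representation of a field' as explicit, inspectable symbolic facts -
# the GOFAI-style component of the hybrid.
KNOWLEDGE = {("capital_of", "France"): "Paris"}

def symbolic_filter(candidates):
    """Keep only candidates consistent with the symbolic knowledge base."""
    capital = KNOWLEDGE[("capital_of", "France")]
    return [c for c in candidates if capital in c]

answers = symbolic_filter(fake_llm_candidates("What is the capital of France?"))
print(answers)  # only the candidate consistent with the knowledge base survives
```

Of course, this is exactly where Weller's caution bites: the filter is only as good as the representation, and if we mistake the representation for the knowledge itself, the hybrid inherits the old GOFAI brittleness.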
Web: [Direct Link] [This Post]
Does the new AI Framework serve schools or edtech?
Lucinda McKnight, Leon Furze,
EduResearch Matters,
2023/12/04
"On 30 November, 2023 (i.e., five days ago), the Australian federal government released its Australian Framework for Generative AI in Schools," write the authors. "However, in this fast-moving space, the policy may already be out of date." That's a catchy intro (I'll admit I was hooked), but in fairness the article focuses more on what the authors feel is incomplete or wrong, not on what's out of date. For example, they argue the Framework "suggests schools and teachers can use tools that are inherently flawed, biased, mysterious and insecure, in ways that are sound, un-biased, transparent and ethical." There's a list of the things the authors think should be changed in the next iteration. I'm not exactly opposed to any of these, but I think our approach to AI should be more nuanced.
For example, instead of saying "AI is biased", we should be saying "AI is more biased than P", where P is some existing practice performing the same function (say, reporting the news, grading papers, summarizing articles, or predicting crime). Similarly, instead of saying that because AI is so dangerous 'it should be transparent', 'there should be a right of appeal,' etc., we should apply these criteria to existing programs and services (e.g., police services should be transparent; the decisions of airlines should be subject to a right of appeal). I oppose all the same things AI sceptics do; it's just that I don't think AI is a unique (or even particularly bad) source of them.
Web: [Direct Link] [This Post]
Inside America's School Internet Censorship Machine
Dhruv Mehrotra, Todd Feathers,
Wired,
2023/12/04
You might be able to see this Wired article if you haven't tried to access the site recently. Normally I wouldn't link to it, but I'll make an exception in this case because it documents pervasive and obtrusive website blocking in the U.S. school system (I imagine similar blocking occurs internationally, but I don't know; I do know that in my own workplace there's pervasive blocking of innocuous content, which is yet another reason why I prefer to work at home). The point of the Wired article is that the blocking does schoolchildren more harm than good, as information about disease control, school shootings, and other facts of day-to-day life is inaccessible. If you can't read the article, there's not much discussion of it yet online, but this background information from Wikipedia may be useful. Here's a (probably inaccurate) global censorship map.
Web: [Direct Link] [This Post]
Undergraduate completion rates stabilize; one third of students don't finish college in under six years
Bryan Alexander,
2023/12/04
My comment on this was, "If one third of the cars sold in the country didn't work, there would be a national outcry." Bryan Alexander says, "First, I'm struck by how the COVID pandemic didn't adjust this cohort's progress, at least compared to its predecessors. Or perhaps completion rates would have been higher without the crisis?" Or perhaps if we stayed online we could get past the 'natural' limit we seem to see in in-person learning.
Web: [Direct Link] [This Post]
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2023 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.