Generative AI and the Potential for (Anti) Social Learning
Julian Stodd,
Julian Stodd's Learning Blog,
2023/03/15
This is a good article and should be read regardless of what I may have to say about it. But for me, what's interesting is that it shines a light on where I would disagree most (I think) with Julian Stodd (and no doubt many others who share his perspective), and that is with respect to the role of stories in learning: "a view that some learning is fully formal (a story told to us by the Organisation), some is fully social (stories constructed and held in our tribal communities), and some may land in the middle space – essentially tribal in origin (so grounded, authentic, lived, experiential, owned), but accessible globally (if we earn the right)." To be sure, this is an oversimplification. But it seems to me the only sense in which learning is a 'story' is the sense in which human experience is linear; we can't get outside the constraints of time. If we're going to use simple metaphors for learning, then learning is much more like a map than a story. "Social Learning," as Stodd says, "is essentially dialogue based," but learning properly so-called, importantly, is not.
Web: [Direct Link] [This Post]
Embracing AI for student and staff productivity
Lynnae Venaruzzo, Kate Ames, Steve Leichtweis,
Australasian Council on Open Distance and eLearning,
2023/03/15
This is a report and white paper (8 page PDF) drafted as an outcome of the ACODE 88 workshop and round table held early March in Australia; you can find recordings (the first of which reminded me of the opening titles of the IT Crowd) and more documentation on the web site. Participants were mostly senior administrators from Australasian colleges and universities. Of the eight recommendations, the first seven are completely generic and could be applied to any technology; only the last, that "institutions prioritise assessment redesign," applies specifically to AI. Still, the discussion of affordances and priorities is pretty good, not as deep as it could be, but better than a lot of what I've read on this. Via George Siemens's Sensemaking, AI, and Learning (SAIL) newsletter.
Web: [Direct Link] [This Post]
Just over 1 in 10 faculty say their college has set classroom ChatGPT guidance, survey finds
Laura Spitalniak,
Higher Ed Dive,
2023/03/15
This article offers a nice teaching point in the use of loaded language to make a point that is never explicitly asserted. The article states that 'Only about 14% of faculty members say their colleges' administration has set guidelines for how professors and students should use ChatGPT' (my italics), the word 'only' signifying that the number should be higher, and therefore that a centrally managed classroom ChatGPT policy is desirable. The word 'just' in the headline serves the same purpose. Why would the article be manipulative like this? Perhaps because the survey results it references aren't accessible; they are for sale via an associated website. And a good way to generate sales is to create evidence of some sort of 'concern' - especially one that (as we read here) private colleges are putatively handling better than community colleges.
Web: [Direct Link] [This Post]
Microsoft lays off team that taught employees how to make AI tools responsibly
Casey Newton, Zoe Schiffer,
The Verge,
2023/03/15
I can't say this fills me with confidence: "Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company... The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream."
Web: [Direct Link] [This Post]
Can Large Language Models (LLMs) like ChatGPT help create OERs in a more sustainable way?
Encore+ Project,
2023/03/15
This is a super-brief post reporting that "Teachers are using ChatGPT for schools more than students, says a recent survey carried out by the Walton Family Foundation." I don't actually trust foundation research like this, because it often comes with an agenda (like, say, promoting charter schools). I'm not sure why the European Network for Catalysing Open Resources in Education is covering this item. The article also references a recent post (January) from David Wiley arguing for "the need for the future instructional designer to immediately update their curriculum to leverage the existence of these (AI) tools". Maybe so. But more to the point: this all feels like it was written by AI. None of the content relates to the headline, and it was the headline that caught my eye, because I've been predicting the possibility of custom on-demand AI-authored OER for many years now. If a human wrote this, let them step forward and take accountability for really phoning it in on this one.
Web: [Direct Link] [This Post]
The Role of AI in Online Reading Comprehension: A New Literacies Perspective
Ian O'Byrne,
2023/03/15
Yet another comment on how AI will support online learning assistants. "This post discusses advances in machine learning, artificial intelligence, and microinteractions as we read online." The interesting remark (and I actually remember this) was when people were asking, in the early days, whether online reading even counted as 'reading'. But my main point centres on this remark: "One thing appears to be true at this early point, AI will not substitute for the human connections and emotional aspects of learning. AI will not replace teachers and educators." How does this 'appear to be true'? Is there even a shred of evidence that supports this? I ask this especially in view of the fact that not all human interactions are benign. Emotional abuse, unfair grading, discrimination and prejudice - all these and more are common experiences for students. Don't be so sure AI won't replace 'human connections'. A lot of people out there are wishing it would. Related: Khanmigo, Khan Academy's AI-powered guide.
Web: [Direct Link] [This Post]
Doing More with Moore: Biotech's Tech Moment
Jorge Conde, Jay Rughani,
Andreessen Horowitz,
2023/03/15
This is not an endorsement of the venture capitalist (VC) approach to things, nor is biotech directly related to edtech - yet - but it's always a good idea to keep an eye on the future, even if viewed from outside the usual frame of reference. And this article is an important glimpse into the future. As it reports, "From the molecular to the mundane, the biotech industry's wide-ranging appetite for science-native SaaS tools is accelerating and will modernize how medicines are discovered, developed, and distributed." So why does this matter? Education is fundamentally about the human being, which is the focus of biotech.
I'm not proposing that biotech will produce 'instant knowledge' drugs à la The Matrix. The outcomes will be much more subtle and interesting. For example: we know that certain drugs can change your personality, for better or worse, though the calibration is very crude and imprecise. But imagine being able to change personality cheaply, predictably, and without side effects. You could put on your business face along with your power suit, shift to relaxed and accepting along with your meditation robes, and become analytical and persistent when you put on your lab coat. This is one small glimpse of what may be possible in, say, 20 to 40 years. If I were in high school today, this is one of the fields of the future I'd be looking at.
Web: [Direct Link] [This Post]
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2023 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.