Does it matter if we call talking to algorithms 'Prompt Engineering'?
Dave Cormier,
Dave's Educational Blog,
2023/08/28
This post touches on a number of key issues centred on the question of how best to help students get good results from their web searches. Direct instruction? "I don't care about the kind of learning research that is based on an increase in student 'retention of information' or an increase of 'test scores'," writes Cormier. But if "I'm moving 'getting stuff from the Internet in effective and ethical ways' up my list to 'core literacy'" then what's the best way to do it? Does what I'm calling it help or hinder my ambition? "If everyone else calls it prompt engineering and I don't, then participants won't be able to follow up with other research." Hmm. But "I do worry, however, about the implication of the word engineering." P.S. I wonder how many educators are still using Twitter, despite everything.
The Emergence of Presentation Software and the Prehistory of PowerPoint
Geoffrey Rockwell,
Theoreti.ca,
2023/08/28
Short post that looks at how PowerPoint shapes thought and asks about a future with AI-designed (and delivered) presentations. "Presentation tools have a cognitive style that encourages us to break complex ideas into chunks and then show one chunk at a time in a linear sequence." This is a natural design for an instructor-led presentation because it ensures that everyone stays on message. By contrast, as Edward Tufte's pamphlet The Cognitive Style of PowerPoint suggests, "a well designed handout or pamphlet can present a lot more information in a way that doesn't hide the connections." True. But the price you pay for that capacity is control. People interpret a visual presentation in their own way. Personally, I think this is better. But then, I don't seek to control what people know and learn.
David is friends with Lazarus. Lazarus is an AI chatbot
Padraig Moran,
CBC,
2023/08/28
I introduced Replika last November. Today CBC aired this report on people forming attachments to the AI software, "prompting concerns among experts who study the ethics around the rapidly evolving technology." What was most interesting to me was the impact on people when the software was degraded. "They compared that to their loved one or their partner undergoing a lobotomy … and people were grieving that loss." When we talk about the ethics of AI, I think, we should include discussion about the ethics of manipulating people's emotions, especially in learning and educational settings.
Legacy Schools Program
The Gord Downie & Chanie Wenjack Fund,
2023/08/28
From the website: "The Legacy Schools program is a free national initiative to engage, empower and connect students and educators to further reconciliation through awareness, education and action (#reconciliACTION)... We provide educational resources and program development for Legacy Schools to help ensure that the unique interests, rights, and perspectives of Indigenous peoples are recognized and implemented in schools and communities throughout Canada."
The current (digital) Dark Age
Dave Lane,
Dave Lane,
2023/08/28
Dave Lane finds his calling. "Here's hoping humanity can surprise us all by again reaching a tipping point, and initiating an age of digital enlightenment, characterised by enabled communities, where self-reliance and empowerment are championed ("the user is the developer") rather than sneered at by the fearful, insecure masses. That goal is my calling." It is a noble venture, well worth spending a lifetime on.
Trust in Artificial Intelligence: A global study
Nicole Gillespie, Steve Lockey, Caitlin Curtis, Javad Pool, Ali Akbari,
University of Queensland, KPMG Australia,
2023/08/28
This is a good study (82 page PDF) on trust in AI conducted by KPMG and the University of Queensland. Based on a survey of some 17,000 people in 17 countries covering all global regions, it finds "strong global endorsement for the principles of trustworthy AI," which include things like privacy, accuracy, fairness, and human agency. It outlines four major 'pathways' to the development of trust in AI: safeguards, regulations, and laws; perceived benefits of AI; a way to address concerns and risks; and people's understanding of AI use and efficacy. So, the usual pathways for any technology adoption.
A Meta-Analysis of the Efficacy of Trigger Warnings, Content Warnings, and Content Notes
Victoria M. E. Bridgland, Payton J. Jones, Benjamin W. Bellet,
Clinical Psychological Science,
2023/08/28
One oft-discussed feature of Mastodon as compared to (say) Twitter or Facebook is the use of content warnings (aka trigger warnings) to allow people to avoid content they don't wish to see. I find it valuable when news programs state in advance that they will show disturbing violent images, as these images haunt me and can give me nightmares. This study is a meta-analysis of the effects of trigger warnings. Overall, the researchers conclude, the effects are negligible. It's easy to misread this, as (say) Donald Clark does: "(They) not only have no effect on affective responses to negative material or on educational outcomes; (t)hey do, however, reliably increase anticipatory affect... In other words, stop doing it." Sure, overall, the effect is negligible. But it doesn't matter what the overall effect is; what matters is whether trigger warnings help the people who need help. These studies make no attempt to differentiate among populations (relying either on Mechanical Turk workers or undergraduate students), thus trivializing the impact and leading to what is ultimately uncaring advice to "stop doing it".
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2023 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.