Feature Article
My Conversation with Pi, the AI, on the Ethics of AI
Stephen Downes,
Half an Hour,
2023/06/19
My conversation with an AI exploring the question of ethics for AI systems. We considered the idea of an AI having its own ethical system, the question of where it would come from, and the risk of it being a corporate or legalistic rules-based system.
[Link] [Local copy]
How do we respond to generative AI in education? Open educational practices give us a framework for an ongoing process
Anna Mills, Maha Bali, Lance Eaton,
Journal of Applied Learning & Teaching,
2023/06/19
The authors argue (15 page PDF) that "a set of open educational practices (OEP), inspired by both the Open Educational Resources (OER) movement and digital collaboration practices popularized in the pandemic, can help educators cope and perhaps thrive in an era of rapidly evolving AI." Diverse communities, sharing and openness, crowdsourcing, and cooperation - all these foster a robust and resilient response to the challenges posed by the new technologies, acting as a 'shock absorber', if you will. The bulk of the paper is composed of the three authors' responses to rapid change through the mechanisms of OEP, and while there's a satisfying "how I did it" aspect to these, the resulting discussion tends to the superficial rather than developing an understanding of why OEP produce this result (a question examined in parts in today's newsletter as a whole), referencing only the PICRAT "technology integration model" and the idea of "entangled pedagogy". Image: New America.
Web: [Direct Link] [This Post]
For whom do we write as the world burns?
Richard Hall,
Richard Hall's Space,
2023/06/19
We haven't heard much from Richard Hall recently, but this is as poignant as anything he has written. "For whom do we write as the world burns?" he asks. For as we write in environments where "subjects and institutions discipline us epistemologically, ontologically and methodologically to perform in particular ways, we might ask whether our being, doing, knowing and writing are simply reproducing, in and through the text, a collective life that is becoming more efficiently unsustainable." It's a question I wrestle with myself a lot. Like anyone else working in an institution, I am expected to present certain types of work, in certain ways. These are expectations I have mostly resisted, but how successfully? And what are the long-term costs of such expectations, depending as they do on unsustainable models of society, commerce and governance?
Web: [Direct Link] [This Post]
ChatGPT will make the web toxic for its successors
Ben Dickson,
TechTalks,
2023/06/19
As this article reports, "Machine learning models trained on content created by generative AI will suffer from model collapse, according to a new study" (18 page PDF). Specifically, "What happens when the internet becomes flooded with AI-generated content? That content will eventually be collected and used to train the next iterations of generative models." The article describes in detail how the re-use of AI-generated content corrupts the data: basically, probable events are over-estimated, and improbable events are under-estimated. But if everybody agrees that this is a bad idea (as they appear to), then why would they do it? And isn't this what happens with human networks as well? When people just talk among themselves, without any new evidence being introduced, they abstract more and more until they end up with just a few polarized points of view. Without openness, networks tend toward model collapse.
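The feedback loop is easy to see in miniature. Here is a toy sketch (my own illustration, not the paper's experiment): a Gaussian "model" is repeatedly fit to samples drawn from the previous fit, with no fresh real data, and the estimated spread of the distribution, which is where its improbable tail events live, steadily shrinks away.

```python
import numpy as np

# Toy illustration of model collapse: each "generation" fits a Gaussian
# to data generated entirely by the previous generation's fit.
rng = np.random.default_rng(42)

n = 20                             # a small sample per generation exaggerates the effect
data = rng.normal(0.0, 1.0, n)     # generation 0: "real" data with std = 1.0

spreads = []
for _ in range(300):
    mu, sigma = data.mean(), data.std()   # fit the current "model"
    spreads.append(sigma)
    data = rng.normal(mu, sigma, n)       # the next model sees only synthetic data

print(f"estimated std, generation 1:   {spreads[0]:.3f}")
print(f"estimated std, generation 300: {spreads[-1]:.6f}")
# The spread collapses toward zero: rare (tail) events vanish first,
# leaving an ever narrower, over-confident distribution.
```

The numbers (sample size, generation count) are arbitrary, but the direction of drift is the point: without new outside evidence entering the loop, variation is lost and the model converges on a caricature of the original distribution.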
Web: [Direct Link] [This Post]
Updates (31)
Ian Linkletter,
GoFundMe,
2023/06/19
Ian Linkletter is taking his case against Proctorio to the Supreme Court, asking three key questions:
The answer to all three of these should, of course, be 'no', for any other answer, as Linkletter argues, changes some fundamental tenets of Canadian internet and copyright law. Anyhow, he'll need more money to take his case forward.
Web: [Direct Link] [This Post]
Eastern philosophy says there is no "self." Science agrees
Big Think,
2023/06/19
This is a very superficial presentation of a very deep idea that can be summarized with a simple (though misleading) catchphrase: there is no self. I would be inclined to say that there is a self, but that it is very different from this idea of self as 'pilot' or 'executive function' that prevails in western folk psychology. To me, it is evident that there is no such self because the idea of such a self is incoherent. There is no 'view from nowhere' from which we can look at our experiences as though our experiences are not us. What there is, though, is an inner voice that lies to us. This article's discussion of that voice should remind us of ChatGPT - when it doesn't know why, say, we stood up, our inner voice makes up an explanation. But this voice isn't the 'self', it's only an experience. "We have mistaken the process of thinking as a genuine thing." Similarly (as I've argued elsewhere), our consciousness is simply a process, not a 'thing' that 'is conscious'.
Web: [Direct Link] [This Post]
There are many ways to read OLDaily; pick whatever works best for you:
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2023 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.