Tech Elite's AI Ideologies Have Racist Foundations, Say AI Ethicists
Samara Linton,
People of Color in Tech,
2023/06/07
Here's the argument: "More and more prominent tech figures are voicing concerns about superintelligent AI and risks to the future of humanity. But as leading AI ethicist Timnit Gebru and researcher Émile P. Torres point out, these ideologies have deeply racist foundations." The acronym TESCREAL stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. "These overlapping emergent belief systems all trace their lineage back to the first-wave Anglo-American eugenics tradition, and underlying all is a kind of techno-utopianism and a sense that one is genuinely saving the world." What this means in practice, they argue, is that on this view some lives are worth more than others, and some cultures deserve more protection than others. It's important, argues Linton, "that the fantasies of the (white male) tech elite, which are built on racist foundations, don't eclipse the need for transparency, accountability, and preventing harm to actual present-day people."
Web: [Direct Link] [This Post]
Student Perceptions of AI-Generated Avatars in Teaching Business Ethics: We Might not be Impressed
Carmen Vallis, Stephanie Wilson, Daniel Gozman, John Buchanan,
Postdigital Science and Education,
2023/06/07
There's the overt messaging in this article and then there's the more interesting subtext. The overt messaging is that the authors created an AI avatar to present content in a business ethics course and then asked students what they thought. It being business ethics, students were unsurprisingly fine with it. The subtext was based around the idea that course authors and institutions are already using a lot of AI in course design and delivery, but that it's hidden behind the scenes, and that the use of an AI avatar makes it overtly clear that the course is managed in large part by an AI. So - in such a case - is it more ethical to use the AI? It's not exactly what the authors were studying, but it does come up as a topic of discussion.
Web: [Direct Link] [This Post]
OpenAI API
2023/06/07
This is a best-practices guide for GPT. It's something we could probably have used last fall, before any of this hit the internet. It might have tempered expectations somewhat. For example, this should have been posted very clearly everywhere GPT is used (including ChatGPT): "GPTs can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs." *sigh* See also this Hacker News thread on the best-practices guide.
Web: [Direct Link] [This Post]
generative AI learning path | Google Cloud Skills Boost
Qwiklabs,
Google,
2023/06/07
As the website says, "This learning path guides you through a curated collection of content on generative AI products and technologies, from the fundamentals of Large Language Models to how to create and deploy generative AI solutions on Google Cloud." I haven't tried it so I can't vouch for it either way, but if I get a chance I'll run through some of the modules. It all appears to be free.
Web: [Direct Link] [This Post]
There are many ways to read OLDaily; pick whatever works best for you:
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2023 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.