Outraged Over Admissions Policies at Harvard? Take a Look at the Public Schools
Tim DeRoche,
The 74,
2023/09/11
This is a classic case of 'whataboutism', which "denotes in a pejorative sense a procedure in which a critical question or argument is not answered or discussed, but retorted with a critical counter-question." The question, in this case, concerns Harvard's legacy admissions (here represented simply as 'selective admission') and the retort addresses the way public schools "operate under an archaic and discriminatory assignment system that sorts kids into schools based on government-drawn maps." Now I agree: it's ridiculous that living in the wrong district can mean attending a vastly inferior school. But at least more than 3% of people get into public schools, and you can change your address in a way that you can't change your non-legacy parents. But still: there should be equity in public school funding and institutional quality. We agree. And that underlines why there should also be equity in university funding and institutional quality. It's not about the 'opening' of elite institutions to more people. It's about whether elite institutions should exist at all, draining as they do the much-needed resources that could be spent providing a quality education for everyone, even if they live on the wrong side of the tracks.
Web: [Direct Link] [This Post]
11 AI Educators to Follow, Teaching in the Age of AI, + numerous other items
Daniel S. Christian,
Learning Ecosystems,
2023/09/11
With the arrival of any new technology we see a host of newly minted experts in 'education and that technology' emerge. Just so with generative AI. Here Daniel Christian provides a list of people to follow (mostly via their LinkedIn pages). Are they really experts? I couldn't say. What is it to be an expert in generative AI (now about 8 months old) and education? Based on this list, it would seem that presenting yourself as one on LinkedIn is a minimum requirement.
Web: [Direct Link] [This Post]
In their quest for AGI safety, researchers are turning to mathematical proof
Maximilian Schreiner,
THE DECODER,
2023/09/11
This post summarizes what is quite a good paper (17-page PDF) by Max Tegmark and Steve Omohundro called "Provably safe systems: the only path to controllable AGI". As the authors note, a strong approach is needed not because AI is inherently dangerous but because potential misuse by humans will likely override less secure safeguards. But there's an approach that can be followed: "Before worrying about how to formally specify complex requirements such as 'don't drive humanity extinct', it's worth noting that there's a large suite of unsolved yet easier and very well-specified challenges" such as provable cybersecurity, a secure blockchain, secure privacy, and secure critical infrastructure. The approach (as I read it) focuses less on the AI itself and more on the AI's output, for example, by creating a specification that generated code must satisfy and then generating a proof that the generated code meets the specification. Via Martin Dougiamas.
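The "verify the output, not the model" idea can be sketched in miniature. This is only a toy illustration, not the paper's formal-proof machinery: the function names are my own, and the exhaustive check over small inputs stands in for the machine-checkable proof a real provably-safe pipeline would emit.

```python
# Toy sketch: a formal-ish specification for a sort routine, checked
# against candidate (possibly AI-generated) code whose internals we
# don't trust. A real pipeline would produce a proof object; here we
# substitute a bounded, exhaustive check over all small inputs.
from collections import Counter
from itertools import permutations

def spec_sorted(inp, out):
    """Specification: output is a non-decreasing permutation of the input."""
    same_elements = Counter(inp) == Counter(out)
    non_decreasing = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    return same_elements and non_decreasing

def candidate_sort(xs):
    """Stand-in for generated code; only its behaviour matters."""
    return sorted(xs)

def check(candidate, max_n=5):
    """Bounded 'proof': the spec holds for every input of length <= max_n."""
    for n in range(max_n + 1):
        for perm in permutations(range(n)):
            if not spec_sorted(list(perm), candidate(list(perm))):
                return False
    return True
```

The point of the design is that trust shifts from the generator to the checker: the candidate can come from anywhere, because only code that passes the specification check is accepted.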
Web: [Direct Link] [This Post]
Teaching with AI checklist
Dave Cormier,
Dave's Educational Blog,
2023/09/11
Dave Cormier offers this useful checklist of questions to ask as you prepare to teach your course in this new AI era, including such questions as "have I made ethical decisions about student data such that the assignments and activities in my class?" and "have I reviewed how my field is being affected by the web and AI generation?" I can't imagine questions like this haven't been asked by instructors, but this list organizes them and ensures nothing important is left out. Each question also includes some discussion and links to relevant resources. Image: CSU.
Web: [Direct Link] [This Post]
Pondering the fall AI semester with the Chronicle of Higher Ed
Bryan Alexander,
Bryan's Substack,
2023/09/11
We read, "The vast majority of colleges had no formal policy on the use of AI tools in the spring, according to a survey by Tyton Partners… Colleges seem to be deferring to faculty members to set their own classroom policies on appropriate use." Of course we can't view the "very thoughtful article on how AI might play out in academia for the rest of 2024" because the Chronicle is behind a paywall, though as Bryan Alexander comments here, "We're already in the academic year, so changing up class offerings or introducing new automation is challenging. Rethinking class fundamentals - assessment, curriculum - is similarly difficult in mid-stream."
Web: [Direct Link] [This Post]
Guide on the use of Generative AI
Government of Canada,
2023/09/11
This is a guide released a few days ago by the Government of Canada. From the intro: "This document provides preliminary guidance to federal institutions on their use of generative AI tools. This includes instances where these tools are deployed by federal institutions. It provides an overview of generative AI, identifies challenges and concerns relating to its use, puts forward principles for using it responsibly, and offers policy considerations and best practices." It's a pretty good set of best practices and would be useful as a template for institutions considering their own sets of guidelines.
Web: [Direct Link] [This Post]
Blockchain and Micro-credentials in Education
Rory McGreal,
International Journal of E-Learning & Distance Education,
2023/09/11
Relatively short article describing blockchain and its use to record microcredentials. Most of the article is a straightforward outline of the technology and its uses. The most useful bit for me was the 'state of the art' section outlining a number of recent and current initiatives, including Stacks, formerly Blockstack, which adopted the Proof of Transfer Protocol (PoX). "Blockchain can also ensure the sustainability and accessibility of official credentials," writes Rory McGreal. "All the records are secured and permanent. This can become very important if an institution changes its name or even disappears." Image: PWD, via Rogers.
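The permanence claim rests on how chained hashes make records tamper-evident. A minimal sketch, assuming nothing about Stacks or PoX specifically (the function names here are my own illustration, not any credentialing API):

```python
# Minimal sketch of why chained credential records are tamper-evident:
# each record's hash incorporates the previous hash, so altering any
# earlier record changes every hash after it.
import hashlib
import json

def record_hash(record, prev_hash):
    """Hash a credential record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records):
    """Chain records so each hash depends on all earlier entries."""
    hashes, prev = [], "0" * 64
    for rec in records:
        prev = record_hash(rec, prev)
        hashes.append(prev)
    return hashes
```

Because any copy of the chain can be re-verified independently, the credentials remain checkable even if the issuing institution changes its name or disappears, provided copies of the chain survive.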
Web: [Direct Link] [This Post]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2023 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.