"On 30 November, 2023 (ie., five days ago), the Australian federal government released its Australian Framework for Generative AI in Schools.," write the authors. "However, in this fast-moving space, the policy may already be out of date." That's a catchy intro (I'll admit I was hooked) but in fairness the article focuses more on what the authors feel is incomplete or wrong, not what's out of date. For example, they argue the Framework "suggests schools and teachers can use tools that are inherently flawed, biased, mysterious and insecure, in ways that are sound, un-biased, transparent and ethical." There's a list of the things the authors think should be changed in the next iteration. I'm not exactly opposed to any of these, but I think our approach to AI should be more nuanced.
For example, instead of saying "AI is biased", we should be saying "AI is more biased than P", where P is whoever or whatever is performing that task now (like, say, reporting the news, grading papers, summarizing articles, predicting crime, etc.). Similarly, instead of saying that because AI is so dangerous 'it should be transparent', 'there should be a right of appeal', etc., we should apply these criteria to existing programs and services (e.g., police services should be transparent; the decisions of airlines should have a right of appeal, etc.). I oppose all the same things AI sceptics oppose; it's just that I don't think AI is a unique (or even particularly bad) source of them.