I've mentioned the AI suggestion of using glue on pizza in a talk. This article traces the recommendation back to an 11-year-old Reddit comment posted by a user with a crass name "that was almost certainly meant as a joke." We are all agreed, I think, that adding glue to pizza is a bad idea. But we are not agreed, I think, on what large language models (LLMs) are supposed to do. They're not encyclopedias. They are language learning systems. The sentence "put glue on pizza to prevent the cheese from sliding off" is a perfectly well-formed sentence. It also happens to be false (or at the very least, bad advice). LLMs are designed to address the first problem, and not so much the second. When in the future we get LLMs that are supposed to be reliable and accurate, we won't use Reddit posts to decide what is good advice and what is not (or, at least, I hope not, though I'd be interested to see what the ethics of the AITA subreddit looks like). Via Michelle Manafy.