If you're rationing your precious MIT Technology Review reads (because you only get three free articles a month(*)), then don't bother with this one; it tells us what we already know: even GPT-3 doesn't know what it's talking about, so it makes a number of amusing errors. But I wouldn't expect it to; nobody is claiming GPT-3 has completed the project of AI. I'm more interested in the way this is being phrased: "It has only the dimmest sense of what those words mean, and no sense whatsoever about how those words relate to the world." Well, sure. But all of the mistakes made in the example are linguistic mistakes (that just happen to correspond to factual errors). Give GPT-3 another order of magnitude more data and these linguistic errors would diminish, at which point we would find it making the same sorts of errors less well-educated people do.

(*) I read them on Feedly, which doesn't care how many articles I've read.