It has become a cliché to state that generative AI systems are "stochastic parrots." While this may be true of some systems, this article suggests there is evidence that the largest large language models learn enough skills to understand the words they are processing: "As these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data." This raises the question of what we mean by 'understanding', but when an AI can generalize beyond its training data and demonstrate mastery of skills such as self-serving bias, metaphor, statistical syllogism and common-knowledge physics, it becomes harder and harder to say it is just parroting a response. Via Donald Clark.