This probably doesn't have to be said, but there are no doubt many people still following the language of thought hypothesis. "The fact that language models work at all is proof that something resembling language learning can happen without any of the specialized machinery Chomsky proposed. Systems based on neural networks have been wildly successful at many tasks that are totally unrelated to language processing, and their training procedure ignores everything linguists have learned about the intricate structure of sentences." But that's still something that needs to be tested - and this article explores whether it's harder (or even possible) for a large language model to learn a non-human 'impossible' language. "Kallini and her co-authors pinpointed a simple principle called 'information locality' that explains why their models found some of the impossible languages harder than others. That principle might also be relevant to human language acquisition." Fascinating.