Large language models have a reasoning problem
Ben Dickson,
TechTalks,
Jun 28, 2022
The question being considered here is: can large language models (LLMs) do logical reasoning like humans? The answer, according to the research paper summarized here, is that they find clever ways to learn statistical features that inherently exist in the reasoning problems, rather than doing actual reasoning themselves. But I think it's worth asking whether humans do logical reasoning. In basic courses on logic or probability, we see human learners making the same sorts of errors these AI systems seem to make. So, yes, while "Caution should be taken when we seek to train neural models end-to-end to solve NLP tasks that involve both logical reasoning and prior knowledge and are presented with language variance," the same holds for human learners. It takes a lot to train humans to perform higher-order functions like logic, math and language. Years, even.