Large language models (LLMs) are the type of artificial intelligence that can "generate articles, write software code, and engage in conversations about sentience and life." However, according to this article, they are unable to engage in what's known as System 2 thinking, the kind of thinking that involves methodical planning and analysis. The article describes a new set of tests "that can serve as a benchmark as people improve LLMs via finetuning and other approaches to customize/improve their performance to/on reasoning tasks." The tasks are long and involved and "can't be cheated using pattern recognition". This article makes me wonder whether you need to learn to reason by interacting with the physical world - writing things out and referring back to them, for example.