To understand language models, we must separate 'language' from 'thought'


This article summarizes a paper titled 'Dissociating language and thought in large language models: a cognitive perspective.' According to the paper, "to understand the power and limits of large language models (LLM), we must separate 'formal' from 'functional' linguistic competence." We are presented with what the researchers call two common fallacies related to language and thought:

  • "the first claims that an entity that is good at language is also good at thinking"
  • the second claims "if a language model can't fully capture the richness and sophistication of human thought, then it is not a good model of human language."

As you can see, you commit one or the other fallacy if you equate language with thought. If nothing else, the large language models used in AI such as ChatGPT should be seen as proving that. And "Although it is tempting to move the goalposts and focus on what these models are still unable to do… we argue that the remarkable advances in LLMs' ability to capture various linguistic phenomena should not be overlooked," write the authors. Related: Stephen Wolfram explains what a large language model does.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Nov 04, 2024 8:12 p.m.

Creative Commons License.
