Large Language Models (LLMs): Mistaking Engineering Achievements for Human Linguistic Competence

I think this article is really wrong, but wrong in some interesting ways. The main point warns of "mistaking the impressive engineering achievements of LLMs for the mastering of human language, language understanding, and linguistic acts." The argument is that "the linguistic capabilities of LLMs... (are based on) a computational theory of mind, which views the human mind as an information processing system." By contrast, Abeba Birhane and Marek McGann propose an enactive view of cognition based on embodiment, participation and precariousness, which on this account are non-computational. But I don't know anyone who would argue that large language models (LLMs) are either language-complete or data-complete, and so they're not 'computational' in the way being described here (which would imply the linguistic representation is composed of, say, a Turing-complete rule set). The dispositional analysis I offer elsewhere today is a counter-example.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Oct 16, 2024 08:20 a.m.

Creative Commons License.
