I have remarked in the past that large language models (LLMs) like BERT, GPT-3 and ChatGPT show us that 'knowledge and thought' are not the same as 'knowing a language'. As linguistics professors Emily Bender and Alexander Koller wrote in 2020, "Communicative intents are about something that is outside of language." But if not language, then what? They continue, "When we say Open the window! or When was Malala Yousafzai born?, the communicative intent is grounded in the real world the speaker and listener inhabit together." Or as Irving Wladawsky-Berger adds, it's something like a "communicative intent, model of the world, or model of the reader's state of mind." I don't agree. A 'model' suggests that knowledge and thought are representational (with all the overhead and baggage that entails). What LLMs lack isn't models; it's experiences. We see this when similar systems learn to play chess or go (real or virtual, it doesn't matter). Actually playing the game is what allows them to learn how to play it.
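To make the 'learning by playing' point concrete, here is a minimal sketch, not anything from the post or from any real chess or go system: tabular Q-learning on tic-tac-toe via self-play. Everything in it (the game, the function names, the parameters) is an illustrative assumption; the only point it carries is that whatever the table ends up 'knowing' about the game comes from the games it actually plays, not from a model it was handed.

```python
# Illustrative sketch only: a self-play learner for tic-tac-toe.
# The agent's value estimates are built entirely from played games.
import random
from collections import defaultdict

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    """Indices of empty squares."""
    return [i for i, cell in enumerate(board) if cell == ' ']

Q = defaultdict(float)   # Q[(state, move)] -> estimated value for the player to move

def choose(board, player, epsilon):
    """Epsilon-greedy move selection from the current Q estimates."""
    legal = moves(board)
    if random.random() < epsilon:
        return random.choice(legal)
    state = ''.join(board) + player
    return max(legal, key=lambda m: Q[(state, m)])

def play_episode(epsilon=0.2, alpha=0.3):
    """One game of self-play; both sides share and update the same Q table."""
    board = [' '] * 9
    player = 'X'
    history = []                                  # (state, move, player) per ply
    while True:
        move = choose(board, player, epsilon)
        history.append((''.join(board) + player, move, player))
        board[move] = player
        w = winner(board)
        if w or not moves(board):
            # Push every move's value toward the final outcome:
            # +1 for the winner's moves, -1 for the loser's, 0 for a draw.
            for state, m, p in history:
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(state, m)] += alpha * (reward - Q[(state, m)])
            return w
        player = 'O' if player == 'X' else 'X'

if __name__ == '__main__':
    random.seed(0)
    results = [play_episode() for _ in range(20000)]
    # Tally outcomes; whatever competence the Q table now encodes
    # came from nothing but the 20,000 games played above.
    print('X wins:', results.count('X'),
          'O wins:', results.count('O'),
          'draws:', results.count(None))
```

Crude as it is, the sketch has no built-in model of tic-tac-toe strategy; every value in the table is a residue of games played, which is the sense of 'experience' I have in mind.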