Downes.ca ~ Stephen's Web ~ How Does a Large Language Model Really Work?

Stephen Downes

Knowledge, Learning, Community

This is a pretty good post and will help readers hone their intuitions about what a large language model (LLM) like ChatGPT actually does. But I want to focus on one tiny little statement: "ChatGPT doesn't really 'know' anything. It has no self-awareness or consciousness." Whoa, whoa, whoa, wait a minute now. Since when does 'knowledge' consist of 'self-awareness' or even 'consciousness'? When we look at current and historical accounts of knowledge (such as the widely discussed definition of knowledge as 'justified true belief') the critical elements seem to be (a) an assertion that some proposition is true or false (i.e., a belief), (b) that it is true (e.g., according to Tarski's semantic account of truth), and (c) that there are grounds for our assertion (based on evidence, confirmation, or any number of other proposals). If you want to say consciousness or self-awareness are necessary, that's fine, but you also need to tell us what consciousness or self-awareness bring to the table. Via Alan Levine.
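For readers who want a concrete anchor for "what an LLM actually does": at its core, an LLM repeatedly predicts the next token from a probability distribution conditioned on the text so far. The sketch below illustrates that generation loop with a hand-made bigram table — an assumption purely for illustration, not how ChatGPT (which conditions on the whole context with a large neural network) is implemented.

```python
# Toy illustration: text generation as repeated next-token prediction.
# The bigram table below is invented for this example; a real LLM learns
# its (much richer) conditional distributions from training data.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_tokens: int = 5) -> list[str]:
    """Greedily pick the most probable next token until no continuation exists."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = bigram_probs.get(tokens[-1])
        if not dist:
            break  # no known continuation: stop, like an end-of-text token
        # Greedy decoding: take the argmax. Real systems usually *sample*
        # from the distribution instead, which is why outputs vary.
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

The point of the sketch: nothing in this loop asserts, justifies, or believes anything — which is exactly why the question of whether prediction alone can amount to 'knowledge' is worth pressing.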



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Nov 21, 2024 06:47 a.m.

Creative Commons License.
