Could a Large Language Model be Conscious?
PhilPapers, Feb 02, 2023
David Chalmers asks what would count as evidence that large language models are or could be conscious. That doesn't mean they're sentient or aware of their own existence, just that there is some sense in which we can say what it's like to be an AI (echoing Nagel's famous question, "what is it like to be a bat?"). There is no operational definition of consciousness; that is, there are no benchmarks for measuring it in a machine. We're not going to believe an AI is conscious just because it says so. At the same time, it's not obvious that it lacks anything it needs to be conscious. Do we say it has to have something like a 'world model' over and above mere statistical feature recognition? Maybe, but future AI systems are likely to have that capacity. Ultimately, says Chalmers, the problem is twofold: we don't understand consciousness, and we don't understand what's going on inside an AI.