The Talking of the Bot with Itself: Language Models for Inner Speech

Stephen Downes

Knowledge, Learning, Community

In a sense, neural networks have always been able to 'talk to themselves'. There is backpropagation, for example, where feedback flows back through the layers of a neural network, correcting weights as it goes. Or there are recurrent neural networks, where neural output is saved to become neural input, creating in effect cognitive loops. But 'talking to ourselves' or the idea of an 'inner voice' has always been thought to be something more abstract, definable only in terms of lexicons and syntax, like a formal system. This article (34 page PDF) grapples with the idea, considering it from a conceptual, theoretical and then practical perspective, running from Smolensky's argument against Fodor and Pylyshyn through to things like the 'Inner Monologue Agent' from Google Robotics and Colas's language-enhanced 'autotelic agent architecture'. "Instead of viewing LLMs like ChatGPTs as general intelligences themselves, we should perhaps view them as crucial components of general intelligences, with the LLMs playing roles attributed to inner speech in traditional accounts in philosophy and psychology."
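To make the loop idea concrete, here is a minimal sketch (my own illustration, not code from the article or from the Google Robotics system) of an agent that feeds a language model's output back into its own context, so the text it produces becomes part of what it reads next. The generate() function is a hypothetical placeholder for whatever model call such an architecture would actually use.

def generate(context):
    # Hypothetical stand-in for a language model call; returns the model's next 'thought'.
    return "a thought prompted by: " + context.splitlines()[-1]

def inner_monologue(goal, steps=3):
    # Each utterance is appended to the context the model reads next, so the
    # system 'hears' its own speech, much like the recurrent loops described above.
    context = "Goal: " + goal
    for _ in range(steps):
        thought = generate(context)
        context += "\nInner speech: " + thought
    return context

print(inner_monologue("stack the blocks"))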



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Jan 08, 2025 2:47 p.m.

Creative Commons License.
