Tim Klapdor's main point is both the challenge and the issue: "intelligence requires understanding & meaning. Therefore, if you want to call something intelligent, then it must be able to exhibit understanding and meaning." OK, sounds great, until you push a bit on what exactly counts as 'understanding' and 'meaning'. What is 'understanding'? Knowledge of causal principles? Not robust enough. General laws and principles? Too inflexible. A model or world view? Sure, now we're getting closer. But that's what AIs do! When kids ask 'why', what sort of answer do you give them? Causes, principles, theories. Right? So what is there to 'understanding' that AIs don't do? The same sorts of questions arise around 'meaning'. Do we mean 'intentionality'? 'Intensionality'? Emotions or tone? Aw, we already know AI can respond to these. It's too easy to simply say "understanding & meaning." Image: University of Tennessee, Multiple Intelligences Theory (not a big leap from where we are in this post).