A neural conversational model
Adrian Colyer, The Morning Paper, Jun 29, 2016
One of the key questions in learning and technology, from my perspective, is whether a neural network needs domain knowledge in order to function effectively. This article summarizes a paper describing an effort to create an effective conversational tool that operates without domain knowledge: "a bot that is trained on conversational data, and only conversational data: no programmed understanding of the domain at all, just lots and lots of sample conversations." As the examples show, "The surprising thing is just how well it works." It is still far from fully reliable, though, and the author concludes that "any real service is going to need some more complex logic wrapped around it."
You might be asking, why is this question so important? The answer is complex, but in a nutshell: if we require domain knowledge in order to learn, then we require memorization; by contrast, if learning can be accomplished without domain knowledge, then it can be accomplished by practice alone, without memorization. You might say, "so who cares? Just memorize some stuff." You could, but that makes it much harder for the learner to correct memorized material that turns out to be wrong, and leaves them less able to learn on their own or think critically. The learner's knowledge then rests more on a pre-constructed model or representation of the world than on experience or evidence. So if you can get to the same place without rote memorization, that would be preferable.
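The model described in the post is a sequence-to-sequence recurrent network: an encoder reads the previous utterance, a decoder generates the reply token by token, and both are trained end to end on nothing but (utterance, reply) pairs. As a rough illustration of what "only conversational data" means in practice, here is a minimal PyTorch sketch; the architecture sizes, vocabulary, and toy data are my own assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the sequence-to-sequence idea: encoder LSTM reads the
# input utterance, decoder LSTM generates the reply. Sizes and data are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

VOCAB = 1000      # assumed toy vocabulary size
EMB, HID = 64, 128

class Seq2SeqChat(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt):
        # Encode the input utterance into a final hidden state ("thought vector").
        _, state = self.encoder(self.embed(src))
        # Decode the reply conditioned on that state (teacher forcing on tgt).
        dec_out, _ = self.decoder(self.embed(tgt), state)
        return self.out(dec_out)   # logits over next tokens

model = Seq2SeqChat()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy (question, reply) pair of token ids; real training uses millions of turns.
src = torch.randint(0, VOCAB, (1, 6))
tgt = torch.randint(0, VOCAB, (1, 5))

logits = model(src, tgt[:, :-1])   # predict each next token of the reply
loss = loss_fn(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

There is no hand-written dialogue logic anywhere in a model like this; everything it "knows" comes from the statistics of the conversations it was trained on, which is exactly why it works surprisingly well yet still needs extra logic wrapped around it for any real service.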