Toronto Deep Learning Demos
Yichuan Tang, Tianwei Liu
University of Toronto
Jan 16, 2015
'Deep Learning' is a form of machine learning in which multi-layer neural networks learn their own feature representations from data, rather than relying on hand-engineered features. This set of demonstrations from Toronto applies descriptions and captions to images. Most of the results are quite good, though you can still fool it with specific examples, like the Taj Mahal. Deep learning is important for a couple of reasons: it demonstrates that neural networks can learn abstractions without a priori knowledge, and it yields applications useful for e-learning analytics, such as resource classification for intelligent recommendation systems. The Toronto site has other resources that are equally applicable to e-learning. I've talked about Boltzmann machines in the past; Multimodal Deep Learning With Boltzmann Machines illustrates aspects of this. Also: Quantitative Structure-Activity/Property Relationship (QSAR/QSPR). And Multimodal Neural Language Models.
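To give a feel for what "applying a learned abstraction to an image" looks like in practice, here is a minimal sketch (not the Toronto demo's own code) that labels an image with a pretrained deep network. The use of torchvision, the ResNet-18 weights, and the filename "example.jpg" are all illustrative assumptions, not details from the post.

```python
# Illustrative sketch only: label an image with a pretrained deep network.
# Assumes torch/torchvision are installed and "example.jpg" exists locally.
import torch
from PIL import Image
from torchvision import models, transforms

# A network whose features were learned from data, not hand-engineered.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Standard ImageNet preprocessing for the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
probs = torch.softmax(logits[0], dim=0)

# Print the top-5 predicted labels with their probabilities.
top5 = torch.topk(probs, 5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values, top5.indices):
    print(f"{labels[idx]}: {p:.3f}")
```

The demo goes further by generating full captions rather than single labels, but the underlying idea is the same: the network's learned internal representation, not a hand-coded rule set, does the recognizing.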