This item, dredged from the ether by elearningpost, is a fascinating look at some of the complexities of metatagging. Everybody agrees that metatags would be most useful, helping narrow searches to clearly defined categories of content. But nobody wants to tag 10,000 resources by hand, which leads to the suggestion that automatic metatagging might work. The problem with automatic metatagging is that it produces errors - to see what I mean, try the OLDaily categories system, which automatically classifies learning resources (still, it's not bad code for an amateur, eh?). OK, so humans it is, then, right? Well, wrong, because humans are almost as unreliable as machines: people categorize things differently. Systems like CanCore help, because they define what the different values mean - but what happens when people start tagging without reading the manual? I suspect that what we'll see in the long run is the rise of third-party metatagging for profit: authors and vendors will provide basic tags, but specialized companies will supply indices of learning objects to metadata repositories (for a fee, most likely). Hey, you heard it here first.
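To make the failure mode concrete, here's a minimal sketch of the kind of naive keyword-matching auto-classifier I have in mind. The categories and keywords are invented for illustration - this is not the actual OLDaily code, and the labels are not the CanCore vocabulary.

```python
# A minimal sketch of naive keyword-based auto-classification.
# Categories and keywords are hypothetical, for illustration only.

CATEGORY_KEYWORDS = {
    "Learning Objects": ["learning object", "reusable", "repository"],
    "Metadata": ["metadata", "metatag", "cancore"],
    "Online Learning": ["online course", "e-learning", "distance"],
}

def auto_tag(text: str) -> list[str]:
    """Return every category whose keywords appear in the text."""
    text = text.lower()
    return [
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

# The typical error: an article about writing learning *objectives*
# gets tagged as a "Learning Objects" resource, because the keyword
# "learning object" happens to be a substring of "learning objectives".
print(auto_tag("Writing clear learning objectives for your course"))
# ['Learning Objects']
```

Human taggers avoid that particular mistake, but they introduce their own: two people reading the same resource will often pick different categories, which is why controlled vocabularies like CanCore matter in the first place.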