I was talking with Chouki the other day, and what I said was, "Wouldn't it be neat if we could take every page in Wikipedia, do an associative analysis of the contents of each of the million-plus articles, and use clustering algorithms to create a genuine dynamic folksonomy, rather than the highly artificial (and hence unstable) structures generated by tagging?" His response was, well, you could do that, but you wouldn't need to analyze a million articles; a subset would do. And then he outlined some of the algorithms that would support such a system. Well, from where I sit, it appears that the people at Google are thinking much along the same lines, as they are providing funding and equipment to support the world's largest encyclopedia.
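Chouki's actual algorithms aren't given here, but to make the idea concrete: one minimal sketch (my own illustration, not his method) would represent each article as a bag-of-words vector, group articles by cosine similarity, and let each cluster's most frequent terms serve as its emergent "tags" -- categories derived from the content itself rather than assigned by hand. The articles, threshold, and labeling scheme below are all hypothetical.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term frequencies, lowercased, ignoring short tokens.
    return Counter(w for w in text.lower().split() if len(w) > 3)

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cluster(articles, threshold=0.2):
    # Greedy single-pass clustering: each article joins the first
    # cluster whose centroid it resembles, else starts a new one.
    clusters = []  # list of (centroid Counter, member titles)
    for title, text in articles.items():
        vec = vectorize(text)
        for centroid, members in clusters:
            if cosine(vec, centroid) >= threshold:
                centroid.update(vec)
                members.append(title)
                break
        else:
            clusters.append((vec.copy(), [title]))
    # Label each cluster with its most frequent terms -- these
    # labels are the emergent "tags" of the folksonomy.
    return [(members, [t for t, _ in centroid.most_common(3)])
            for centroid, members in clusters]

# Toy corpus (hypothetical): two animal articles and one car article.
articles = {
    "Jaguar (animal)": "the jaguar cat is a large feline predator feline animal",
    "Leopard": "the leopard cat is a large feline predator feline animal",
    "Jaguar (car)": "jaguar cars company builds luxury automobile engines",
}
for members, tags in cluster(articles):
    print(members, tags)
```

The two animal articles fall into one cluster labeled by terms like "feline", while the car article starts its own; no one had to tag anything. A real system would need TF-IDF weighting and a proper clustering algorithm (k-means, hierarchical agglomerative), but the shape of the idea is the same.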