
Stan wrote earlier this morning about what Berners-Lee told the BBC in an interview concerning the veracity of information on the Web:
Talking to BBC News Sir Tim Berners-Lee said he was increasingly worried about the way the web has been used to spread disinformation…Sir Tim told BBC News that there needed to be new systems that would give websites a label for trustworthiness once they had been proved reliable sources.
Stan seemed to think that we as humans are capable of judging for ourselves whether or not information should be trusted simply based on the brands associated with the information we're looking at. There are some serious problems with that assumption, but perhaps even more problematic was the dismissal provided by Andy Beal over at Marketing Pilgrim today (emphasis added):
Do I have to keep repeating myself on this stuff? Why does the web need labeling? And, who’s to say which site is authoritative and which is not? Why can’t the web simply exist, grow, and morph into what masses decide? What happened to the “wisdom of crowds” deciding what’s credible?
Berners-Lee is absolutely right in his premise - there is a glut of disinformation continually growing on the Web, and I think the root cause lies in certain aspects of the social nature of many of the Web-based network structures we've been building over the last year or so.
Wisdom of crowds is a highly overused phrase, though, and this is one of the many cases where it simply doesn't apply to the kinds of situations Berners-Lee is talking about. In the article I wrote on this topic last Wednesday, one of the commenters suggested I pick up a copy of The Wisdom of Crowds by James Surowiecki. I've actually got that book on my shelf, so when I saw the Berners-Lee topic come up today, I dragged it down to pull some direct quotes from the pertinent parts.
When Wisdom = Stupidity
In case you aren't familiar with the Wisdom of Crowds principle, I outlined the whole concept in some detail in a piece I wrote for Mashable last October, where I looked at the philosophy and architecture of Digg and analyzed whether it was utilizing the Wisdom of Crowds or simply a social voting mechanism loosely based on the concept:
The anecdote I've heard many times over as the genesis of the wisdom of crowds concept is the story of scientist and statistician Francis Galton from the late 1800s, who was surprised that the crowd at a county fair accurately guessed the butchered weight of an ox.
What made it interesting was not that any one individual came close to guessing the actual weight, but that the crowd did. When their individual guesses were calculated to the median, the resulting number was much closer to the ox's true butchered weight than the estimates of most individual crowd members, and perhaps most surprisingly also closer than any of the estimates made by cattle experts.
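The effect Galton observed is easy to reproduce in a toy simulation. Here's a minimal Python sketch: the crowd size and noise model are invented for illustration (the 1,198 lb figure is from Galton's published account), and it assumes what the wisdom of crowds requires - independent guesses whose errors are roughly unbiased:

```python
# Toy illustration of the Galton ox-weighing anecdote.
# Assumption: each guesser is independently off by unbiased,
# normally distributed error. Crowd size and error spread are invented.
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # lbs - the figure reported in Galton's account
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(800)]

# The "crowd's" answer: the median guess (as Galton computed it).
crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

# For comparison: how far off the typical individual guesser is.
individual_errors = [abs(g - TRUE_WEIGHT) for g in guesses]
typical_individual_error = statistics.median(individual_errors)

print(f"crowd error:            {crowd_error:.1f} lbs")
print(f"typical individual err: {typical_individual_error:.1f} lbs")
```

The crowd's error comes out far smaller than the typical individual's, because independent errors cancel when aggregated. The same simulation also shows where the principle breaks: make the errors correlated (everyone imitating a loud neighbor) and the aggregate advantage evaporates.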
I surmised that for the specific case of Digg, you couldn't say this was a working example of crowd wisdom: instead of looking for objectively verifiable answers, the social news promotion system at Digg is based on a hypothetical question with only subjective answers (which items are news, and what spin should dominate?).
This is one example of the type of situation where Surowiecki says the Wisdom of Crowds will fail. His list of problematic structures includes crowds that are too homogeneous, too centralized, too divided, too imitative, or too emotional.

As I re-read the section having to do with emotions and crowd wisdom, I was struck by the parallels to my earlier piece about politics in the social web, and how almost no real information was being conveyed in the bulk of the debates I saw taking place. When you look at the larger framework of debate in America, though, you start to see the root causes of this, and how they're a major contributing factor to the breakdown in real communication and the proliferation of disinformation and misinformation.
On the one hand, you have a politician whose buzzwords are "hope" and "change," both rooted either directly in emotion or in an emotive response to the previous administration. The other side has run a very emotional campaign thus far, rooted in images of previous disaster or fear of how the other side might run the country into the ground.
With these as starting points for the debate, everything that flows forth is defensive, and very quickly devolves into the realm of "well, my candidate could beat up your candidate," never really touching the issues each candidate represents and brings to the table.
Social media, though, built partly on the precepts of the Wisdom of Crowds, should be able to counteract that. We've certainly seen that in the past. As I mentioned the other day, 2004's hallmark online debates were all factually based, and truth was derived from bipartisan cooperation.
Why Don't Existing SocNets Do the Job?
From what I can see, whether it's from the overbearing need for everything to be "open" or because we're at the cutting edge of the social web and no one's really thought this out yet, none of the popular tools has the implicit catches in place to compensate for these issues.
Take, for instance, discussion-facilitating networks like FriendFeed and Twitter. They are the very epitome of Surowiecki's failure points of homogeneity and imitativeness. Follow and friend groupings take place primarily among like-minded people, and the peer pressure to think alike is very high. The few who do speak out with contrary opinions are easily ganged up on, and once the pile-on starts, those holding the majority position feel minimal need to back up their words with logic or facts.
Systems like Digg and YouTube have worked to mitigate these effects by letting users vote comments and discussion points up or down, but due to the populist nature of the system, only the most controversial topics tend to rise to the top. And controversy commonly walks hand in hand with exactly the factors that disqualify a crowd from being wise: emotional or divisive topics.
Within the Digg and YouTube communities exist sub-communities that make clear, honest debate even harder. There are the long-fabled "bury brigades" on Digg that group along ideological lines and will bury anything contradicting their worldview before it has a chance to be seen by a wider audience. On YouTube, this behavior is best exemplified by the "Twoofer" movement, which will flock to any video with a keyword relating to 9/11 and effectively dominate the discussion with their talking points and conspiracy theories. Opposing viewpoints are usually shouted down, buried, or marked as spam.
Again, What Can Be Done to Solve It?
I posed the question at the end of my article last Wednesday, and got a number of interesting responses. I agree with Berners-Lee that something must be done, but I'm not particularly convinced that the badge system is the way to go. Berners-Lee's wheelhouse these days is the Semantic Web, and he's very big on the idea of using semantics to solve all problems.
The proposed badge system seems very reminiscent of Tipper Gore's music album ratings system - not effective for what it proposes, and very subject to interpretation. Unless applied ultra-carefully, it could spark as much controversy as the content itself; applied too carefully, the process could become so bureaucratically complicated that it's no more effective than the content ratings system Microsoft tried to make part of Internet Explorer years ago (remember that? Probably not, because no one paid any attention to it or submitted their site for review).
For it to actually work, thought must be put into the system before it's engineered. I can't imagine the sort of feat it's going to take for Kevin and Jay to put the toothpaste back in the tube with Digg and realign it into something that isn't simply a more effective tool for spin, PR, and propaganda propagation.