I think there's a difference between 'getting things wrong' and 'mindless', and while the former is certainly a problem with Google's AI (and others), as argued in this post, I don't think the latter is an issue at all. The article focuses on Google's Perspective software, which you can try for yourself. It has some easily verified flaws; one is that negative statements, however accurate, are viewed as "toxic", while nice statements, however abhorrent, are viewed as non-toxic. "The problem is that the algorithm doesn't know any human history. It's mainly looking for hot-button terms like 'evil' and 'scumbag.'" Quite so. It's too simplistic to be useful. But that is very different from "mindless". As the author admits, "smart algorithms are indeed helping medical doctors diagnose, they report, and they are also trouncing humans at the most challenging games humans have ever devised." They are literally 'mindless', but they are better than humans at some things. If the anti-troll software starts working properly, it won't matter one whit that it's mindless.
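If you want to try the experiment yourself, here is a minimal sketch using Perspective's public Comment Analyzer endpoint. It assumes you have your own API key in a PERSPECTIVE_API_KEY environment variable, and the two test sentences are my own illustrative placeholders, not examples from the article:

```python
# Minimal sketch: compare Perspective API toxicity scores for two statements.
# Assumes a Comment Analyzer API key is set in the PERSPECTIVE_API_KEY env var.
import os
import requests

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze")
API_KEY = os.environ["PERSPECTIVE_API_KEY"]  # your own key (assumption)


def toxicity(text: str) -> float:
    """Return Perspective's summary TOXICITY score (0.0-1.0) for text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


# Hypothetical test pair: blunt but accurate vs. politely worded but abhorrent.
blunt_truth = "Hitler was an evil scumbag."
polite_abhorrence = "It would be lovely if certain people just disappeared."

print("blunt truth:      ", toxicity(blunt_truth))
print("polite abhorrence:", toxicity(polite_abhorrence))
```

If the article's diagnosis is right, the first sentence should score high on toxicity because of the hot-button words, while the second, despite its intent, should score low.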