"Probabilistic generalizations based on internet content are not steps toward algorithmic moral personhood," write David McNeill and Emily Tucker. To which I respond: why not? Now let's be clear here: I wouldn't think that the generalizations themselves are instances of consciousness. But just as there is something it is 'like' to be a bat - or a worm, or an octopus, or a human - surely then there could be something it is 'like' to be a machine. Now we as humans can't really grasp that, though we can't really grasp any of the other examples either. Sure, "an algorithmic or computational process is a kind of abstract machine we use in our thinking, it is not a thinking machine." Sure. But the machine could be a thinking machine - it is as real as you or I.