This is a complex post (and the comment that follows even more so) but it boils down to this: what if there are some biases that humans (and by inference, AIs) have to have in order to be able to learn and know things about the world? One such bias is the principle of causal invariance, the idea that causal relations don't change over time; "if a causal relation changes from the moment one induces it to the moment one applies it, that knowledge would be useless." It's the difference, we could say, between merely searching for phenomenal patterns in the world and searching for truth. And this, the argument runs, is a good sort of bias to have. There may be others. "Introductory psychology courses teach the many ways intuitive reasoning is riddled with biases. Work on invariance may show that insight can flow in the opposite direction." Image: Causality for Machine Learning.