I think it's a good thing that people are becoming more aware of the (current) limitations of artificial intelligence. When we simply train AI based on the thoughts and attitudes of, say, Google employees, we get a skewed perception of reality. But it's easy to criticize; the deeper question here is how we validate AI to ensure that it is not skewed. This is especially difficult given that the people who actually hold those views will accuse the validation process of being politically correct and of social engineering. I think it wouldn't be too extreme to require that AIs be constrained by a scientifically grounded knowledge base. That would be a technical challenge, and given today's climate, also a political challenge.