I think it's prudent to "explore things like how to determine whether the 'welfare' of an AI model deserves moral consideration." Put it under the heading of risk management. I know, there are sceptics. Mike Cook, for example, says "a model can't 'oppose' a change in its 'values' because models don't have values. To suggest otherwise is us projecting onto the system." But how do we determine whether a human has values? How do we determine whether anything has consciousness?