A lot of recent discussion of responsible or ethical artificial intelligence centres around whether it is 'explainable'. For various reasons(*), I think this is the wrong word, and much prefer the concept of 'interpretability'. As Andrew Hoblitzell writes, "This includes understanding the relationships between the input, the model, and the output. Interpretability increases confidence in the model, reduces bias, and ensures that the model is compliant and ethical." The article outlines six "methods of providing interpretability" and follows up with two case studies applying these methods. (* To 'explain' is to answer a 'why' question; to interpret includes this but is broader, referring to finding the 'meaning' of various aspects of the AI system.)