The thesis of this article is that 'responsible AI' initiatives have overlooked an 'ethical blind spot' related to peace and security. The starting point is the revelation that researchers "had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons." Obviously, this is not what the world needs, but it was remarkably easy to do: "The researchers simply adapted a machine-learning model normally used to check for toxicity in new medical drugs." Well, does that mean oversight of AI should be placed in the hands of the military? Goodness no. But the authors do recommend AI developers "get on board" with responsible research and innovation (RRI), "a relatively new concept that has been defined as a transparent, interactive process by which societal actors and innovators become mutually responsible to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society)."