Downes.ca ~ Stephen's Web ~ Civilian AI Is Already Being Misused by the Bad Guys

Stephen Downes

Knowledge, Learning, Community

The tenet of this article is that 'responsible AI' initiatives have overlooked an 'ethical blind spot' related to peace and security. The starting point here is the revelation that researchers "had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons." Obviously, this is not what the world needs, but it was very straightforward to develop: "The researchers simply adapted a machine-learning model normally used to check for toxicity in new medical drugs." Well, does that mean oversight of AI should be placed in the hands of the military? Goodness no. But the authors do recommend AI developers "get on board" with responsible research and innovation (RRI), "a relatively new concept that has been defined as a transparent, interactive process by which societal actors and innovators become mutually responsible to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society)."



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Nov 03, 2024 4:46 p.m.

Creative Commons License.
