The story: "Koko, a mental health nonprofit, found at-risk teens on platforms like Facebook and Tumblr, then tested an unproven intervention on them without obtaining informed consent. 'It's nuanced,' said the founder." I don't think it's particularly nuanced in this instance, and I think most people would agree the research was unethical. But it's not as though ethical protocols exist for developing AI-enabled products and services. Sure, there are guidelines for the ethical development of AI, but not really for when the AI should be applied in a health, educational or social setting. I'm not talking about whether the AI is ethically sourced, or creative, or bland, or whether it is accurate, but whether it is a good idea to use intervention X in application area Y - we don't have a research protocol for testing that, or at least, I haven't found one.