Downes.ca ~ Stephen's Web ~ 'Horribly Unethical': Startup Experimented on Suicidal Teens on Social Media With Chatbot

Stephen Downes

Knowledge, Learning, Community

The story: "Koko, a mental health nonprofit, found at-risk teens on platforms like Facebook and Tumblr, then tested an unproven intervention on them without obtaining informed consent. 'It's nuanced,' said the founder." I don't think it's particularly nuanced in this instance, and I suspect most people would agree the research was unethical. But it's not as though there are established ethical protocols for developing AI-enabled products and services. Sure, there are guidelines for the ethical development of AI, but not really for deciding when AI should be applied in a health, educational or social setting. I'm not talking about whether the AI is ethically sourced, or creative, or bland, or whether it is accurate, but whether it is a good idea to use intervention X in application area Y - we don't have a research protocol for testing that, or at least, I haven't found one.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Dec 25, 2024 08:44 a.m.

Creative Commons License.
