Lots of movement on the algorithmic accountability front (this is the idea that companies need to be able to explain, and be accountable for, conclusions their software draws about people). According to this article, Kate Crawford, principal researcher at Microsoft Research, and Meredith Whittaker, founder of Open Research at Google, "announced today the AI Now Institute, a research organization to explore how AI is affecting society at large. AI Now will be cross-disciplinary, bridging the gap between data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence." We've been hearing the claim, in this article and elsewhere, for example from Cathy O'Neil in the New York Times, that there's no academic research being done in this area. But as pointed out in this Chronicle article, "the piece ignored academics and organizations that study the issues." Said Siva Vaidhyanathan, on Twitter, "There are CS departments and engineering schools that take this very seriously. MIT, Harvard, UVA, CMU, Princeton, GaTech, VaTech, Cornell Tech, UC-Irvine, and others all have faculty and programs devoted to critical and ethical examination of data and algorithms."