Miguel Guhlin responds intelligently to an article describing how an Iowa school district is using AI to select books to ban. "Can AI be an able assistant to conservative school boards and educators seeking to ban content they consider inappropriate?" he asks. "The answer is, 'Probably. Yes.'" In the end, it's nothing more than a categorization exercise: identify the themes you want banned, and the AI responds with a list. "It's child's play to use AI to come up with a list of content some find objectionable. That said, there are other ways to use AI for book recommendations." For example, it can be used to find good books on earthworms, or to help teachers find material on diversity, equity and inclusion. I've seen comments about the AI-assisted banning along the lines of "the people using it don't even read the books they're banning." Quite so. But they didn't read the books in the pre-AI days either. The problem isn't the AI, it's the banning. And not even the banning, per se; after all, we probably don't want children reading Mein Kampf or the Protocols of the Elders of Zion. It's the specific banning policy that is objectionable, and criticizing the AI for it misses the point entirely.