Appropriate/ing Knowledge and Belief Tools?

Stephen Downes

Knowledge, Learning, Community

This post wanders through some compelling questions raised by the use of AI software like GPT-3 to write stories, relate facts, give instructions, program computers, and more. One question is: how reliable are they? Tony Hirst comments, "we might see Google as attempting to perform as a knowledge engine, returning facts that are true, and OpenAI as a belief engine, freewheeling a response based on what it's just heard and what it's heard others say before." But there's more: where does GPT-3 get its information? There's no attribution and no direct way to know whether its output has been plagiarized. As Hirst asks, "is it possible to licence code in a way that forbids its inclusion in machine learning/AI training sets"? And should we prohibit students from using such tools in coding exercises?



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Dec 24, 2024 2:07 p.m.

Creative Commons License.
