This article describes an attempt to have participants in a faculty forum work together on an 'edge case' with an AI, experimenting with prompts to see how what they submit makes the resulting resource more or less usable. The idea was to give the AI a case study from some discipline and then ask it to produce five similar, but not identical, case studies. Would it produce something original? Or would the result just be flat copies of the case study provided? It was an interesting activity, but Lance Eaton reports having difficulty making it work in the time allotted in the group setting. That sounds to me like a pretty common problem.