This study (9 page PDF) "investigated the validity of an AI-generated score compared to human-driven methods of evaluating student effort." The context is "the AI-supported asynchronous discussion platform Packback, which approaches online discussions using protocols based on the Socratic method of questioning." The evaluation mechanism is a "curiosity score," which "is calculated using a proprietary algorithm that includes weighted measures for different quality factors: depth, credibility and presentation, as determined by word count, sentence structure, citations and formatting." It is later characterized as "automatically calculated for each student post based on post length, sentence structure, lack of repetition, and successful use of citation." The study also looked at the effect of various instructor interventions. Based on the description, this appears to be a very basic type of AI, with the inputs being purely mechanical (i.e., derived by counting) rather than interpretive. But the paper is part of a wider trend, and the quality of the AI will, as we know, improve. Via Juan Domingo Farnós.
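Since the algorithm itself is proprietary, we can only guess at its shape from the published description. Here is a minimal sketch of what a purely counting-based score like this could look like; every weight, threshold, and feature definition below is an assumption for illustration, not Packback's actual method:

```python
import re

def curiosity_score(post: str) -> float:
    """Hypothetical counting-based score; the real weights are proprietary."""
    words = post.split()
    sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]

    # Post length: word count, capped so longer is not endlessly better.
    length = min(len(words), 200) / 200

    # Sentence structure: average sentence length within a plausible band.
    avg_len = len(words) / len(sentences) if sentences else 0
    structure = 1.0 if 8 <= avg_len <= 25 else 0.5

    # Lack of repetition: ratio of unique words to total words.
    variety = len(set(w.lower() for w in words)) / len(words) if words else 0

    # Successful use of citation: any URL present counts as a citation.
    citation = 1.0 if re.search(r"https?://", post) else 0.0

    # Weighted sum of the mechanical features (weights are assumptions).
    return round(100 * (0.35 * length + 0.2 * structure
                        + 0.25 * variety + 0.2 * citation), 1)

print(curiosity_score("Why does X work? See https://example.com for details."))
```

The point of the sketch is that nothing in such a score interprets what the post actually says; every input is a count or a pattern match.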