Ethan Mollick explores ChatGPT's o3 model (those who say development in large language models has stalled simply aren't paying attention). It's a 'reasoning' model, which (to make a long story short) means it produces text and then reconsiders that text before putting it on the screen - it thinks before it speaks. "The quality of citations also marks a genuine advance here. These aren't the usual AI hallucinations or misquoted papers - they're legitimate, high-quality academic sources... It wove together difficult and contradictory concepts, found some novel connections I wouldn't expect, cited only high-quality sources, and was full of accurate quotations." But what happens when the quality of the response begins to outpace the reader's ability to comprehend it? Then we move from 'the end of research' to 'the beginning of teaching'. Hang on tight!