In the past I have talked about the possibility of automatically generated OER. These learning resources would be created by transformers, such as Google's BERT and OpenAI's GPT, which (as the article says) "use natural language processing to understand text and produce translations, summaries and interpretations." That's great, but the same technology also creates the potential for OER to be used to spread misinformation. This article suggests that AI-generated misinformation may be good enough to fool experts; it offers examples where fake reports misled security experts about potential intrusions and threats. Misleading OER ('MOER'?) could create havoc, especially if they pass undetected through peer review. Something to think about.