This article focuses on the dangers of people manipulating an AI system to produce biased or worse results, compounded by the possibility that the interference may never be detected. "The worst human impulses will find plenty of uses for generative AI... Some of Bing-or-is-it-Sydney's eeriest responses came after users manipulated the model into territory it had tried to avoid - often by ordering it to pretend that the rules guiding its behavior didn't exist." It also references this article describing how David Rozado "fine-tuned a Davinci large language model from the GPT 3 family of models (and) half-jokingly named the resulting fine-tuned model manifesting right-of-center viewpoints RightWingGPT." My main concern, though, is his assertion that ChatGPT is left-leaning. I would suggest that since ChatGPT is a more accurate gauge of the mainstream (that is, in fact, exactly how it is designed), what we actually see is that most of these 'political spectrum' surveys, with their leading and often loaded questions, are tilted to the right (the Nolan survey being a notable exception).