I can confirm from my own experience the phenomenon where a large language model (LLM) will incorporate software packages that don't exist. It tends to happen when there is an established pattern of similarly named packages for similar functions: the model extrapolates a plausible name whether or not the package is real. You see it, for example, when you ask an AI to write a script accessing an API; if the right package doesn't exist, the AI will simply act as though it does. This, argues Thomas Claburn, opens an avenue for malware called "slopsquatting": a malicious actor publishes a malicious package under the hallucinated name and waits for someone to install and run it. "Even worse, when you Google one of these slop-squatted package names, you'll often get an AI-generated summary from Google itself confidently praising the package."
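To make the defense concrete, here is a minimal sketch of a pre-install sanity check: before installing a package an AI suggested, ask PyPI's JSON API whether the name actually exists and glance at its metadata. The package name "fastapi-jwt-helper" below is a hypothetical example of an LLM-suggested name, not a reference to any real project.

```python
# Minimal sketch: verify an AI-suggested package name against PyPI before
# installing it. A 404 means the name is hallucinated (or squattable);
# a 200 alone doesn't prove it's safe, since slopsquatters register names
# precisely so this check passes -- inspect the metadata too.
import json
import sys
import urllib.error
import urllib.request


def pypi_metadata(package: str) -> dict | None:
    """Return PyPI JSON metadata for `package`, or None if it doesn't exist."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no such package on PyPI
            return None
        raise


if __name__ == "__main__":
    # "fastapi-jwt-helper" is a hypothetical LLM-suggested name.
    name = sys.argv[1] if len(sys.argv) > 1 else "fastapi-jwt-helper"
    meta = pypi_metadata(name)
    if meta is None:
        print(f"'{name}' is not on PyPI; the name may be hallucinated, "
              f"and an attacker could register it. Do not install blindly.")
    else:
        info = meta["info"]
        # Existence alone proves little: check the author, summary, project
        # links, and release history before trusting the package.
        print(f"'{name}' exists: {info.get('summary')} "
              f"(author: {info.get('author')})")
```

The same idea works for any registry with a metadata endpoint (npm, crates.io, and so on); the point is that a package's mere existence is the weakest possible signal, so the check is a floor, not a seal of approval.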