As Ben Dickson writes, ChatGPT and other LLMs are limited to their training data. That's why they make factual errors; they simply don't have the facts in the first place! One solution is 'embeddings' (and we'll see a lot more about this in the future). The idea is that you supplement ChatGPT with your own resource library: documents are converted into vectors, and when a request comes in, the system retrieves the document (or documents) closest to the query and uses them to form a response. I haven't tried it yet, but this article provides complete instructions, meaning that a trial is in my near future.
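The core pattern is simple enough to sketch. Here's a minimal, self-contained illustration: embed each document in the library, embed the incoming query, pick the closest match, and prepend it to the prompt. Real systems use a learned embedding model from an API or library; the toy bag-of-words vectors, the sample `library`, and the `retrieve` function below are stand-ins invented for this sketch so it runs on its own.

```python
# Sketch of embedding-based retrieval: embed documents and query,
# find the nearest document, and use it as context for the prompt.
# The bag-of-words "embedding" here is a toy; a real system would
# call an embedding model instead.
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: word-frequency vector over lowercase tokens.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse frequency vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical resource library (two sample passages).
library = [
    "Embeddings map text to vectors so similar passages sit close together.",
    "Photosynthesis converts sunlight into chemical energy in plants.",
]

def retrieve(query):
    # Return the library document most similar to the query.
    q = embed(query)
    return max(library, key=lambda doc: cosine(q, embed(doc)))

question = "How do embeddings represent text?"
context = retrieve(question)
# The retrieved passage is prepended so the model answers from it
# rather than from its training data alone.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

The key point is that the model never needs to have been trained on your documents; the retrieval step supplies them at question time.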