When it is generating something it wasn’t trained on, it could present an incorrect answer.
Not could, will. It’s basically guaranteed to start spitting out garbage once it’s extrapolating beyond the training data. Any semblance of correctness is just luck at that point.
This is true for basically all models, everywhere.
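A toy illustration of that failure mode, sketched in Python. Nothing here is LLM-specific; it just fits a generic function approximator (a polynomial, standing in for any learned model) on a narrow input range and then queries it outside that range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: sin(x) sampled only on [0, 2*pi]
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train)

# Fit a degree-9 polynomial as our stand-in "model"
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# Interpolation (inside the training range) is accurate...
print(model(np.pi / 2), np.sin(np.pi / 2))    # ~1.0 vs 1.0

# ...but extrapolation (outside the range) diverges wildly:
# sin(4*pi) = 0, while the polynomial blows up to enormous values.
print(model(4 * np.pi), np.sin(4 * np.pi))
```

Inside the training interval the fit is excellent; a small step outside it and the output is garbage, exactly as described. Any agreement with the true function out there would be luck.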
Not really, because there have been some reports of a world model being used alongside an LLM. While it’s early and premature, it does indicate that we’re going to see world models incorporated with language models, so in due time they could conceptualize the world.
“Well, if we just include other data”, weirdly, isn’t a rebuttal to “it can’t extrapolate beyond the data”.