I say that there’s a natural asymptotic limit to the current approach to training generative AI, and that we are already near it. It may take decades for the next breakthrough to be established.
Since GPT is just trying to predict what words come next, it’s not really thinking about what it’s saying. Adding a genuine thought process to it would be the next big thing. When that happens, Stack Overflow and other troubleshooting forums like it will be about as useful as a phone book. Sure, you could still use them, but why would you when there’s something so much better available?
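To make the “just predicting the next word” point concrete, here is a minimal sketch of greedy autoregressive decoding, assuming the Hugging Face transformers library and the public "gpt2" checkpoint (the prompt and the 10-token cutoff are arbitrary choices for illustration). The loop scores only one next token at a time, appends it, and repeats; there is no separate planning step anywhere in it.

```python
# Minimal sketch of next-token prediction with greedy decoding.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Stack Overflow is"  # arbitrary example prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):  # generate 10 tokens, one at a time
        # The model only produces scores for the single next token...
        logits = model(input_ids).logits[:, -1, :]
        next_id = torch.argmax(logits, dim=-1, keepdim=True)  # pick the most likely token
        # ...which is appended, and the loop repeats. Nothing here plans
        # the whole answer before emitting it.
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```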
Adding that is not something that can be done with the current approach. A new approach is not guaranteed to be found, may not be found anytime soon, or could already have been found by someone who hasn’t made it public yet.