Fuzz, I fully realize it's not AGI; I have never claimed it to be. What I was doing earlier is called prompt engineering, and I'm fully aware of what it does. We are nowhere near anything close to AGI. I call it reasoning when in reality it fakes it, it just fakes it extremely well. In the end the output is still accurate, regardless of whether it was "programmed" to arrive at that reasoning. This is also part of why GPT4 is quite a bit slower than its predecessors.
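For anyone following along, by "prompt engineering" I just mean structuring the input so the model pattern-matches its way to the answer you want. A minimal sketch of the idea (all names and wording here are my own illustration, not any real API), building a few-shot prompt with a worked example so the model imitates the reasoning format:

```python
def build_prompt(examples, question):
    """Assemble a few-shot prompt: worked examples first, then the
    real question, so the model copies the reasoning pattern."""
    parts = ["Answer step by step, then end with a line 'Answer: <x>'."]
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    # Leave the final answer blank for the model to complete.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# One worked example is often enough to lock in the output format.
examples = [("What is 2 + 3?", "2 plus 3 is 5. Answer: 5")]
prompt = build_prompt(examples, "What is 4 + 9?")
print(prompt)
```

Nothing magic, the model isn't "understanding" the instruction so much as continuing the pattern you set up, which is exactly my point about faked-but-accurate reasoning.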
The link you gave was written prior to GPT4's release (and the author uses GPT2 as the basis for his thesis). The model has advanced significantly since even GPT3.5, which is what most of us were introduced to.
I feel you are arguing for the sake of arguing and looking to downplay what it is capable of.
That GPT4 can be this good with a dataset limited to 2021 and earlier, with no browsing capability or ability to correct itself beyond what it already knows (browsing is in alpha, which I'm on the waitlist for), is a masterpiece. I'm using it daily and have gotten fairly familiar with its flaws (ever dealt with the dreaded hallucinations? Ugh!).
And it's getting better.
Last edited by Firebot; 04-03-2023 at 02:46 PM.