Quote:
Originally Posted by GGG
I think people here still suffer from the hubris of assuming human intelligence is more than learning which response is appropriate given the summation of all previous responses. I think applying the definitions of AGI to humans would lead to the conclusion that humans are not an AGI.
Also, that particular survey suggesting brute-force scaling won't get to AGI seems reasonable; I also don't think there was ever an argument made that brute-force scaling is the solution.
Musk obviously believes it, because he keeps claiming we are close. And because he is pouring all his effort and money into LLM-like systems, he must believe those are going to get him there. Unless he's lying and grifting again, which is also a strong possibility. But I think he tends to believe what he says, at least in general.
To your first point, I think we already see the progression. LLMs are bad at math, so the solution was to hand off to other systems when calculations are requested. I suspect we'll get loads of different models tweaked to be really good at different fields and tasks. They'll all get mashed together, and a decision engine will choose which one to use in which case (rough sketch below). But they will still be an assembly of processes. Until that system can take on a new, novel field and learn the best way to solve it, it won't be AGI (or equivalent to human thinking). It'll just be a collection of systems that work in specific domains.
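To make that concrete, here's a toy Python sketch of what I mean by a decision engine; the function names and routing rule are made up purely for illustration:

# Toy sketch of a "decision engine" that routes a request to whichever
# specialized system looks best suited. Names and heuristics are invented.
import re

def calculator(query: str) -> str:
    # Narrow tool: only handles plain arithmetic expressions.
    expr = re.sub(r"[^0-9+\-*/(). ]", "", query)
    return str(eval(expr))  # fine for a toy; a real system would parse safely

def general_model(query: str) -> str:
    # Stand-in for an LLM-style general responder.
    return f"[general model would answer: {query!r}]"

SPECIALISTS = [
    # (predicate that decides the domain, handler good at that domain)
    (lambda q: re.fullmatch(r"[0-9+\-*/(). ]+", q.strip()) is not None, calculator),
]

def decision_engine(query: str) -> str:
    # Pick the first specialist whose domain check matches; otherwise fall
    # back to the general model. The point: it's an assembly of fixed parts,
    # not something that learns a new domain on its own.
    for matches, handler in SPECIALISTS:
        if matches(query):
            return handler(query)
    return general_model(query)

print(decision_engine("12 * (3 + 4)"))          # routed to the calculator -> 84
print(decision_engine("Why is the sky blue?"))  # routed to the general model

However good each handler gets, adding a genuinely new domain still means a human wiring in another specialist, which is exactly the limitation I'm pointing at.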
None of this means they are useless, but I think just being able to do most of what humans can, and in many cases far better, is no more AGI than a TI-82 is a brain.
I think that if we do get AGI, it'll have to be emergent, something that learns in its own way, on its own. And no one has figured out how to bootstrap an emergent AI, or we'd have it. So it could be tomorrow, a decade from now, or never. But if it does come, it's going to be incredibly rapid and world changing, in ways an LLM never could be.