Thread: The A.I. Thread
Old 05-31-2024, 09:11 AM   #488
Russic

Quote:
Originally Posted by psyang
I think of the early machine learning videos where a computer was able to play the arcade game Defender indefinitely. It had an environment where it could play, lose, and learn rapidly, eventually figuring out a strategy that would allow it to survive. It is an environment of discovery, not just being fed/trained on known information.

Could there be an environment like that for AI to "discover" something like the theory of relativity? Possibly. It would require being able to run precise experiments and analyze the results, along with applying already-known data. It needs a way to rank theories, toss bad ones, and refine good ones. That's basically how genetic algorithms work now.
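That rank/toss/refine loop really is the skeleton of a genetic algorithm. A minimal sketch (the toy fitness function, population size, and mutation settings are all illustrative assumptions, not any particular system's implementation):

```python
import random

def fitness(x):
    # Toy "theory quality" score: peaks at x = 3 (hypothetical example).
    return -(x - 3) ** 2

def evolve(pop_size=20, generations=50):
    # Start with random candidate "theories".
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the candidates, toss the worse half.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refine the good ones: small random mutations refill the population.
        children = [x + random.gauss(0, 0.5) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because the survivors are kept each generation, the best candidate never gets worse, and the mutations steadily hill-climb toward the fitness peak.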

As Russic said, as AI becomes able to interact more and more with the real world, I think this becomes increasingly possible.

The thing I always think about when comparing humans with AI is every human starts at 0 at birth, and has to start the (relatively) slow process of gathering knowledge/experience to be able to discover new things. Then all of it is gone at death. Obviously not quite gone, but there is no other human with the same set of knowledge and experience, and the next human to be born has to start at 0 again.

With AI, it does not die. New models may have to be retrained, but they could theoretically live forever, continually learning. There may be a time when a new AI can simply ingest the models of previous AIs, and essentially pick up where the last AI left off. Regardless, it's a big advantage.
That sounds like the Q-Star training that OpenAI has been using (maybe? I can't recall where they are with it). It more or less explores an environment and tries a bunch of things, ranking what works best. Pretty sure we'll have to move on to that, because I've heard we'll run out of human content to train on by 2026.
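Nobody outside OpenAI knows what Q-Star actually is, but the "explore, try things, rank what works" loop described above is classic tabular Q-learning. A minimal sketch on a made-up 5-state corridor (the environment, constants, and names are all illustrative, not anyone's production system):

```python
import random

# Toy 5-state corridor: start at state 0, reward for reaching state 4.
N_STATES, ACTIONS = 5, (1, -1)    # move right / move left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(500):                          # episodes
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-ranked action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Nudge the ranking of (state, action) toward the observed value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy: best-ranked action in each non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

After enough episodes the rankings converge and the greedy policy is "always move right" — the agent never reasoned about corridors, it just tried things and kept score.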

This is where the "but it can't think" argument gets murkier for me. If an AI learns to play Defender by trying 10,000 strategies and ranking them, is that not thinking? Perhaps it's not... does it matter, though? Is the goal to think, or to rock ass at Defender?

If you have a manufacturing business and your AI assistant comes up with a market for your stuff that you'd never considered, that's a straight win. Does it matter whether it arrived at the idea in a flash of brilliance or by pattern matching across 5 million potential scenarios, like Dr. Strange?