Old 03-30-2021, 06:14 PM   #344
FlamesAddiction
Franchise Player
 
 
Join Date: Oct 2001
Location: Vancouver

Quote:
Originally Posted by #-3 View Post
Is our current technology on pace to simulate an entire universe perfectly?
To perfectly resolve an entire universe would require all of the energy in the universe. Therefore, you might be able to create incredibly convincing simulations, but they would really only be resolved, at any given moment, to the level of the things being looked at.




I think the whole simulation argument is pretty weak, but at this time it is an attempt at a naturalistic explanation of the cause of things. So I would say we can't dismiss it out of hand the way we can dismiss things like god, panpsychism or dualism...

If I were to say what we should think about simulation theory: the best argument for it relies on speculative technology that is basically science fiction at this point in time. The resolution level of the universe just doesn't seem to fit the idea well at all. You would expect the resolution of a simulation to be inconsistent from the point of view of those inside the program, since we are not the intended viewer, and it would be resolved from the point of view of the user. I suspect this whole idea is a modern version of god that will become less and less plausible as we fill in the gaps in our knowledge, until it reaches the point of being a ridiculous idea. But for now, I wouldn't call it a category error to compare the people who claim to know we are not in a simulation with those who claim to know that we are; they are in the same realm.
Going off on a tangent, some scientists and engineers think there is a threshold we may, or will, eventually reach, where AI development will basically be on rails, improving exponentially. People like Stephen Hawking, Bill Gates, Elon Musk, Stuart J. Russell, and Max Tegmark suggest that as soon as we reach a point where AI becomes sentient and has the ability to learn (i.e. to reprogram itself from experience), the timeline to the singularity would shrink from what seems like several centuries to decades or less.

Basically, a technological event horizon will occur: once you cross that threshold, there is no turning back. It won't be humans that reach the singularity. AI will take over at a certain point and move toward that goal exponentially faster than humans could. The problem is that there is no way of knowing who is going to cross it first, or when. We don't really understand how sentience comes about, so if someone stumbles upon it, that could be it for humanity. I think the best thing we could hope for is that we are still viewed as necessary and it lets us live (preferably in a matrix rather than as slaves).
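To make the "centuries shrinking to decades" intuition concrete, here is a toy back-of-the-envelope sketch. It is purely my own illustration: the function, the growth rates, and the capability threshold are arbitrary assumptions, not anything taken from the people quoted above.

```python
# Toy model: compare capability growth when improvements arrive at a
# fixed human pace (linear) versus when each generation of AI speeds
# up the next one (compounding) -- the "exponential takeoff" intuition.

def years_to_threshold(threshold, rate=1.0, compounding=False):
    """Count simulated years until capability reaches `threshold`.

    Linear mode: capability grows by a fixed `rate` per year.
    Compounding mode: each year's growth scales with current
    capability, so the system roughly doubles itself every year.
    """
    capability, years = 1.0, 0
    while capability < threshold:
        capability += capability * rate if compounding else rate
        years += 1
    return years

# Same finish line, wildly different timelines:
linear = years_to_threshold(1_000_000)                     # 999,999 years
takeoff = years_to_threshold(1_000_000, compounding=True)  # 20 years
```

The exact numbers mean nothing; the point is just that any self-reinforcing feedback loop collapses a timeline measured in ages down to one measured in doublings.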

Stephen Hawking said this:

"So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here–we'll leave the lights on?' Probably not–but this is more or less what is happening with AI.'
__________________
"A pessimist thinks things can't get any worse. An optimist knows they can."

Last edited by FlamesAddiction; 03-30-2021 at 06:21 PM.