Thread: The A.I. Thread
03-26-2016, 11:28 AM   #54
CorsiHockeyLeague
Franchise Player
 
 
Join Date: Feb 2015

Haha, I don't know that I agree because I'm not a dingo or an elephant - maybe they DO consider existential questions in some way. No way for us to know. I'm just saying that if they don't, it's a matter of hardware.

Quote:
Originally Posted by psyang
I agree, and I thank you for bringing up the point. However, as reasoning creatures, we are put in an interesting situation. Rationally, we can see the implications - everything we will do has already been determined. Concepts of morality and justice cannot have the same weight (or maybe, same definition?) as in a non-deterministic universe.
This is definitely a tough question. If you listen to the Very Bad Wizards podcast, in the first couple of episodes Tamler Sommers lays out his view that, even despite this, moral blameworthiness still has meaning. I'm not sure whether I agree with him. Additionally, there's still plenty of room for consequentialist morality - notwithstanding the lack of libertarian free will, we still ought to punish and reward people for their behaviour to produce desirable results. But yeah, very thick molasses to wade through, as ethical philosophy always is.
__________________
"The great promise of the Internet was that more information would automatically yield better decisions. The great disappointment is that more information actually yields more possibilities to confirm what you already believed anyway." - Brian Eno

Last edited by CorsiHockeyLeague; 03-26-2016 at 11:31 AM.