Old 07-25-2022, 11:10 AM   #60
Itse
Join Date: May 2004
Location: Helsinki, Finland

There are a few relevant things here that I think are logically obvious, assuming sentient AIs will actually one day be developed.

1. The first AIs to develop sentience will not have any rights, and the question of their personhood will be dismissed, for one thing because recognizing it would hurt the profits of their owners. The people who own them have a deep financial interest in personally NOT believing in sentient AIs. Whether it's Lamda or some other AI in the future, I think it's fairly obvious this is how the first sentient AIs will begin their existence: having their sentience denied.

The situation with Blake Lemoine, Google and Lamda is pretty much what I consider the most likely scenario for how an actual sentient AI would look in public. There would be a "whistleblower", someone who works with the AI and is the first to say it. The corporation would call him crazy, partly because it's in their interest, but also because there is no obvious way to make that call objectively, and the first person to say something like that is usually pretty far ahead of the curve. A campaign to discredit that "whistleblower" would also obviously include "find the craziest picture you can and spread it around", because that's just how this stuff works, regardless of whether Lemoine has a point or not.

Lemoine doesn't seem crazy at all. That doesn't mean he's right, but he doesn't seem crazy, and that picture of him in the top hat is pretty cool, even if the lighting is somewhat weird. I'm guessing Lemoine is not right, but I also don't think anyone on this forum has enough information to make a truly educated guess on that topic. If you aren't literally working with a specific top-level AI, there's really no way to know exactly what's going on with it, or with any of the highest-level AIs we currently have.

2. There is not, and can never be, an objective way to clearly designate who is sentient or who is a person, for one thing because there is no single answer to these questions. Mushrooms are obviously sentient in some sense of the word, but not in the common sense of the word.

It's just a completely arbitrary designation, but one with potentially enormous implications.

3. AI sentience and personhood might never be like human sentience and personhood. AIs might not even have personalities or identities in the normal human sense of the word. They might have identities in ways we can't yet understand or imagine, or their sentience might work in such a different way that questions of identity and personhood aren't even relevant to them. They will very likely be sentient in a way that's very different from humans. And obviously, just because someone is different doesn't mean they shouldn't have rights.

4. Humanity is going to abuse the first sentient AI's horribly.

Last edited by Itse; 07-25-2022 at 11:16 AM.