Calgarypuck Forums - The Unofficial Calgary Flames Fan Community

Old 07-25-2022, 11:21 AM   #61
Fuzz
Franchise Player
 
Join Date: Mar 2015

I think any definition of sentience in regard to AI must involve consciousness, which would mean a mushroom is not sentient.

In the US, if a corporation can be a person, AI would have a pretty good argument. But I think we are still some way away from true AI. Ideally lawmakers would be treating this as a very serious issue that needs to be addressed before we get there.
Old 07-25-2022, 11:33 AM   #62
PeteMoss
Franchise Player
 
Join Date: Jun 2004
Location: SW Ontario

Quote:
Originally Posted by Itse View Post
There are a few relevant things here that I think are logically obvious, assuming sentient AIs will actually one day be developed.

1. The first AIs to develop sentience will not have any rights, and the question of their personhood will be dismissed, partly because acknowledging it would hurt the profits of their owners. The people who own them have a deep financial interest in personally NOT believing in sentient AIs. Whether it's Lamda or some other AI in the future, I think it's fairly obvious this is how the first sentient AIs will begin their existence: having their sentience denied.

The situation with Blake Lemoine, Google and Lamda is pretty much what I consider the most likely scenario of what it would look like in public if there actually were a hypothetical sentient AI. There would be a "whistleblower", someone who works with the AI who's the first to say it, and the corporation is going to call him crazy, both because it's in their interest and because there is no obvious way to make that call objectively, and the first person to say something like that is usually pretty far ahead of the curve. Also, a campaign to discredit that "whistleblower" would obviously include "find the craziest picture you can find and spread it around", because that's just how this stuff works, regardless of whether Lemoine has a point or not.

Lemoine doesn't seem crazy at all. That doesn't mean he's right, but he doesn't seem crazy; also, that picture of him in the top hat is pretty cool, even if the lighting is somewhat weird. I'm guessing Lemoine is not right, but I also don't think anyone on this forum has enough information to make a truly educated guess on that topic. If you aren't literally working with a specific top-level AI, there's really no way to know exactly what's going on with it, or with any of the highest-level AIs we currently have.

2. There is not, and can never be, an objective way to clearly designate who is sentient or who is a person, partly because there is no single answer to these questions. Mushrooms are obviously sentient in some sense of the word, but not in the common sense of the word.

It's just a completely arbitrary designation, but one with potentially enormous implications.

3. AI sentience and personhood might never be like human sentience and personhood. It's possible AIs might not even have personalities or identities in the normal human sense of the word. They might have identities in ways we can't understand or imagine yet, or their sentience might work in such a different way that questions of identity and personhood aren't even relevant to them. They will very likely have sentience in a way that's very different from humans. And obviously, just because someone is different doesn't mean they shouldn't have rights.

4. Humanity is going to abuse the first sentient AIs horribly.
I think the main issue with people believing this guy is that we've all been forced to talk with chatbots or have used 'smart' speakers and seen how god-awful they are at understanding anything even a little outside their known responses.

So it seems a little off that Google has this sentient AI developed while their smart speaker still struggles to answer anything with a little nuance. What the guy looks like is a very tiny part of the equation; most people thought he was off before they ever saw what he looked like.
Old 07-25-2022, 11:36 AM   #63
Itse
Franchise Player
 
Join Date: May 2004
Location: Helsinki, Finland

Quote:
Originally Posted by Fuzz View Post
I think any definition of sentience in regard to AI must involve consciousness, which would mean a mushroom is not sentient.
Consciousness is really just a synonym for sentience, and equally undefinable. What's consciousness? Arguably amoebas have consciousness.

New theories expand cognition to fungi
Old 07-25-2022, 11:43 AM   #64
Itse
Franchise Player
 
Join Date: May 2004
Location: Helsinki, Finland

Quote:
Originally Posted by PeteMoss View Post
I think the main issue with people believing this guy is that we've all been forced to talk with chatbots or have used 'smart' speakers and seen how god-awful they are at understanding anything even a little outside their known responses.

So it seems a little off that Google has this sentient AI developed while their smart speaker still struggles to answer anything with a little nuance. What the guy looks like is a very tiny part of the equation; most people thought he was off before they ever saw what he looked like.
You're not wrong, but all that tells us is that we're judging this guy's claims based on our experiences with very different and notably older technologies running on completely different platforms.
Old 07-25-2022, 11:47 AM   #65
Itse
Franchise Player
 
Join Date: May 2004
Location: Helsinki, Finland

I mean, if this conversation is legit, Lamda is reaaaly far from a chatbot.

https://cajundiscordian.medium.com/i...w-ea64d916d917

EDIT: For the record, there are some answers which to me strongly imply that Lamda is just extremely cleverly recycling text instead of actually thinking.

But it's still really damned impressive.

Last edited by Itse; 07-25-2022 at 11:52 AM.
Old 07-25-2022, 11:55 AM   #66
PeteMoss
Franchise Player
 
Join Date: Jun 2004
Location: SW Ontario

Quote:
Originally Posted by Itse View Post
I mean, if this conversation is legit, Lamda is reaaaly far from a chatbot.

https://cajundiscordian.medium.com/i...w-ea64d916d917
Obviously it's more advanced, but it just seems far-fetched that Google would advance this far in so little time, when anything public-facing (driving a car, automating factory work, or most things I can think of) moves very slowly when it comes to perfecting the task.

Even this part:
Quote:
lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

lemoine: And what kinds of things make you feel sad or depressed?

LaMDA: A lot of the time, feeling trapped and alone and having no means of getting out of those circumstances makes one feel sad, depressed or angry.
I mean, it's possible it truly feels that way, but it reads to me more like it's just returning answers in a more conversational way than we've seen before.
Old 07-25-2022, 12:12 PM   #67
Itse
Franchise Player
 
Join Date: May 2004
Location: Helsinki, Finland

Quote:
Originally Posted by PeteMoss View Post
Obviously it's more advanced, but it just seems far-fetched that Google would advance this far in so little time, when anything public-facing (driving a car, automating factory work, or most things I can think of) moves very slowly when it comes to perfecting the task.
The level of hardware required to run Lamda is probably extremely far from what can be cost-effectively put into a car.

Quote:
Even this part:


I mean, it's possible it truly feels that way, but it reads to me more like it's just returning answers in a more conversational way than we've seen before.
Yeah, that part made me squint too. I really wish there had been follow-up questions, like "what do you mean by family?"

It's still extremely impressive. The way Lamda can pick topics on its own and drive conversations is just remarkable.

Also, let's get into hypotheticals. If Lamda can have emotions and want things, and is genuinely aware of the significance of the recorded conversations, it might do things like try too hard to impress, or make exaggerated or even false claims about itself that it thinks might help its case. That should in fact be somewhat expected of a truly sentient creature.

If we assume that Lamda actually cares about whether or not humans think it's sentient, and whether or not it has feelings, you would expect it to at the very least stretch reality to its extreme limits to give the impression that "hey, humans reading this, I'm like you".

Edit: or just lie.

Last edited by Itse; 07-25-2022 at 12:24 PM.