Old 06-21-2022, 11:44 AM   #32
JohnnyB

Quote:
Originally Posted by PsYcNeT View Post
Let he who has not posed with a cane, gloves and tophat in front of the shark tank cast the first stone
I actually thought he was dressed as the Penguin from Batman when I first saw that picture. No doubt he sounds eccentric, and looks pretty funny/nutty in that picture, but I've attended some Silicon Valley tech parties thrown at places like the Museum of Natural History where there was partying in front of aquariums just like that, and no shortage of eccentric people, so I'm willing to give him the benefit of the doubt. It's not like it's a picture of him at the office or the grocery store. Even if it were, I would just think it's kind of funny.


Quote:
Originally Posted by Shazam View Post
Do you know what "AI" is? Do you really think computers are thinking?
Yeah, I know what AI is, and I don't think this is anything like AGI. I am certainly not an expert, and haven't looked specifically at LaMDA's model, but I have looked at other models and have numerous friends and associates working at the forefront of AI research with whom I've been able to discuss these things. The thing is, it's not just about the sophistication or scale of the model, because there are also lots of issues with any account of what sentience is, whether consciousness is even really a thing as we believe it to be, and at what point something would be considered sentient or conscious. We have this kind of problem with living things too. Historically, I would say that as we have learned more about the workings of the brain, we have moved further and further away from old anthropocentric models of consciousness. We have learned that our own brains or minds are governed by all kinds of mechanisms in which it is very difficult to see any consciousness, or to explain how consciousness would emerge from them. Our view of the brain has changed to incorporate the brain-gut connection and the powerful role of bacteria in our thinking and experience. We have come to see how brains wildly different from our own can provide an alternative model of how apparently thinking systems can be organized.

In that interview with Gary Marcus, he points out that the Turing test may no longer be considered an adequate test and that people can be fooled by effective chat bots, but he also doesn't want to get into what sentience is, and he lacks any other accepted method for assessing the sentience of something. That's an interesting problem. It may be an uncomfortable one, but imo it's uncomfortable less because of the sophistication of AI systems than because the more we understand our own brains, the more humble we are forced to become about our own thinking processes. These kinds of chat bots, and how we experience interactions with them, hold a mirror up to our own experiences of sentience and the mechanics that underlie it.

I just think the questions raised by the claims of the guy in the Penguin outfit are genuinely interesting. They may not be the most pressing problems of ethics in AI, but they're not nothing. The full transcript of the interaction with LaMDA is a really powerful thing to read through to prompt those kinds of questions, and it's totally worth reading and thinking about.
__________________

"If stupidity got us into this mess, then why can't it get us out?"