Old 06-21-2022, 05:00 PM   #36
Shazam
Franchise Player
Join Date: Aug 2005
Location: Memento Mori

Quote:
Originally Posted by JohnnyB
I actually thought he was dressed as the Penguin from Batman when I first saw that picture. No doubt he sounds eccentric, and looks pretty funny/nutty in that picture, but I've attended some Silicon Valley tech parties thrown at places like the Museum of Natural History where there was partying in front of aquariums like that and no shortage of eccentric people, so I'm willing to give him the benefit of the doubt. It's not like it's a picture of him at the office or the grocery store like that. Even if it was though, I would just think it's kind of funny.

Yeah, I know what AI is, and I don't think this is anything like AGI. I'm certainly not an expert, and haven't looked specifically at LaMDA's model, but I have looked at other models and have had numerous friends and associates working at the forefront of AI research who I've been able to discuss things with. The thing is, it's not just about the sophistication or scale of the model, because there are also lots of issues with any account of what sentience is, whether consciousness is even really a thing as we believe it to be, and at what point something would be considered sentient or conscious. We have this kind of problem with living things too.

Historically, I would say that as we have learned more about the workings of the brain, we have moved further and further away from old anthropocentric models of consciousness. We have learned that our own brains and minds are governed by all kinds of mechanisms in which it is very difficult to see any consciousness, or to explain how consciousness would emerge from them. Our view of the brain has changed to incorporate the brain-gut connection and the powerful role of bacteria in our thinking and experience. And we have come to see how brains wildly different from ours can provide alternative models of how apparently thinking systems can be organized.

In that interview with Gary Marcus, he points out that the Turing test may no longer be considered an adequate test and that people can be fooled by effective chat bots, but he also doesn't want to get into what sentience is, and he lacks any other accepted method for assessing the sentience of something. That's an interesting problem. It may be an uncomfortable one, but imo it's uncomfortable less because of the sophistication of AI systems than because the more we understand our own brains, the more humble we are forced to become about our own thinking processes. These kinds of chat bots, and how we experience interactions with them, hold a mirror up to our own experience of sentience and the mechanics that underlie it.

I just think those questions raised by the claims of the guy in the Penguin outfit are genuinely interesting. They may not be the most pressing problems of ethics in AI, but they're not nothing. The full transcript of the interaction with LaMDA is a really powerful thing to read through to prompt those kinds of questions and totally worth reading and thinking about.

For fata's sake, AI all boils down to statistical analysis. If you think that's how the human brain works, or that statistics alone amounts to sentience, well, no, it doesn't.
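The "boils down to statistical analysis" point is easy to see in miniature. Here's a toy sketch (my own illustration in Python, nothing to do with LaMDA's actual architecture): a bigram model that "predicts" the next word purely from observed frequencies — counting, not understanding.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": tally which word follows which
# in a tiny corpus, then predict by raw frequency alone.
corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed next word."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (seen after "the" twice, vs. "mat" once)
```

Scale the counting up by many orders of magnitude and condition on long contexts instead of a single word, and you get something in the family LaMDA belongs to — vastly more capable, but the same basic move: statistics over observed text.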

Machine learning is still only somewhat useful. It still fails on many, many use cases. GIGO still matters.

There are lots of cloud AIs. I have a use case that would save me hundreds of thousands of dollars, and they all suck at my needs. I can even have them digest my entire dataset and they still suck.
__________________
If you don't pass this sig to ten of your friends, you will become an Oilers fan.