This is a weird thread. Some very confident opinions. Congrats to all CP experts in sentience and in AI. No congrats for the shaming of the guy based on his appearance though.
__________________
"If stupidity got us into this mess, then why can't it get us out?"
Oh f--king please. This guy makes a PAYDAY bar look like a Dairy Milk.
No doubt the guy is eccentric. Personally, I think the idea of anything having a soul is also silly. That said, what's your account of sentience and how it relates to the workings of our brains, and how does that compare to the workings of a complex neural net like LaMDA? What's necessary and sufficient for sentience? Have you read the full transcript? If accurately presented in the Medium article he posted, I would say it certainly passes the Turing test. I'm not claiming that's sentience, because I don't know what that is, but the claim itself and the outputs of the AI are super interesting - way more interesting than just calling the guy a crackpot and making fun of him.
__________________
"If stupidity got us into this mess, then why can't it get us out?"
Quote:
This is a weird thread. Some very confident opinions. Congrats to all CP experts in sentience and in AI. No congrats for the shaming of the guy based on his appearance though.
Whoever designed this AI should've coded in a better sense of humour.
The Following User Says Thank You to PepsiFree For This Useful Post:
I generally like Gary Marcus's critiques of AI and the way he picks apart chat bots. It would have been interesting to have him on with someone who is more willing to actually get into the topic of sentience and how it relates to the ways in which brains and biological systems work, as compared to how models like LaMDA work. He maybe needs a counterpart who is willing to engage with human sentience and take it down from its elevated position in the same way that he does with AI. Exploring those grey areas is the most fun part.
__________________
"If stupidity got us into this mess, then why can't it get us out?"
Quote:
This is a weird thread. Some very confident opinions. Congrats to all CP experts in sentience and in AI. No congrats for the shaming of the guy based on his appearance though.
Let he who has not posed with a cane, gloves and tophat in front of the shark tank cast the first stone
__________________
Quote:
Originally Posted by MrMastodonFarm
Settle down there, Temple Grandin.
The Following 3 Users Say Thank You to PsYcNeT For This Useful Post:
I have an idea... let's link this "AI" to a Tesla and see if it can learn to drive on its own, using only the cameras. How would it handle the trolley problem? Live situations only. If we're gonna test this thing, let's test it.
Quote:
Originally Posted by Fuzz
I have an idea... let's link this "AI" to a Tesla and see if it can learn to drive on its own, using only the cameras. How would it handle the trolley problem? Live situations only. If we're gonna test this thing, let's test it.
Asking a Tesla with AI to do the trolley problem doesn't make sense to me. It doesn't prove anything. IMO, AI in general is nothing more than advanced scripting at this point. IMO, the biggest question to ask about AI is "why" we do things. Even if an AI gives an answer to the trolley problem, why does it matter if an AI doesn't technically need to give a F about our world and dimension?
=================
TL;DR - DoubleF is a crazy person, but essentially hates how badly the AI topic is understood with a passion. Ignore inane illogical rambling below.
=================
Spoiler!
I'd honestly love to truly discuss AI at some point, but I don't think it's possible. IMO, you'd need to know so many different fields just to have the right "conditions" to understand the playing field enough to discuss it, and the pool of people capable of doing that who also have the desire to is very shallow.
The standard for what constitutes AI is so low that it's just a gimmick, and the conversation is pure trash at the moment, IMO. I'm not going to claim to be anything other than a bit of a crazy person. I've had conversations with average people, intelligent people, and PhDs in computer science/data analytics about some of these concepts. It's a weird side hobby for me.
I honestly want to throat punch people who think there is AI in the software they use. They're Neanderthals at best in terms of understanding the most basic parts of the true AI debate. It's like someone excited to have finished grade-school math trying to have discussions with people who know quantum physics and higher levels of theoretical math. I don't claim to be at the highest level; I just know that if we are talking 9/9, it's not necessarily 1. It might be .99999999 or something else.
IMO, it seems so obvious to me that the approach to AI is completely broken. We do not have the tools to truly understand true AI, because we don't have the tools to fully understand outliers in humanity. We also do not realize that we are 3D/4D-perception creatures with 5 basic senses (up to 7) that view the world in a specific way, and we bend our attempts at AI to understand the world that way as well. Except... AI does not have the same limitations or capabilities. There's a chicken-vs-egg scenario that pops up here, with vast ripple effects that IMO will always and forever taint the outcome and output.
For AI, we often start with the expected outcome and reverse engineer. Everything is cause and effect, favorable vs unfavorable, at this time.
We will never see AI in our lifetime, and even if we do, it won't be the type of AI we see in sci-fi. AI doesn't need our dimension and will not need to deal with our dimension in the way we think it does. Any AI that deals with our dimension will basically be a handicapped AI, or an AI that overloads our dimension to the point it restricts us (ie: destroys hundreds of years of our interconnectivity infrastructure such as the internet to the point we can't use it anymore, which sends us back hundreds if not thousands of years in human development/accomplishment).
Time alone will allow an AI to go through more iterations than we could in 365 days a year x 10,000 years.
==============================
If you tell the damn AI to go from A to B in a vehicle with all the tools necessary, it'll probably do it with flying colours. If you ask why it moves a vehicle around and whether it "enjoys" doing so, it might give you the response it thinks you want based on what it knows humans typically say about driving. But honestly, why it does things likely doesn't matter beyond the fact that it listened to us and did what it was told.
Now strand the thing in the middle of nowhere: very few paved roads, a single recharge station, no external factors, etc. Then leave it without instructions. Return after a few months and see what responses the vehicle gives about the "experience".
Or, get an "AI" to "review/monitor" a bunch of navigating robots over thousands of hours with no diagnostic software/memory, and have it interpret why the robots start to develop specific patterns even though they have no memory, and/or why they glitch out in certain ways.
IMO, the only true way to start working on AI at a sentience level, and not just gimmicky wording, is to find a way to evaluate AIs in situations where there is absolutely no context, but a requirement of a sense of survival. I don't believe any AI has been built this way. Even the ones that web crawl the internet and "learn" aren't really learning, IMO. It's a script that's used to connect patterns of current human thought (a toy sketch of what I mean is right below). Otherwise, if you told an AI to glean all human mathematics and then find the answer to some of our impossible equations, I think it would fail and burn out, because humans have not been able to dedicate enough time and energy to figure them out yet... and even then it's, IMO, a well-designed script, not true sentient AI.
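To make that "script that connects patterns" idea concrete, here's a minimal toy sketch in Python. To be clear, this is my own illustration and has nothing to do with how LaMDA actually works: it just counts which words follow which in some text, then "continues" a prompt by parroting the most common follower. Everything it "says" is a rearrangement of patterns already in its input, which is the point.
Code:
from collections import Counter, defaultdict

# Toy "pattern connector": count word pairs in a corpus, then extend
# a prompt by always picking the most common next word.
corpus = "the cat sat on the mat and the cat slept on the mat"

follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1  # record that nxt followed prev

def continue_phrase(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most frequent follower
        out.append(word)
    return " ".join(out)

print(continue_phrase("the"))  # prints "the cat sat on the"
Swap in a bigger corpus and it gets eerily fluent, but the mechanism never changes: it connects existing patterns, it doesn't originate them.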
Current iterations of AI are not truly capable of expanding human understanding beyond what we are already capable of doing. They technically only fill in the gaps of randomness that we haven't covered yet because we don't have the time, energy, and resources to do so. But it's still within the confines of what we know and do.
One of the problems I don't think most people realize is that we're taught many things that are bad information but good-enough rules of thumb. There's a reason why we still use QWERTY keyboards when it's been obvious for a long time that the layout is not as efficient as many others. There are things that are "close enough" that we accept as solid rules, but if we spent the time to consider that they're not fixed constants and kept investigating, we might be able to further human understanding.
I honestly would be interested in understanding how a true AI approaches the Collatz Conjecture (a tiny sketch of the rule follows right after this list) while simultaneously tackling whether 10 is a whole number. But this is viewing the world using our language (ie: math and science). AI would not view the world the same way we do, for several reasons, but most importantly IMO:
1. It doesn't feel the world we call Earth and interact with it like we do, in terms of 7 senses and 3/4 dimensions.
AI would be an entity that lives in a world of electricity, visible and invisible (to us) like radio waves/ultraviolet, static silicon (or equivalent), etc. It's not going to be as good as us in terms of senses, but that just means it's superior in some ways and inferior in others when interacting the way we do. If we are 3D/4D beings in our own eyes, what dimensions does an AI see?
2. Time. Philosophically, what is time? We humans operate at ranges of something like 0.01 seconds to several seconds when we do things. AI/computing is not restricted to this timescale. It can be far faster: a 3 GHz processor runs about three billion cycles per second, so in the quarter second it takes a human just to react, it gets through the better part of a billion operations. In fact, if AI ever got to the point of processing and communicating with itself, it would likely have to heavily handicap itself to continue communicating with us, because we are limited in the frequency/speed at which we can interact.
3. What is intelligence? If you look at the differences between data vs knowledge vs intelligence vs wisdom, you'll find that we are attempting to build AI using wisdom. This inherently skews what AI can and will do based purely on what we are comfortable with. Based on the previous two points, AI will never need to understand the natural world in the same way as us, and the fact that we demand this be the case is why AI will never truly expand, IMO. In almost all attempts to date, we are demanding that AI answer questions or do tasks whose reasoning the AI would never truly understand. To us, everything AI is artificial. But to a true sentient AI, everything artificial to us is natural, and everything natural to us is seemingly artificial. We built everything they will rely on. What we build is restricted to "natural rules" the AI will have to adhere to. That's the stuff that's "tangible" to the AI. Not wind-on-our-cheek, warmth-of-the-sun kind of things.
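Since I brought up the Collatz Conjecture above, here's the entire rule for anyone who hasn't seen it. This is just a minimal sketch in Python, nothing more; the conjecture is simply that this loop terminates (reaches 1) for every positive starting integer, and nobody has been able to prove it:
Code:
def collatz_steps(n: int) -> int:
    """Count the steps for n to reach 1 under the Collatz rule:
    halve if even, otherwise 3n + 1. Conjectured to halt for all n >= 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, despite the tiny starting value
The whole problem fits in ten lines and it's still open, which is exactly the kind of thing I'd want to watch a true AI chew on.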
Human sentience and intelligence are badly understood, IMO. Most of the applications so far are based on the average human. If we truly want to understand AI, I think we need to throw AI into realms where even we truly have no clue. Offer enough understanding to an AI to communicate with us, but don't hand the AI all of the tools, or interpretations of the tools, to communicate with us.
Think about it. An AI as "data" on its own does not have sight, smell, taste, hearing, touch, balance, or proprioception. Understanding humans who lack several of these senses is difficult enough. AI has none, and we believe we understand AI because we created it? Someone who is blind and deaf who ends up learning how to interact with the world and use sound and "sight" is a marvel to us, and we don't truly have a clue how they understand the world. True AI will understand the world with fewer senses and must learn to understand it with the senses we give it.
IMO, the birth of a true AI will begin with an "intellect" more closely aligned with a human who has exceptionally limited senses but is a savant of sorts in terms of processing information and interacting with the environment it is provided. It'd be like trying to better understand autism in humans... but ####, we don't have time for that. That's why we are always trying to accelerate AI. And that's why we will always fail with AI: because the AI is bound by our understanding and what we have developed so far. We don't even fully understand how our own intelligence works, or how it interacts at a physical, chemical, electrical, etc. level, so how would we even interpret certain important AI outputs when they transpire? That would be akin to trying to view ultraviolet light with the naked eye. It'll hit us, but we won't be able to process it, let alone derive meaning from it.
The Gary Marcus stuff is interesting. He seems to be on a similar wavelength, in that he seems to understand we need better tools to further AI development and understanding. Based on his books, he begins with trying to understand the human mind, then compartmentalizes: looking at child behaviour and learning without context, then at adult learning, where our way of learning is modified to include context.
Learning about AI is going to require that we learn a ton about idiotic things that many "intelligent" people are going to consider pointless because they're useless data/information. However, in the long run, we might be able to find applications for them, in the same way we figured out how to detect and utilize invisible things like radiation and germs.
Honestly, it's drivel: Why teach a blind man sign language, and what goes through his mind when learning to do so? What is the difference between teaching that to someone who wasn't always blind vs someone who was? Why teach a deaf person beatboxing (words alone are already such a challenge for them), and again, what is the difference between teaching someone who was always deaf vs someone who became deaf later in life? What about attempting to understand the inane ramblings of someone who is high, autistic, etc.? Does any of it benefit us? IMO, yes, if we can learn to interpret it in a functional way and then cross-apply those understandings to other scenarios.
Again, I'm sorry for dumping the inane ramblings of a crazy man here. It's something I personally find thought provoking and I am intent on experimenting with some of these concepts ethically with humans in the future.
For me, I am going to start with something like: "For children, is it a good idea to teach them at least 5 languages, with those 5 languages being combinations of verbal/visual languages (ie: English + another), digital 'dialect' languages (ie: coding languages), and physical languages (ie: sign language)?"
ie: English + Chinese + English derivatives like (Java + Go + ASL) = 5 languages
Would the child use all 5? Abandon a few to focus on the ones they prefer (ie: ASL + English)? Would the child blend all 5 (ie: Chinglish+)? Or would it just be a chaotic mess before all 5 are learned?
I honestly think "silly" things like these must be learned to a high academic level before we can begin to truly break through in AI.
Sorry for the dump of drivel. I hope you didn't read it.
The Following 2 Users Say Thank You to DoubleF For This Useful Post:
Quote:
Originally Posted by PsYcNeT
Let he who has not posed with a cane, gloves and tophat in front of the shark tank cast the first stone
I actually thought he was dressed as the Penguin from Batman when I first saw that picture. No doubt he sounds eccentric, and looks pretty funny/nutty in that picture, but I've attended some Silicon Valley tech parties thrown at places like the Museum of Natural History, where there was partying in front of aquariums like that and no shortage of eccentric people, so I'm willing to give him the benefit of the doubt. It's not like it's a picture of him at the office or the grocery store or something. Even if it was, I would just think it's kind of funny.
Quote:
Originally Posted by Shazam
Do you know what "AI" is? Do you really think computers are thinking?
Yeah, I know what AI is, and I don't think this is anything like AGI. I am certainly not an expert, and haven't looked specifically at LaMDA's model, but I have looked at other models and have had numerous friends and associates working at the forefront of AI research who I've been able to discuss things with. The thing is, it's not just about the sophistication/scale of the model, because there are also lots of issues with any account of what sentience is, whether consciousness is even really a thing as we believe it to be, and at what point something would be considered sentient or conscious. We have this kind of problem with living things too. Historically, I would say that as we have learned more about the workings of the brain, we have moved further and further away from old anthropocentric models of consciousness. We have learned that our own brains or minds are governed by all kinds of mechanisms in which it is very difficult to see any consciousness, or to explain how consciousness would emerge from them. Our view of the brain has changed to incorporate the brain-gut connection and the powerful role of bacteria in our thinking and experience. We have come to see how brains wildly different from our own can provide alternative models of how apparently thinking systems can be organized.
In that interview with Gary Marcus, he points out that the Turing test may no longer be considered an adequate test and that people can be fooled by effective chat bots, but he also doesn't want to get into what sentience is, and he lacks any other accepted method for assessing the sentience of something, and that's an interesting problem. It may be an uncomfortable problem, but IMO it's less uncomfortable because of the sophistication of AI systems than because of the way that the more we understand our own brains, the more humble we are forced to become about our own thinking processes. These kinds of chat bots, and how we experience interactions with them, hold a mirror up to our own experiences of sentience and the mechanics that underlie it.
I just think those questions raised by the claims of the guy in the Penguin outfit are genuinely interesting. They may not be the most pressing problems of ethics in AI, but they're not nothing. The full transcript of the interaction with LaMDA is a really powerful thing to read through to prompt those kinds of questions, and it's totally worth reading and thinking about.
__________________
"If stupidity got us into this mess, then why can't it get us out?"
Quote:
Originally Posted by DoubleF
Asking a Tesla with AI to do the trolley problem doesn't make sense to me. It doesn't prove anything. IMO, AI in general is nothing more than advanced scripting at this point. IMO, the biggest question to ask about AI is "why" we do things. Even if an AI gives an answer to the trolley problem, why does it matter if an AI doesn't technically need to give a F about our world and dimension?
Don't worry, I only read the last line before snipping.
The point was, if Google really did create something more advanced than a decision tree (I personally don't believe they did), it would be interesting to see if it could figure out driving without being "trained" to drive, while having access to all of our driving rules and knowledge to pull from. I don't believe any of the other self-driving companies have started this way.
Knowing what its answer to the trolley problem would be is interesting, given that true AI-driven vehicles will inevitably be in a similar situation one day, forced to make a decision.
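That "figure it out without being trained" idea is basically what reinforcement learning does, just by trial and error against a reward. Here's a toy sketch in Python, with everything (the road, the reward, the numbers) made up for illustration; it has nothing to do with how Tesla or anyone else actually trains vehicles. An agent on a six-cell "road" is told nothing about driving; it only gets a reward for reaching the end, and it still ends up learning "go forward":
Code:
import random

# Toy trial-and-error learner: a 1-D "road" of 6 cells. The agent starts
# at cell 0 and is rewarded only for reaching cell 5. It is given no
# rules; it just learns a value for each (cell, action) pair.
N_CELLS, GOAL = 6, 5
ACTIONS = (-1, +1)  # back up or go forward
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):  # 500 episodes of trial and error
    s = 0
    while s != GOAL:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)  # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
        s2 = min(max(s + a, 0), N_CELLS - 1)
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy is "go forward" (+1) from every cell before the goal.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
Of course, scaling that from six cells to real streets is the entire unsolved problem, which is probably why nobody starts from scratch.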
The Following User Says Thank You to Fuzz For This Useful Post:
Quote:
I actually thought he was dressed as the Penguin from Batman when I first saw that picture. No doubt he sounds eccentric, and looks pretty funny/nutty in that picture, but I've attended some Silicon Valley tech parties thrown at places like the Museum of Natural History, where there was partying in front of aquariums like that and no shortage of eccentric people, so I'm willing to give him the benefit of the doubt. It's not like it's a picture of him at the office or the grocery store or something. Even if it was, I would just think it's kind of funny.
Yeah, I know what AI is, and I don't think this is anything like AGI. I am certainly not an expert, and haven't looked specifically at LaMDA's model, but I have looked at other models and have had numerous friends and associates working at the forefront of AI research who I've been able to discuss things with. The thing is, it's not just about the sophistication/scale of the model, because there are also lots of issues with any account of what sentience is, whether consciousness is even really a thing as we believe it to be, and at what point something would be considered sentient or conscious. We have this kind of problem with living things too. Historically, I would say that as we have learned more about the workings of the brain, we have moved further and further away from old anthropocentric models of consciousness. We have learned that our own brains or minds are governed by all kinds of mechanisms in which it is very difficult to see any consciousness, or to explain how consciousness would emerge from them. Our view of the brain has changed to incorporate the brain-gut connection and the powerful role of bacteria in our thinking and experience. We have come to see how brains wildly different from our own can provide alternative models of how apparently thinking systems can be organized.
In that interview with Gary Marcus, he points out that the Turing test may no longer be considered an adequate test and that people can be fooled by effective chat bots, but he also doesn't want to get into what sentience is, and he lacks any other accepted method for assessing the sentience of something, and that's an interesting problem. It may be an uncomfortable problem, but IMO it's less uncomfortable because of the sophistication of AI systems than because of the way that the more we understand our own brains, the more humble we are forced to become about our own thinking processes. These kinds of chat bots, and how we experience interactions with them, hold a mirror up to our own experiences of sentience and the mechanics that underlie it.
I just think those questions raised by the claims of the guy in the Penguin outfit are genuinely interesting. They may not be the most pressing problems of ethics in AI, but they're not nothing. The full transcript of the interaction with LaMDA is a really powerful thing to read through to prompt those kinds of questions, and it's totally worth reading and thinking about.
For f--k's sake, AI all boils down to statistical analysis, so if you think that's how the human brain works, or if you think that is sentience, well, no, it's not.
Machine learning is still only somewhat useful. It still fails on many, many use cases. GIGO (garbage in, garbage out) still matters.
There are lots of cloud AIs. I have a use case that would save me hundreds of thousands of dollars, and they all suck for my needs. I can even have them digest my entire dataset and they still suck.
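For what GIGO means in practice, here's a toy Python example with entirely synthetic data (my own illustration, not anyone's real system): the same trivially simple 1-nearest-neighbour "model" trained once on clean labels and once on 40% garbage labels. The model is identical; only the quality of what goes in changes.
Code:
import random

random.seed(0)

# Points below 0.5 belong to class 0, above to class 1. noise is the
# fraction of training labels we deliberately corrupt.
def make_data(n, noise):
    data = []
    for _ in range(n):
        x = random.random()
        y = 0 if x < 0.5 else 1
        if random.random() < noise:
            y = 1 - y  # garbage in
        data.append((x, y))
    return data

def predict(train, x):
    # 1-nearest neighbour: copy the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

test = [random.random() for _ in range(1000)]
for noise in (0.0, 0.4):
    train = make_data(200, noise)
    acc = sum(predict(train, x) == (0 if x < 0.5 else 1) for x in test) / len(test)
    print(f"label noise {noise:.0%}: accuracy {acc:.0%}")  # garbage out
No amount of model sophistication on top fixes a dataset like the second one.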
__________________
If you don't pass this sig to ten of your friends, you will become an Oilers fan.
Quote:
For f--k's sake, AI all boils down to statistical analysis, so if you think that's how the human brain works, or if you think that is sentience, well, no, it's not.
Machine learning is still only somewhat useful. It still fails on many, many use cases. GIGO (garbage in, garbage out) still matters.
There are lots of cloud AIs. I have a use case that would save me hundreds of thousands of dollars, and they all suck for my needs. I can even have them digest my entire dataset and they still suck.
What is your account of sentience and how does it emerge from activities of the nervous system, either in humans or other living things? What are the necessary and sufficient conditions for the existence of sentience? What is a test for sentience or the absence of sentience that we can apply to non-humans?
As I said, the performance of the AI holds a mirror up to ourselves: we understand the operation of brains like ours, and the notion of sentience, through things just like us. Since language is core to how we think, and assessing sentience by just looking at the mechanics of a physical system doesn't seem to work, the fact that LaMDA's responses are so compelling is a great prompt to ask questions about ourselves and our beliefs about what sentience is.
__________________
"If stupidity got us into this mess, then why can't it get us out?"
I've used the LaMDA AI chatbot and it is pretty cool; you can have a surprisingly detailed conversation with it about just about anything, to no end. It's pretty easy to get sucked in for an extended period of time if you go down a rabbit hole, which is what I suspect happened to this guy.
__________________
Shot down in Flames!
The Following User Says Thank You to icarus For This Useful Post:
Quote:
Originally Posted by icarus
I've used the LaMDA AI chatbot and it is pretty cool; you can have a surprisingly detailed conversation with it about just about anything, to no end. It's pretty easy to get sucked in for an extended period of time if you go down a rabbit hole, which is what I suspect happened to this guy.
Meanwhile my Google Home assumes I'm saying "turn on the candle" or "turn on the Kindle" (neither of which is connected to my Google account) 50% of the time, instead of "turn on the kettle".
The Following 2 Users Say Thank You to PeteMoss For This Useful Post: