Franchise Player
|
Quote:
Originally Posted by Fuzz
I have an idea...let's link this "AI" to a Tesla and see if it can learn to drive on its own, using only the cameras. How would it handle the trolley problem? Live situations only. If we are gonna test this thing, let's test it.
|
Asking a Tesla with AI to do the trolley problem doesn't make sense to me. It doesn't prove anything. IMO, AI in general is nothing more than advanced scripting at this point. IMO, the biggest question to ask about AI is "why" we do things. Even if an AI gives an answer to the trolley problem, why does it matter, when an AI doesn't technically need to give a F about our world and dimension?
=================
TL;DR - DoubleF is a crazy person, but passionately hates how badly the AI topic is understood. Ignore the inane, illogical rambling below.
=================
Spoiler!
I'd honestly love to truly discuss AI at some point, but I don't think it's possible. IMO, you'd need knowledge from so many different fields just to have the right "conditions" to understand the playing field, and the pool of people capable of doing that who also have the desire to is very shallow.
The standard for what constitutes AI is so low that it's just a gimmick and pure trash of a conversation at the moment IMO. I won't claim to be anything other than a bit of a crazy person. I've had conversations with average people, intelligent people, and PhDs in computer science/data analytics about some of these concepts. It's a weird side hobby for me.
I honestly want to throat punch people who think there is AI in the software they use. They're Neanderthals at best in terms of understanding the most basic parts of the true AI debate. It's like someone excited to have just finished grade-school math trying to have discussions with people who know quantum physics and higher levels of theoretical math. I don't claim to be at the highest level; I just know that if we're talking about 9/9, it's not necessarily 1. It might be .99999999 or something else.
IMO, it seems obvious to me that the approach to AI is completely broken. We do not have the tools to truly understand true AI because we don't have the tools to fully understand outliers in humanity. We also don't acknowledge that we are 3D/4D-perception creatures with 5 basic senses (up to 7) that view the world in a specific way, and we bend our attempts at AI to understand the world that way as well. Except... AI does not have the same limitations or capabilities. There's a chicken-vs-egg scenario that pops up here, with vast ripple effects IMO that will always and forever taint the outcome and output.
For AI, we often start with the expected outcome and reverse engineer. Everything is cause and effect, favorable vs. unfavorable, at this time.
We will never see AI in our lifetime, and even if we do, it won't be the type of AI we see in sci-fi. AI doesn't need our dimension and will not need to deal with our dimension in the way we think it does. Any AI that deals with our dimension will basically be a handicapped AI, or an AI that overloads our dimension to the point that it restricts us (ie: destroys hundreds of years of our interconnectivity infrastructure, such as the internet, to the point we can't use it anymore, which sends us back hundreds if not thousands of years in human development/accomplishment).
Time alone will allow an AI to go through more iterations than a human could in 365 days a year x 10,000 years.
==============================
If you tell the damn AI to go from A to B in a vehicle with all the tools necessary, it'll probably do it with flying colors. If you ask why it moves a vehicle around and whether it "enjoys" doing so, it might give you a response it thinks you want, based on what it knows humans typically say about driving. But honestly, why it does things likely doesn't matter beyond the fact that it listened to us and did what it was told.
Now strand the thing in the middle of nowhere: very few paved roads, a single charging station, no external factors, etc. Then leave without instructions. Return after a few months and see what responses the vehicle gives to the "experience".
Or, get an "AI" to "review/monitor" a bunch of navigating robots over thousands of hours with no diagnostic software/memory, and have it interpret why the robots develop specific patterns even though they have no memory, and/or why they glitch out in certain ways.
IMO, the only true way to start working on AI at a sentience level, and not just gimmicky wording, is to find a way to evaluate them in situations with absolutely no context but a requirement for a sense of survival. I don't believe any AI has been built this way. Even the ones that crawl the internet and "learn" aren't really learning IMO. It's a script used to connect patterns of current human thought. Otherwise, if you told an AI to glean all human mathematics and then find the answers to some of our impossible equations, I think it would fail and burn out, because humans have not been able to dedicate enough time and energy to figure them out yet... and even then, it's IMO a well-designed script, not a true sentient AI.
Current iterations of AI are not truly capable of expanding human understanding beyond what we are capable of doing. They technically only fill in the gaps of randomness that we haven't gotten to yet because we don't have the time and energy/resources to do so. But it's still within the confines of what we know and do.
One of the problems I don't think most people realize is that we're taught many things that are bad information but good-enough rules of thumb. There's a reason we still use QWERTY keyboards when it's been obvious for a long time that it's not as efficient as many other layouts. There are things that are "close enough" that we accept as solid rules, but if we spent the time to actually consider that they're not fixed constants and kept investigating, we might be able to further human understanding.
I honestly would be interested in understanding how a true AI approaches the Collatz Conjecture while simultaneously tackling whether 10 is a whole number. But this is viewing the world using our language (ie: math and science). AI would not view the world the same way we do, for several reasons, but most importantly IMO:
1. It doesn't feel the world we call Earth and interact with it like we do, with our 7 senses and 3/4 dimensions.
AI would be an entity that lives in a world of electricity, wavelengths visible and invisible (to us) like radio waves/ultraviolet, static silicon (or equivalent), etc. It's not going to be as good as us in terms of senses, which makes it both superior and inferior to us when it comes to interacting the way we do. If we are 3D/4D beings in our own eyes, what dimensions does an AI see?
2. Time. Philosophically, what is time? We humans operate at ranges of something like 0.01 seconds to several seconds when we do things. AI/computing is not restricted to this timescale; it can be far faster. In fact, if AI ever got to the point of processing and communicating with itself, it would likely have to heavily handicap itself to continue communicating with us, because we are limited in the frequency/speed at which we can interact.
3. What is intelligence? If you look at the differences between data vs. knowledge vs. intelligence vs. wisdom, you'll find that we are attempting to build AI using wisdom. This inherently skews what AI can and will do based purely on what we are comfortable with. Given the previous two points, AI will never need to understand the natural world in the same way as us, and the fact that we demand this be the case is why AI will never truly expand IMO. In almost all attempts to date, we demand that AI answer questions or do tasks whose reasoning the AI would never truly understand. To us, everything AI is artificial. But to a true sentient AI, everything artificial to us is natural, and everything natural to us is seemingly artificial. We built everything they will rely on. What we build is restricted to "natural rules" the AI will have to adhere to. That's the stuff that's "tangible" to the AI. Not wind-on-our-cheek, warmth-of-the-sun kind of things.
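As an aside, for anyone unfamiliar with the Collatz Conjecture mentioned above: it's trivial to state in code but has never been proven, which is exactly why it's an interesting stress test. A minimal sketch (the function name is just mine):

```python
def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1: halve if even, else 3n + 1.
    The conjecture (unproven) is that every positive integer gets there."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 famously wanders for 111 steps before hitting 1
```

Any human or machine can run this all day; the open question is *why* it always seems to terminate, which no amount of brute iteration answers.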
Human sentience and intelligence are badly understood IMO. Most of the applications so far are based on the average human. If we truly want to understand AI, I think we need to throw AI into realms where even we truly have no clue. Offer an AI enough understanding to communicate with us, but don't hand it all of the tools, or the interpretations of those tools, to do so.
Think about it. An AI as "data" on its own does not have sight, smell, taste, hearing, touch, balance, or proprioception. Understanding humans who lack several of these senses is difficult enough. AI has none, and we believe we understand AI because we created it? Someone who is blind and deaf and ends up learning how to interact with the world through sound and "sight" is a marvel to us, and we don't truly have a clue how they understand the world. True AI will understand the world with fewer senses and must learn to understand it with the senses we give it.
IMO, the birth of a true AI will begin with an "intellect" more closely aligned with a human who has exceptionally limited senses but is a savant of sorts at processing information and interacting with the environment it is provided. It'd be like trying to better understand autism in humans... but ####, we don't have time for that. That's why we are always trying to accelerate AI, and that's why we will always fail with AI: because the AI is bound by our understanding and what we have developed so far. We don't even fully understand how our own intelligence works, or how it interacts at the physical, chemical, electrical, etc. levels, so how would we even interpret certain important AI outputs when they transpire? That would be akin to trying to view ultraviolet light with the naked eye. It'll hit us, but we won't be able to process it, let alone derive meaning from it.
The Gary Marcus stuff is interesting. He seems to be on a similar wavelength, in that he seems to understand we need better tools to further AI development and understanding. Based on his books, he begins by trying to understand the human mind, then compartmentalizes: child behaviour and learning without context, then adult learning, where our way of learning is modified to include context.
Learning about AI is going to require that we learn a ton about idiotic things that many "intelligent" people will consider pointless because it's useless data/information. However, in the long run, we might find applications for it, in the same way we figured out how to detect and utilize invisible things like radiation and germs.
Honestly, it sounds like drivel: Why teach a blind man sign language and then ask what goes through his mind while learning it? What is the difference between teaching that to someone who wasn't always blind vs. someone who was? Why teach a deaf person beatboxing (words alone are already such a challenge)? And again, what is the difference between teaching someone who was always deaf vs. someone who became deaf later in life? What if we tried the same kind of thing to understand the inane ramblings of someone who is high, autistic, etc.? Does it benefit us? IMO, yes, if we can learn to interpret it in a functional way and then cross-apply those understandings to another scenario.
Again, I'm sorry for dumping the inane ramblings of a crazy man here. It's something I personally find thought-provoking, and I intend to experiment with some of these concepts ethically with humans in the future.
For me, I am going to start with something like: "Is it a good idea to teach children at least 5 languages, with those 5 being combinations of verbal/visual languages (ie: English + another), digital "dialect" languages (ie: coding languages), and physical languages (ie: sign language)?"
ie: English + Chinese + English derivatives like (Java + Go + ASL) = 5 languages
Would the child use all 5? Abandon a few to focus on the ones they prefer (ie: ASL + English)? Would the child blend all 5 (ie: Chinglish+)? Or would it just be a chaotic mess before all 5 are learned?
I honestly think "silly" things like these must be learned to a high academic level before we can begin to truly break through in AI.
Sorry for the dump of drivel. I hope you didn't read it.
|