Calgarypuck Forums - The Unofficial Calgary Flames Fan Community

Old 05-22-2024, 02:03 PM   #461
topfiverecords
Franchise Player
 
topfiverecords's Avatar
 
Join Date: Feb 2010
Location: Hyperbole Chamber
Exp:
Default

Quote:
Originally Posted by edslunch View Post
That's incredible. Remind me not to invite AI to a party though. "Hey, how about that lighting, and I'm intrigued by your ceiling"
Major obsession with the lighting. Like having a conversation with my mother. Yes mother, it's rainy here too.

Both AI are annoying and punchable.
topfiverecords is online now   Reply With Quote
Old 05-22-2024, 02:34 PM   #462
edslunch
Franchise Player
 
edslunch's Avatar
 
Join Date: Apr 2009
Exp:
Default

Quote:
Originally Posted by topfiverecords View Post
Major obsession with the lighting. Like having a conversation with my mother. Yes mother, it's rainy here too.

Both AI are annoying and punchable.

In other words, human in the worst way
edslunch is offline   Reply With Quote
Old 05-26-2024, 11:03 AM   #463
Wormius
Franchise Player
 
Wormius's Avatar
 
Join Date: Feb 2011
Location: Somewhere down the crazy river.
Exp:
Default

Quote:
Originally Posted by edslunch View Post
That's incredible. Remind me not to invite AI to a party though. "Hey, how about that lighting, and I'm intrigued by your ceiling"

If a sex doll could talk.
Wormius is offline   Reply With Quote
Old 05-29-2024, 10:13 AM   #464
edslunch
Franchise Player
 
edslunch's Avatar
 
Join Date: Apr 2009
Exp:
Default

AI will be many things and will have a profound impact on human civilization, but it will never be human. Its entire understanding of humanity is derived from whatever text it is given to read, whatever audio it is given to listen to, whatever video it is given to watch, and whatever interactions it has with users.

Imagine raising a child whose only exposure to other humans was via those media. How well would they navigate society? Even then, they would still have experienced human feelings, whether in response to their senses or arising unbidden from body chemistry.

AI has none of that, so it can only simulate what it's like to be human. It has millions of descriptions of boredom and love but has never felt either, so its expressions of them are inevitably going to be cliched. We've all met people who act like experts but have only read about the subject.

Ironically, while AI has never experienced social interaction and its mix of verbal and non-verbal cues, if trained to detect those cues it will do a much better job of recognizing them than many of us. Maybe there should be a new wearable AI assistant whose job is to tell you whether a person is into you or not, since many of us are totally oblivious.
edslunch is offline   Reply With Quote
Old 05-29-2024, 10:30 AM   #465
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

That's why the modern usage of the term AI is dumb: it isn't really an AI. So now we use the term AGI, artificial general intelligence, to describe human-like intelligence. We don't have that. We may or may not ever create it. It may happen next year. But since we have such trouble defining and even explaining consciousness, which is what AGI would require, I think it is a long way off.


There is a thought that emergent AGI (you basically build the box and let it evolve, learn, and "emerge") is one way of getting there, but no one knows how to even seed that, or what the box needs to be, or not be.
Fuzz is online now   Reply With Quote
Old 05-29-2024, 11:22 AM   #466
Firebot
First Line Centre
 
Join Date: Jul 2011
Exp:
Default

Quote:
Originally Posted by edslunch View Post
Imagine raising a child whose only exposure to other humans was via those media. How well would they navigate society? Even then, they would still have experienced human feelings, whether in response to their senses or arising unbidden from body chemistry.
https://en.wikipedia.org/wiki/Feral_child

You don't need to imagine; this can already happen to humans, and there are several documented cases. A child raised by dogs will only express dog-like behavior and not progress beyond it, despite the capacity to be much more. Religion exists because it was a way for our ancestors to explain what could not be explained; we didn't know any better. At the moment AI is limited by how LLMs function: it is not AGI, but something that predicts what a human is likely to say.

AGI, once achieved, would have the ability to reason like humans, versus predicting, which is what AI does today. It may arrive quicker than expected; this is not like travelling at the speed of light, which is impossible. It's expected to be achievable within our lifetimes.
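The "predicts what a human is likely to say" point can be shown in miniature with a toy sketch. This is a bigram counter over a handful of made-up words, nowhere near a real LLM, but the statistical flavour is the same: the model can only echo patterns present in its training data.

```python
# Toy illustration (not a real LLM): a bigram "language model" that
# predicts the most likely next word purely from counted training text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None  # no basis for a prediction outside the training data
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" -- seen twice after "the" in training
print(predict_next("dog"))   # None -- the model has never seen "dog"
```

Scale the counts up to billions of parameters over most of the internet and the predictions get eerily fluent, but the mechanism is still prediction from exposure, not reasoning.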

This was a very fun reverse Turing test experiment, in which AI agents running different LLMs try to figure out who among them is actually human.

Firebot is offline   Reply With Quote
Old 05-29-2024, 11:41 AM   #467
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

Quote:
Originally Posted by Firebot View Post
https://en.wikipedia.org/wiki/Feral_child

You don't need to imagine; this can already happen to humans, and there are several documented cases. A child raised by dogs will only express dog-like behavior and not progress beyond it, despite the capacity to be much more. Religion exists because it was a way for our ancestors to explain what could not be explained; we didn't know any better. At the moment AI is limited by how LLMs function: it is not AGI, but something that predicts what a human is likely to say.

AGI, once achieved, would have the ability to reason like humans, versus predicting, which is what AI does today. It may arrive quicker than expected; this is not like travelling at the speed of light, which is impossible. It's expected to be achievable within our lifetimes.

This was a very fun reverse Turing test experiment, in which AI agents running different LLMs try to figure out who among them is actually human.
Not really the same, though. We understand the theoretical limits of the speed of light; we do not understand consciousness. We do not know the steps that must be taken, and we have no plan to work from.
Fuzz is online now   Reply With Quote
Old 05-29-2024, 11:46 AM   #468
edslunch
Franchise Player
 
edslunch's Avatar
 
Join Date: Apr 2009
Exp:
Default

Quote:
Originally Posted by Firebot View Post
https://en.wikipedia.org/wiki/Feral_child

You don't need to imagine; this can already happen to humans, and there are several documented cases. A child raised by dogs will only express dog-like behavior and not progress beyond it, despite the capacity to be much more. Religion exists because it was a way for our ancestors to explain what could not be explained; we didn't know any better. At the moment AI is limited by how LLMs function: it is not AGI, but something that predicts what a human is likely to say.

AGI, once achieved, would have the ability to reason like humans, versus predicting, which is what AI does today. It may arrive quicker than expected; this is not like travelling at the speed of light, which is impossible. It's expected to be achievable within our lifetimes.

This was a very fun reverse Turing test experiment, in which AI agents running different LLMs try to figure out who among them is actually human.


Still, there is surely a difference between being taught about something vs actually experiencing it.
edslunch is offline   Reply With Quote
Old 05-29-2024, 12:42 PM   #469
Firebot
First Line Centre
 
Join Date: Jul 2011
Exp:
Default

Quote:
Originally Posted by Fuzz View Post
Not really the same, though. We understand the theoretical limits of the speed of light; we do not understand consciousness. We do not know the steps that must be taken, and we have no plan to work from.
Isn't that how most innovations occur? Nuclear fission wasn't thought to be possible until it was: we created barium out of uranium and at first had no idea why. It was an unexpected, accidental result. Likewise, it's theoretically possible to harness nuclear fusion as a viable energy source, but we have not discovered how yet.

We already know that achieving AGI with consciousness is theoretically possible, at least by our understanding of what consciousness means. We don't necessarily know what the end result will be, but we are ever closer to what we believe it may look like. LLMs are the closest thing we have ever built to what AGI may look like. Computing power is currently our biggest limitation to achieving the theoretical. The advancement of GPUs, and the discovery that they significantly accelerate training, was a huge step toward it.

Eventually we will figure it out.

https://en.wikipedia.org/wiki/Neural..._consciousness

Last edited by Firebot; 05-29-2024 at 12:45 PM.
Firebot is offline   Reply With Quote
Old 05-29-2024, 12:55 PM   #470
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

An LLM is like drawing a car from memory; AGI would be drawing one from imagination. I don't think that leap is as small as some believe.


Quote:
Originally Posted by Firebot View Post
We already know that achieving AGI with consciousness is theoretically possible

Curious to read more on that, because it's hard to see how it can be deemed theoretically possible when we don't even really understand what consciousness is, or how it emerges (or could be created). Any good articles?
Fuzz is online now   Reply With Quote
The Following 2 Users Say Thank You to Fuzz For This Useful Post:
Old 05-29-2024, 01:11 PM   #471
Firebot
First Line Centre
 
Join Date: Jul 2011
Exp:
Default

I mean, I linked the basis? That is ultimately the goal AGI is trying to reach.

You can also look at Koch's works and publications.

https://en.wikipedia.org/wiki/Christof_Koch

Also, LLMs were a huge step toward achieving the theoretical, to the point where many AI experts believe we are much closer.



https://research.aimultiple.com/arti...larity-timing/

Last edited by Firebot; 05-29-2024 at 01:13 PM.
Firebot is offline   Reply With Quote
Old 05-29-2024, 01:23 PM   #472
Firebot
First Line Centre
 
Join Date: Jul 2011
Exp:
Default

And as you mentioned, we don't really, truly understand what consciousness is (even though we have several theories on how to achieve it); it's one of those situations where we may not know we've hit it until we hit it, like the fission example.
Firebot is offline   Reply With Quote
Old 05-29-2024, 02:40 PM   #473
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

So from your Wikipedia link...
Quote:
Discovering and characterizing neural correlates does not offer a causal theory of consciousness that can explain how particular systems experience anything, the so-called hard problem of consciousness,[6] but understanding the NCC may be a step toward a causal theory. Most neurobiologists propose that the variables giving rise to consciousness are to be found at the neuronal level, governed by classical physics. There are theories proposed of quantum consciousness based on quantum mechanics.[7]

I guess to me this sounds like there is a long way to go to understanding it.
Fuzz is online now   Reply With Quote
The Following User Says Thank You to Fuzz For This Useful Post:
Old 05-29-2024, 02:51 PM   #474
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

But as long as it's not magic / spirit / etc., as long as it's based on a physical process, then in principle it should be possible to make something other than a brain that can experience consciousness... it's a software / hardware problem.
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
Old 05-29-2024, 02:54 PM   #475
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

Ya, I don't think it is magic. I just think we have a long way to go to even understand it, though a breakthrough could change things rapidly. It's just that it probably won't be figured out through evolutions of existing models. So first it has to be figured out, and then how we can reproduce it.
Fuzz is online now   Reply With Quote
The Following User Says Thank You to Fuzz For This Useful Post:
Old 05-30-2024, 08:39 AM   #476
Russic
Dances with Wolves
 
Russic's Avatar
 
Join Date: Jun 2006
Location: Section 304
Exp:
Default

For years we held the Turing test up as some incredible finish line. Then we passed it, and people merely decided in hindsight that it wasn't that impressive. I feel like AGI will be much the same; we'll just kick the can further down the road and call it ASI.

I feel we're overly guilty of pumping our own tires. We are all trained by observing other humans and consuming their creations. If you were raised on the other side of the world, or even exactly where you are but in a different decade, you'd be a completely different person. Your training data would be different.

I find it interesting that the benchmark seems to be for it to come up with something novel and unique. Meanwhile... can we? Can we come up with anything that isn't just a remix of what's come before? Does anyone create anything that wasn't inspired by something else?
Russic is offline   Reply With Quote
Old 05-30-2024, 08:51 AM   #477
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

I don't consider "novel and unique" anything special; computers can do that with a random number generator. I get what you are saying, but what we have now looks to me more like mimicry.


To, say, drive a vehicle the way a human would is only possible by creating something that has all the capabilities and functions of a human. Do you need feelings to drive a vehicle? No. But to drive it like a human would, I'd argue you do, and that comes into play in how we react to others and the choices we may have to make.


And does AGI require replicating the functions of our lizard brain? The faster-reacting part, taking in only vital signals to make split-second decisions, far faster than our more evolved brain does? What features of a brain are required to be human-like?
Fuzz is online now   Reply With Quote
Old 05-30-2024, 09:04 AM   #478
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

Not sure I'd say LLMs have passed the Turing test. It really depends on the kind of question asked; in some cases, sure, it sounds very good (and it should, since it's just taking what real people have said and re-stating it), but in other cases it's clear that there's zero comprehension going on and it fails laughably.

Not even sure the Turing test is an applicable test for LLM AIs, since again it's just regurgitating what people have said... so the test is testing what humans say, not what the AI "thinks". But I guess that could be said of humans to some degree.

And it's not a bad thing that better tests become relevant as things advance... the Turing test isn't some absolute thing, it's just an idea from an era with zero experience of what was to come.

That said, I kind of agree with the point about the benchmark being to come up with something novel and unique. First, I disagree with that being the only benchmark, but it is something humans are capable of, just maybe not to the degree we would probably like to think of ourselves. Still, some significant portion of what we say and do and create is just an extension or remix of something previous. Creators are inspired by what they consume, but does "inspire" just mean some degree of taking something and remaking it in a different way with some different influence? Yet new stuff does occur.

Demonstrating understanding would be a sign of a true AI versus an LLM.
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
The Following User Says Thank You to photon For This Useful Post:
Old 05-30-2024, 09:07 AM   #479
Russic
Dances with Wolves
 
Russic's Avatar
 
Join Date: Jun 2006
Location: Section 304
Exp:
Default

Quote:
Originally Posted by Fuzz View Post
I don't consider "novel and unique" anything special; computers can do that with a random number generator. I get what you are saying, but what we have now looks to me more like mimicry.


To, say, drive a vehicle the way a human would is only possible by creating something that has all the capabilities and functions of a human. Do you need feelings to drive a vehicle? No. But to drive it like a human would, I'd argue you do, and that comes into play in how we react to others and the choices we may have to make.


And does AGI require replicating the functions of our lizard brain? The faster-reacting part, taking in only vital signals to make split-second decisions, far faster than our more evolved brain does? What features of a brain are required to be human-like?
I suppose my question comes back: is what we do now not mimicry? It's not good enough to captivate us yet, but I don't see how we don't hit that point within a year or two.

Is driving a vehicle like a human optimal? We kill a ****-ton of people each year with vehicles. I know the common example is "what if you have to kill 3 nuns or a child," but let's cut to the chase: not only will that choice never be presented to you, but if it is, your lizard brain will take over, and at that point you won't be the one driving anyway (you'll probably panic and kill all 4 of them). We've got a long way to go on self-driving and it's not there yet, but again, it becomes a time problem.

As for the AGI question, I don't think it needs to have all the capabilities of a human brain for AGI. As far as I know, AGI is just "like a human" or "average human," is that correct? Perhaps it gets to the conclusions differently, but I'd say achieving it doesn't rely on how it does the trick.

Quote:
Originally Posted by photon View Post
Not sure I'd say LLMs have passed the Turing test. It really depends on the kind of question asked; in some cases, sure, it sounds very good (and it should, since it's just taking what real people have said and re-stating it), but in other cases it's clear that there's zero comprehension going on and it fails laughably.

Not even sure the Turing test is an applicable test for LLM AIs, since again it's just regurgitating what people have said... so the test is testing what humans say, not what the AI "thinks". But I guess that could be said of humans to some degree.

And it's not a bad thing that better tests become relevant as things advance... the Turing test isn't some absolute thing, it's just an idea from an era with zero experience of what was to come.

That said, I kind of agree with the point about the benchmark being to come up with something novel and unique. First, I disagree with that being the only benchmark, but it is something humans are capable of, just maybe not to the degree we would probably like to think of ourselves. Still, some significant portion of what we say and do and create is just an extension or remix of something previous. Creators are inspired by what they consume, but does "inspire" just mean some degree of taking something and remaking it in a different way with some different influence? Yet new stuff does occur.

Demonstrating understanding would be a sign of a true AI versus an LLM.
A quick rip around Facebook tells me everybody over the age of 55 is failing the Turing Test at a spectacular rate.

As for new stuff... does there exist an example of something novel or unique that came from nothing?

Last edited by Russic; 05-30-2024 at 09:15 AM.
Russic is offline   Reply With Quote
The Following User Says Thank You to Russic For This Useful Post:
Old 05-30-2024, 09:15 AM   #480
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

Quote:
Originally Posted by Russic View Post
I suppose my question comes back: is what we do now not mimicry?
Probably a lot of it is mimicry; I mean, what is society except how we all interact with each other and the rules about what we should all do in very similar ways?

But it's definitely more than mimicry. General intelligence can do simple things like generalizations and abstractions that let us at least try to answer a simple question like "what's in my pocket", because we understand what pockets are, what size of things can fit in pockets, whether the person owns a car, etc. An LLM will just look at its training data to find out what previous answers to that were.

General intelligence can infer or derive based on models while LLMs can only regurgitate what they've been trained with.
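The pocket example can be sketched as a toy contrast. This is a caricature, not a real AI system, and every item name and size below is a made-up illustration value: one function can only echo memorized answers, the other derives an answer from a crude model of the world, even for items it has never seen.

```python
# Toy contrast (a caricature, not a real AI system): memorized answers
# versus a simple world model. All names and sizes are made up.

# "Training data": answers to questions that have been seen before.
seen_answers = {"keys": True, "phone": True}

def lookup(item):
    """Regurgitation: can only return an answer it has already seen."""
    return seen_answers.get(item)  # unseen item -> None, no answer at all

# A crude model of the world: item sizes and a pocket size, in cm.
ITEM_SIZE_CM = {"keys": 8, "phone": 15, "umbrella": 80}
POCKET_CM = 18

def infer(item):
    """Generalization: derives an answer from the model, even for unseen items."""
    return ITEM_SIZE_CM[item] < POCKET_CM

print(lookup("umbrella"))  # None -- never seen, nothing to echo
print(infer("umbrella"))   # False -- derived from the size constraint
```

The interesting open question is whether a big enough statistical model ends up containing something like the second function implicitly, or whether it is forever stuck being the first.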
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote

Tags
they will overtake us


