Calgarypuck Forums - The Unofficial Calgary Flames Fan Community

Old 03-26-2016, 01:06 PM   #61
psyang
Powerplay Quarterback
 
Join Date: Jan 2010
Exp:
Default

Quote:
Originally Posted by Enoch Root View Post
However, unpredictability exists within other things, such as random mutations in evolution. They don't negate the scientific method.
Unpredictability in the purest sense does negate the scientific method. In a closed system, the same test under the same conditions has to have the same results. Otherwise, a theist might use this sort of unpredictability as an argument for God.

But I think I know what you are saying - that it's impossible to absolutely control a system so that it is considered closed and/or the conditions are identical.

Quote:
Originally Posted by Enoch Root View Post
Trying to pull some of this back to, you know, AI...

Kurzweil, in 'The Singularity Is Near', does a good job of describing how, despite the fact that neurons are constantly dying and being replaced, memories remain. And it doesn't take long until none of the neurons that contain a memory were around at the time that the memory actually occurred.

The interesting thing about that is that it poses the question: can memories be transferred (specifically to an AI)?

And of course, if at some point they are, it would be a rather large blow to the 'free thought and free will are outside the chemical makeup of our brains' argument.
I don't see the connection between memories and free will. Am I missing something?
psyang is offline   Reply With Quote
Old 03-26-2016, 01:22 PM   #62
blueski
Scoring Winger
 
Join Date: Dec 2014
Exp:
Default

The thing I find interesting about this thread is that there is very little doubt that AI will compete with and even replace a lot of functions we do. Even a year ago, I'd bet half the thread would have found this idea ludicrous.
blueski is offline   Reply With Quote
Old 03-26-2016, 01:24 PM   #63
psyang
Powerplay Quarterback
 
Join Date: Jan 2010
Exp:
Default

Quote:
Originally Posted by CorsiHockeyLeague View Post
God or a soul could conceivably be empirically tested. Those are just words to label things that might or might not exist, but for which there is presently no evidence.
Agreed, though I'd add "or no present capability to detect that evidence".

I remember an experiment done a few years ago where eye tracking was used to scramble all text on a computer monitor except the area the eye was actually focused on. The reader read the screen of text fine, and did not realize other parts of the screen were scrambled. A third-party observer, however, could not read anything on the screen - it was all gibberish.
psyang is offline   Reply With Quote
Old 03-26-2016, 01:44 PM   #64
psyang
Powerplay Quarterback
 
Join Date: Jan 2010
Exp:
Default

Quote:
Originally Posted by blueski View Post
The thing I find interesting about this thread is that there is very little doubt that AI will compete with and even replace a lot of functions we do. Even a year ago, I'd bet half the thread would have found this idea ludicrous.
I did a quick search to see how long people thought it would take an AI to win at Go after Deep Blue defeated Kasparov. Deep Blue won in 1997 (it won its first chess game against Kasparov the year before). I thought I remembered reading 50-100 years. However, this article from 2007 said Go would be won within a decade. That's pretty prescient.

I think AI taking over practical applications really became more mainstream with Google's self-driving car experiments. That level of complexity and unpredictability was thought to be beyond AI. But as people saw the reality of a self-driving car (albeit in good, California conditions), I think they could see that not only had AI advanced, but maybe more importantly, that people, in common tasks, are more or less pattern matching machines. See car on left changing lanes, give it room. See unpredictable cyclist, slow down. Red light, stop.
psyang is offline   Reply With Quote
The Following User Says Thank You to psyang For This Useful Post:
Old 03-26-2016, 03:37 PM   #65
peter12
Franchise Player
 
peter12's Avatar
 
Join Date: Jul 2002
Exp:
Default

Quote:
I agree, and I thank you for bringing up the point. However, as reasoning creatures, we are put in an interesting situation. Rationally, we can see the implications - everything we will do has already been determined. Concepts of morality and justice cannot have the same weight (or maybe, same definition?) as in a non-deterministic universe. Does not the materialist have to then live slightly irrationally, shut off that part of their reasoning, in order to function in society? I'm not trying to argue against a materialist worldview, but not subscribing to that worldview myself, I'm genuinely interested.
Not only reasoning creatures, but more importantly, speaking creatures. Aristotle said that we are the political animal - that is, the animal concerned with creating fluid moral hierarchies dependent upon many factors such as wisdom and the pretense of wisdom.

As an aside, a purely materialist perspective that the cosmos is somehow predetermined has its roots in Calvinism of all places. In fact, the materialism that CHL speaks so confidently of would make absolute sense for a Christian who believes that God's will has determined every outcome in advance.

However, I do not subscribe to that view either - from either a scientific or a religious standpoint. I think that there is limited free will, at the very least in how we describe our own narratives, and that that can have a profound effect on how we act in the world.

We can pretend, like CHL, to be quite confident about the brain's structure, but we have made very little progress in understanding that structure or how to manipulate it, let alone completely copy it. So yes, it is theoretically possible given this perspective, but it is a problem that we don't appear to have a hope of solving anytime soon.

We can say that the mind, like the body, like the cosmos is purely mechanical, but in doing so, we have to use metaphor, which is purely non-mechanical. I feel that the public discussion of AI glosses over these purely human inclinations in the search for a more simple, and thus, crudely understandable version of ourselves that we want to believe in. If that makes any sense at all.
peter12 is offline   Reply With Quote
Old 03-26-2016, 10:46 PM   #66
jayswin
Celebrated Square Root Day
 
jayswin's Avatar
 
Join Date: Mar 2006
Exp:
Default

Quote:
Originally Posted by CorsiHockeyLeague View Post
Since we're apparently feeling like blowing minds anyway, shall we shift the discussion to the likely existence of parallel universes now?

Yes please, as a stupid observer who wouldn't be able to contribute I'd love to follow along. Watched an excellent documentary on this the other day and would love to see some discussion on it here.

Would you be down for starting a thread on it? I'm sure it'd get some traction.

jayswin is offline   Reply With Quote
Old 03-27-2016, 07:28 AM   #67
JohnnyB
Franchise Player
 
JohnnyB's Avatar
 
Join Date: Mar 2006
Location: Shanghai
Exp:
Default

I hope this thread can ultimately get along without being pulled down into old and well-worn rabbit holes of philosophical and religious discussion of consciousness. AI is a good topic for a thread and should have plenty of worthy ideas and news to discuss without reference to those debates. I would certainly find the thread more interesting that way.

For those just getting interested in the topic, Nick Bostrom's 2014 book Superintelligence is very informative. Bostrom is the director of the Future of Humanity Institute at Oxford. He is a philosopher, but in this book he really addresses the practicalities of achieving different forms of artificial superintelligence.
__________________

"If stupidity got us into this mess, then why can't it get us out?"
JohnnyB is offline   Reply With Quote
The Following User Says Thank You to JohnnyB For This Useful Post:
Old 03-27-2016, 10:14 AM   #68
Duruss
Scoring Winger
 
Duruss's Avatar
 
Join Date: Nov 2012
Location: Sundre
Exp:
Default

The entire twitterbot event is no surprise; Twitter has well-known issues with trolls.

I believe we are starting to see the rise of animalistic A.I. that function on basic preprogrammed instinct with limited learning ability.

What I find interesting is that as we strive to create intelligences equal to our own, we may find that they also have to face psychological issues. We may see A.I. schizophrenia, PTSD, or narcissism.
Duruss is offline   Reply With Quote
Old 03-27-2016, 11:20 AM   #69
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

Looks like a dramatic leap in processing speed has just been achieved for neural nets:
https://hardware.slashdot.org/story/...l-net-learning
Quote:
Problems that currently require days of training on a datacenter-size cluster with thousands of machines can be addressed within hours on a single RPU accelerator.
Fuzz is online now   Reply With Quote
The Following 3 Users Say Thank You to Fuzz For This Useful Post:
Old 03-27-2016, 07:51 PM   #70
GGG
Franchise Player
 
GGG's Avatar
 
Join Date: Aug 2008
Exp:
Default

Quote:
Originally Posted by psyang View Post
Unpredictability in the purest sense does negate the scientific method. In a closed system, the same test under the same conditions has to have the same results. Otherwise, a theist might use this sort of unpredictability as an argument for God.

But I think I know what you are saying - that it's impossible to absolutely control a system so that it is considered closed and/or the conditions are identical.



I don't see the connection between memories and free will. Am I missing something?

The same test under the same conditions may not produce the same results when considering quantum physics, as the initial state of any system cannot be 100% known.

I think a theist could wrangle God into quantum physics.
GGG is offline   Reply With Quote
Old 03-28-2016, 10:27 AM   #71
psyang
Powerplay Quarterback
 
Join Date: Jan 2010
Exp:
Default

Ok, switching gears a little. Apologies in advance for the long post.

I spent some time trying to learn more about where deep learning/multi-level neural networks are, and found these talks by Geoff Hinton of the U of Toronto, given to Google. They are still fairly technical, but you can follow the ideas without having to understand the technical theory/mathematics. I'll try to summarize (in my own limited understanding of the talks) and give the times of neat parts of the video, since the videos are each an hour long.

The first video gives a foundation of what current neural nets are like and how they are built up. He also gives some great examples/demos.


Summary
In the past, neural nets were often hard-coded, or training data was labelled to help the net know what it was learning. This resulted in a mapping from data (say a picture of a car) to an abstraction of the data called "features" (say that the car has wheels and a body).

There was also a technique (back propagation) where the error between what the neural net thinks an object is, and what it really is, gets fed back into the net to help fine-tune the results.

Unfortunately, this type of net was very limited - it didn't really learn on its own (it was told what the image was), and couldn't distinguish well between, say, other cars that didn't look like the training data cars. It was also very slow to train, and the back propagation ended up not helping much at all.

In the new way of doing things, training data is unlabelled (the net doesn't know what it is looking at during training), so the features abstracted from the training data are unbiased. Those features then get fed as training data into a second layer of features, and so on, building up multiple layers of features, each of which improves on the previous. Building up the layers was also vastly sped up by not requiring hundreds of iterations (the features are built as a kind of feedback loop with the training data). Finally, only at the very end are labels added to the "things" the neural net can distinguish (though he says later even this isn't necessary), and back propagation is applied at the end: because the feature layers are "smarter", the fine-tuning of the back propagation actually works much better.
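The greedy, layer-by-layer idea can be sketched in a few lines. This is my own toy illustration, not Hinton's actual code (he uses restricted Boltzmann machines; this uses a simple tied-weight autoencoder as a stand-in): each layer learns, without labels, to reconstruct its input, and its features become the training data for the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_layer(X, n_hidden, lr=0.05, epochs=300):
    """One unsupervised feature layer with tied weights: learn to
    reconstruct X from a hidden code. No labels are involved."""
    W = rng.normal(0.0, 0.1, (X.shape[1], n_hidden))
    b = np.zeros(n_hidden)      # hidden bias
    c = np.zeros(X.shape[1])    # visible bias
    for _ in range(epochs):
        H = np.tanh(X @ W + b)            # encode: data -> features
        R = H @ W.T + c                   # decode: features -> data
        err = R - X                       # reconstruction error
        dA = (err @ W) * (1.0 - H ** 2)   # gradient through tanh
        W -= lr * (err.T @ H + X.T @ dA) / len(X)
        b -= lr * dA.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

def encode(X, W, b):
    return np.tanh(X @ W + b)

# Toy unlabelled data: two noisy clusters the layers never see labels for.
X = np.vstack([rng.normal(0, 0.1, (50, 8)) + 1.0,
               rng.normal(0, 0.1, (50, 8)) - 1.0])

# Greedy stacking: layer 1's features are layer 2's training data.
W1, b1 = train_layer(X, 6)
H1 = encode(X, W1, b1)
W2, b2 = train_layer(H1, 4)
H2 = encode(H1, W2, b2)
print(H2.shape)  # (100, 4): each input reduced to 4 learned features
```

In a real system a supervised fine-tuning pass with back propagation would follow, which is exactly the step he says works much better once the feature layers are "smarter".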

Demos
The demo of his neural net recognizing numerical digits (and also generating its own mental picture of digits) starts at 21:35.

The results of applying the neural net to a million documents to try to classify them into different topics start at 33:18. Here, he compares two methods: the older way of classifying documents (PCA), which looks for keywords in documents, and his neural net. The documents are mapped onto a 2-D plane to provide a visual representation.
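For context, the PCA baseline he compares against can be sketched directly. This is a hedged stand-in: real document data would be word counts over a large vocabulary, so here two synthetic "topics" favour different halves of a 20-word vocabulary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for word-count vectors: 40 documents per topic,
# each topic favouring a different half of a 20-word vocabulary.
topic_a = np.hstack([rng.poisson(3.0, (40, 10)), rng.poisson(0.3, (40, 10))])
topic_b = np.hstack([rng.poisson(0.3, (40, 10)), rng.poisson(3.0, (40, 10))])
docs = np.vstack([topic_a, topic_b]).astype(float)

# PCA: centre the counts, then project onto the top two principal directions.
centred = docs - docs.mean(axis=0)
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
coords2d = centred @ Vt[:2].T   # each document becomes an (x, y) point
print(coords2d.shape)  # (80, 2)
```

Plotting `coords2d` would show the two topics as separate clusters - the kind of 2-D document map being compared in the video.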

The second video of Geoff is an update to the first one, given 3 years later:


Summary
The big new thing here is the ability to combine multiple features to help improve the neural net's capabilities. For instance, he gives the example of telling the neural net to draw a square at a particular location/orientation (15:00). The net could simply specify the exact vertex points which define a square, or it can break the problem down into two more general parts: 1) a square has same-length edges and corners, and 2) the relationship between edges and corners (edges are collinear with corners). Both operations are easier to manage in the net, and together they give the same result.

Lots of fascinating stuff where this change allows the net to work with motion (i.e. to recognize different types of motion, then predict or model that motion on its own), and then to try to distinguish objects within images.

Demos
The motion demo is at 34:25, and shows how the neural net was trained on different types of walking patterns, and then can make a stick figure walk in those same patterns.

A really surprising result is at 43:45. To determine what an object is within a picture, the net is trained on a number of images and taught to focus on the mean and covariance of the pixels (which give the average colours of pixels in the image, and the "sameness" of pixels within the image, respectively). This results in a number of filters based on mean and covariance which can be applied to an image to help determine its content.

The cool part is at 44:10 where a topographic map of the neural net's covariance filters for an image is shown. Apparently that map models what is seen in a monkey's brain. Some good evidence that the neural net is close to modelling how a brain actually stores information to understand images.
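The statistics he describes can be illustrated with a small sketch. This is my own simplification, assuming a grayscale image stored as a 2-D array; the model in the talk learns filters over statistics like these rather than just computing them.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((32, 32))    # stand-in for a grayscale image

def patch_stats(img, size=8):
    """Split the image into non-overlapping size x size patches and
    return each patch's mean intensity and pixel covariance matrix."""
    means, covs = [], []
    for r in range(0, img.shape[0], size):
        for c in range(0, img.shape[1], size):
            patch = img[r:r + size, c:c + size]
            means.append(patch.mean())               # average brightness
            covs.append(np.cov(patch, rowvar=False)) # how columns co-vary
    return np.array(means), np.array(covs)

means, covs = patch_stats(image)
print(means.shape, covs.shape)  # (16,) (16, 8, 8)
```

The mean captures average colour, and the covariance captures the "sameness" structure between nearby pixels - the raw material for the covariance filters whose topographic map resembles the monkey brain data.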
psyang is offline   Reply With Quote
The Following 3 Users Say Thank You to psyang For This Useful Post:
Old 03-28-2016, 10:33 AM   #72
psyang
Powerplay Quarterback
 
Join Date: Jan 2010
Exp:
Default

Quote:
Originally Posted by JohnnyB View Post
I hope this thread can ultimately get along without being pulled down into old and well-worn rabbit holes of philosophical and religious discussion of consciousness. AI is a good topic for a thread and should have plenty of worthy ideas and news to discuss without reference to those debates. I would certainly find the thread more interesting that way.

For those just getting interested in the topic, Nick Bostrom's 2014 book Superintelligence is very informative. Bostrom is the director of the Future of Humanity Institute at Oxford. He is a philosopher, but in this book he really addresses the practicalities of achieving different forms of artificial superintelligence.
Well, AI will cause a lot of different rabbit holes.

Can you give a brief overview of Bostrom's book? It's unlikely I'll get the book to read for myself in the near future, but am interested in what he has to say.
psyang is offline   Reply With Quote
Old 03-28-2016, 10:43 AM   #73
peter12
Franchise Player
 
peter12's Avatar
 
Join Date: Jul 2002
Exp:
Default

Yeah, JohnnyB. The entire concept of AI is one gigantic philosophical/religious rabbit hole. It is literally, as I have tried to emphasize, a technology we do not understand.
peter12 is offline   Reply With Quote
Old 03-28-2016, 11:16 AM   #74
psyang
Powerplay Quarterback
 
Join Date: Jan 2010
Exp:
Default

Quote:
Originally Posted by peter12 View Post
Yeah, JohnnyB. The entire concept of AI is one gigantic philosophical/religious rabbit hole. It is literally, as I have tried to emphasize, a technology we do not understand.
That's part of why I sought out the videos above - to see how accurate your assertion is. While a number of the steps are proven mathematically, I was still a little surprised to hear the speaker say a couple of times something along the lines of "and we know this works because the results we get are good".

I suppose this is pretty common in mathematics - there are a lot of mathematical conjectures that most assume are true, but for which there is no proof.

But there is a big difference between not understanding something, and not being able to understand something. And the only way to know where we lie is in trying.
psyang is offline   Reply With Quote
Old 03-28-2016, 11:22 AM   #75
peter12
Franchise Player
 
peter12's Avatar
 
Join Date: Jul 2002
Exp:
Default

Quote:
Originally Posted by psyang View Post
That's part of why I sought out the videos above - to see how accurate your assertion is. While a number of the steps are proven mathematically, I was still a little surprised to hear the speaker say a couple of times something along the lines of "and we know this works because the results we get are good".

I suppose this is pretty common in mathematics - there are a lot of mathematical conjectures that most assume are true, but for which there is no proof.

But there is a big difference between not understanding something, and not being able to understand something. And the only way to know where we lie is in trying.
Isn't it becoming a fairly common occurrence in AI research to have results that appear successful but are fairly meaningless to the researcher? I know that Watson would do things routinely that completely befuddled the scientists working on his team.
peter12 is offline   Reply With Quote
Old 03-28-2016, 11:26 AM   #76
peter12
Franchise Player
 
peter12's Avatar
 
Join Date: Jul 2002
Exp:
Default

psyang, I think this discussion will inevitably lead to "complexity" theory. That is, the idea that once we or computers master the principles underlying complex systems, then we can predict outcomes without really understanding why or how.

My favourite contrarian, John Horgan, has a number of good articles on this subject.

http://blogs.scientificamerican.com/...er-complexity/
peter12 is offline   Reply With Quote
Old 03-28-2016, 12:19 PM   #77
jammies
Basement Chicken Choker
 
jammies's Avatar
 
Join Date: Jan 2007
Location: In a land without pants, or war, or want. But mostly we care about the pants.
Exp:
Default

Artificial consciousness is a different, albeit closely related, discussion than artificial intelligence. As I think I've said before, it could be that you can create a human-level (or better) intelligence without consciousness, nor is it necessarily so that you would *want* consciousness in your AI, even if possible - conscious AI forced to serve is slavery.

Free will is another tangential subject: it's still conjecture that humans have free will, so expecting to prove it exists in an AI seems pointless. Regardless of the philosophical arguments for and against, the only real way to know would be to continue development and experimentation, as there is nothing like proving an argument right or wrong by demonstration.

I find many of the authors, like Kurzweil, who speak on this subject are as mystical when talking about our progress in this area as the most medieval Catholic apologists and quoters of Aristotle among us. We just don't know enough yet to do more than wave our hands in the air and speak in generalities. Someone needs to come up with a theory of intelligence - not necessarily mathematical, but more like the theory of evolution - that explains enough about its processes and laws to make meaningful predictions and assertions possible.
__________________
Better educated sadness than oblivious joy.
jammies is offline   Reply With Quote
The Following 3 Users Say Thank You to jammies For This Useful Post:
Old 03-30-2016, 10:55 AM   #78
troutman
Unfrozen Caveman Lawyer
 
troutman's Avatar
 
Join Date: Oct 2002
Location: Winebar Kensington
Exp:
Default

Microsoft’s racist chatbot returns with drug-smoking Twitter meltdown

Short-lived return saw Tay tweet about smoking drugs in front of the police before suffering a meltdown and being taken offline

http://www.theguardian.com/technolog...-twitter-drugs

Tay is made in the image of a teenage girl and is designed to interact with millennials to improve its conversational skills through machine-learning. Sadly it was vulnerable to suggestive tweets, prompting unsavoury responses.

This isn’t the first time Microsoft has launched public-facing AI chatbots. Its Chinese XiaoIce chatbot successfully interacts with more than 40 million people across Twitter, Line, Weibo and other sites, but the company’s experiment targeting 18- to 24-year-olds in the US on Twitter has resulted in a completely different animal.

Microsoft 'deeply sorry' for racist and sexist tweets by AI chatbot

http://www.theguardian.com/technolog...-by-ai-chatbot
__________________
https://www.mergenlaw.com/
http://cjsw.com/program/fossil-records/
twitter/instagram @troutman1966

Last edited by troutman; 03-30-2016 at 10:57 AM.
troutman is offline   Reply With Quote
Old 04-07-2016, 03:45 PM   #79
Dan02
Franchise Player
 
Dan02's Avatar
 
Join Date: Jun 2004
Location: Calgary
Exp:
Default

http://www.iflscience.com/technology...-across-europe

Dozens of self-driving semis just drove across Europe.
Dan02 is offline   Reply With Quote
Old 04-07-2016, 03:54 PM   #80
calumniate
Franchise Player
 
calumniate's Avatar
 
Join Date: Feb 2007
Location: A small painted room
Exp:
Default

A pretty light / funny interview touching on singularity:
http://duncantrussell.com/go-with-aa...iscussion_id=0

(Interview gets rolling 20 minutes in)

Pretty interesting for sure - a good example of the singularity idea, where 'deep' learning produces findings and results but the original programmers have no idea how they were accomplished.

Last edited by calumniate; 04-07-2016 at 04:00 PM.
calumniate is online now   Reply With Quote
