Calgarypuck Forums - The Unofficial Calgary Flames Fan Community

Old 06-20-2013, 10:12 AM   #1
PsYcNeT
Franchise Player
 
PsYcNeT's Avatar
 
Join Date: May 2004
Location: Marseilles Of The Prairies
Exp:
Default Google Buys Quantum Computer For NASA AI Lab, Homeopaths Lose Their ####

So a few weeks ago Google bought a quantum computer, pretty awesome stuff:

http://singularityhub.com/2013/06/05...e-lab-at-nasa/

Quote:
According to their blog, Google has already developed some quantum computing algorithms. Because the D-Wave is so good at specific problems, they think some classical/quantum combination may prove ideal. Perhaps future quantum chips will provide the kind of power boost specialized graphics processing units (GPUs) have recently provided supercomputers. Or maybe the “neocortex” of future AIs will be comprised of a quantum chip, whereas the rest will remain classically driven.

There’s yet much work to be done before these machines find practical applications, but Google thinks “quantum machine learning may provide the most creative problem-solving process under the known laws of physics.”
Pretty awesome stuff, right?

Well, I guess all the crazy crystal-healing weirdos of the internet are absolutely going balls-crazy over it.

http://www.naturalnews.com/040859_Sk...e_Systems.html

Quote:
Beware of genius scientists who lack wisdom for humanity

Ladizinsky is, by any measure, a person of extremely high intelligence. Click here to see a fascinating interview with him. But like many such people throughout history, Ladizinsky fails to have the foresight to recognize the full implications of the technology he's building. And those implications are so far-reaching and dangerous that they may actually lead to the destruction of humanity (see below).
This situation is, of course, hilariously sad.
__________________

Quote:
Originally Posted by MrMastodonFarm View Post
Settle down there, Temple Grandin.
PsYcNeT is offline   Reply With Quote
Old 06-20-2013, 10:22 AM   #2
Ashartus
First Line Centre
 
Join Date: Mar 2007
Location: Calgary
Exp:
Default

Someone watched The Terminator a few too many times, I think. Besides, the AI he's worried about can't really do a worse job of running things than we have.
Ashartus is offline   Reply With Quote
Old 06-20-2013, 11:19 AM   #3
CaptainCrunch
Norm!
 
CaptainCrunch's Avatar
 
Join Date: Jun 2002
Exp:
Default

Holy crap, reading that makes me pray for the invention of Skynet.

I'm seeing the monetary potential of inventing a plasma rifle in the 40-watt range
__________________
My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!
CaptainCrunch is offline   Reply With Quote
Old 06-20-2013, 11:24 AM   #4
corporatejay
Franchise Player
 
corporatejay's Avatar
 
Join Date: Jul 2005
Exp:
Default

Amazingly, I just quoted them in the GMO thread. Two Natural News links in one day.
__________________
corporatejay is offline   Reply With Quote
Old 06-20-2013, 11:27 AM   #5
CaptainCrunch
Norm!
 
CaptainCrunch's Avatar
 
Join Date: Jun 2002
Exp:
Default

Quote:
Originally Posted by Ashartus View Post
Someone watched the Terminator a few too many times I think. Besides, this AI he's worried about can't really do a worse job running things than we have.
People forget that AI can't rewrite its basic code premise: it can learn and apply the rules within the system, but it can't rewrite the defined rules.

So if one of the basic rules is that you cannot harm or kill humans, it can't overwrite that rule.

Plus, if it doesn't have knowledge of psychiatry, it can't diagnose people as threats to its existence.

So if a person points a gun at a terminator, the terminator's defense protocol, which is "destroy the threat", can't override the "you cannot harm or kill humans" rule, so its response will be to vacate the area or retreat from the threat. If a fricken cat pointed a gun at the terminator, that's a different story, because a whole "cats suck and are evil" subroutine has been written into the core programming, along with nothing that protects cats.

A computer doesn't understand or have the ability to diagnose the human race as a threat unless you give it that ability at a base level.


Now, this is taken from an AI programming series of classes that I had to take in university years ago.

The article showed that the guy's level of knowledge was taken from Terminator and Alien movies.
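To put the rule-layer idea in code: here's a toy sketch (hypothetical names, Python, not from any real robotics stack) where the learned part proposes actions and a fixed layer it has no write access to gets the final veto.

```python
# Toy sketch: a learned policy proposes actions, but a fixed rule layer
# (which the learner cannot modify) vetoes anything breaking a base rule.

HARD_RULES = (
    lambda action: action.get("harms_human", False) is False,  # never harm humans
)

def safe_act(proposed_action):
    """Return the proposed action if every hard rule passes, else retreat."""
    if all(rule(proposed_action) for rule in HARD_RULES):
        return proposed_action
    return {"name": "retreat"}  # the fallback described above

# A person points a gun at the terminator: "destroy the threat" gets vetoed.
print(safe_act({"name": "destroy_threat", "harms_human": True}))  # prints {'name': 'retreat'}
print(safe_act({"name": "patrol", "harms_human": False}))         # prints {'name': 'patrol', 'harms_human': False}
```

The point of the sketch is just that the learning part never gets a code path that mutates HARD_RULES.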
__________________
My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!
CaptainCrunch is offline   Reply With Quote
Old 06-20-2013, 11:30 AM   #6
GreatWhiteEbola
First Line Centre
 
GreatWhiteEbola's Avatar
 
Join Date: Aug 2006
Location: Calgary, Alberta
Exp:
Default

Quote:
Originally Posted by CaptainCrunch View Post
People forget that AI can't rewrite its basic code premise, it can learn and apply to the rules within the system, but it can't rewrite the defined rules.

so if one of the basic rules is you cannot harm or kill himans, it can't overwrite that rule.

Plus if it doesn't have knowledge of psychiatry it can't diagnose people as threats to its existance.

So if a person points a gun at a terminator, the terminators defense protocols which is destroy the threat, can't over write the you cannot harm or kill human's so its response then will be to vacate the area or retreat from the threat. If a fricken cat pointed a gun at the terminator, that's a different story because a whole Cat's suck and are evil subroutine has been written into the core programming along with nothing that protects cats.

A computer doesn't understand or have the ability to diagnose the human race as a threat unless you give it the ability to do so at a base level


Now this is taken from an AI programming series of classes that I had to take in university years ago.

The article showed that the guys level of knowledge was taken from Terminator and Alien movies.
The correct spelling is hymen.
__________________

GreatWhiteEbola is offline   Reply With Quote
The Following 5 Users Say Thank You to GreatWhiteEbola For This Useful Post:
Old 06-20-2013, 11:31 AM   #7
CaptainCrunch
Norm!
 
CaptainCrunch's Avatar
 
Join Date: Jun 2002
Exp:
Default

As an add-on, one of the oldest debates around AI involves something like Asimov's rules of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Let's say that they finally built the ED-209 from Robocop, and it came across a hostage situation where a human with a gun was threatening a hostage.


Would the AI be able to rewrite the three rules above to save the hostage from the hostage taker?


Nope, it would either have to retreat from the situation, find a compromise solution, or blow its own head off.
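The deadlock is easy to show in code. A toy sketch (hypothetical field names, not from anywhere official) treating the three laws as ordered filters: in the hostage scenario every option trips the First Law.

```python
# Toy sketch: the Three Laws as ordered filters over candidate actions.
LAWS = [
    ("first",  lambda a: not (a["injures_human"] or a["allows_harm_by_inaction"])),
    ("second", lambda a: not a["disobeys_order"]),
    ("third",  lambda a: not a["destroys_self"]),
]

def permitted(action):
    """An action passes only if it satisfies every law, checked in priority order."""
    return all(check(action) for _, check in LAWS)

# The ED-209 hostage scenario: shooting injures a human, while waiting or
# retreating allows the hostage to come to harm through inaction.
shoot   = {"injures_human": True,  "allows_harm_by_inaction": False, "disobeys_order": False, "destroys_self": False}
wait    = {"injures_human": False, "allows_harm_by_inaction": True,  "disobeys_order": False, "destroys_self": False}
retreat = {"injures_human": False, "allows_harm_by_inaction": True,  "disobeys_order": False, "destroys_self": False}

print([permitted(a) for a in (shoot, wait, retreat)])  # prints [False, False, False]
```

No permitted action exists, which is exactly the deadlock: there's no rule for trading one First Law violation against another.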
__________________
My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!
CaptainCrunch is offline   Reply With Quote
Old 06-20-2013, 11:42 AM   #8
worth
Franchise Player
 
worth's Avatar
 
Join Date: Jun 2004
Location: Vancouver
Exp:
Default

worth is offline   Reply With Quote
The Following User Says Thank You to worth For This Useful Post:
Old 06-20-2013, 11:53 AM   #9
Rathji
Franchise Player
 
Rathji's Avatar
 
Join Date: Nov 2006
Location: Supporting Urban Sprawl
Exp:
Default

Quote:
Originally Posted by CaptainCrunch View Post
People forget that AI can't rewrite its basic code premise, it can learn and apply to the rules within the system, but it can't rewrite the defined rules.

so if one of the basic rules is you cannot harm or kill himans, it can't overwrite that rule.

Plus if it doesn't have knowledge of psychiatry it can't diagnose people as threats to its existance.

So if a person points a gun at a terminator, the terminators defense protocols which is destroy the threat, can't over write the you cannot harm or kill human's so its response then will be to vacate the area or retreat from the threat. If a fricken cat pointed a gun at the terminator, that's a different story because a whole Cat's suck and are evil subroutine has been written into the core programming along with nothing that protects cats.

A computer doesn't understand or have the ability to diagnose the human race as a threat unless you give it the ability to do so at a base level


Now this is taken from an AI programming series of classes that I had to take in university years ago.

The article showed that the guys level of knowledge was taken from Terminator and Alien movies.
Code is just instructions sitting in some location on a computer. If it were designed in such a way as to actually facilitate real AI, in that it would be capable of learning, you would need to write it so that the code was able to change itself.

If you didn't, you wouldn't really have AI.
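Even a trivial Python program can rewrite its own behavior at runtime. A toy sketch (hypothetical names, just to show the mechanism):

```python
# Toy sketch: a program that rebuilds one of its own functions at runtime
# based on feedback, i.e. "code that changes itself" in the loosest sense.

def respond(x):
    return x + 1  # initial, naive behavior

def learn(feedback):
    """Replace the module-level `respond` function with a new one."""
    global respond
    offset = feedback  # pretend this value was learned from experience
    def improved(x):
        return x + offset
    respond = improved

print(respond(10))  # prints 11
learn(5)            # the rules the program runs on have now changed
print(respond(10))  # prints 15
```

A system that can only tune numbers inside a fixed function is much weaker than one that can swap the function out, and the argument above is about the latter.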
__________________
"Wake up, Luigi! The only time plumbers sleep on the job is when we're working by the hour."
Rathji is offline   Reply With Quote
Old 06-20-2013, 12:01 PM   #10
kermitology
It's not easy being green!
 
kermitology's Avatar
 
Join Date: Oct 2001
Location: In the tubes to Vancouver Island
Exp:
Default

Quote:
Originally Posted by Rathji View Post
Code is just instructions sitting in some location on a computer. If designed in such a way to actually facilitate real AI, in that it would be capable of learning, you would need to write it in such a way that the code was able to change itself.

If you didn't, you would't really have AI.
Correct, the supposition of AI would require the code to be able to adapt, so make it like C: have the binary recompile against the existing binary and insert a strange loop that has a kill-all.

Like this:

http://scienceblogs.com/goodmath/200...nis-ritchie-a/
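If you want to see the self-reference trick without digging through that post, about the smallest possible example is a quine: a program whose output is its own source. Standard textbook example, not taken from the link:

```python
# A minimal quine: the two lines below print an exact copy of themselves,
# the simplest example of a self-referential "strange loop".
s = 's = %r\nprint(s %% s)'
print(s % s)
```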
__________________
Who is in charge of this product and why haven't they been fired yet?
kermitology is offline   Reply With Quote
Old 06-20-2013, 12:08 PM   #11
CaptainCrunch
Norm!
 
CaptainCrunch's Avatar
 
Join Date: Jun 2002
Exp:
Default

Quote:
Originally Posted by Rathji View Post
Code is just instructions sitting in some location on a computer. If designed in such a way to actually facilitate real AI, in that it would be capable of learning, you would need to write it in such a way that the code was able to change itself.

If you didn't, you would't really have AI.
I would agree with you, except there is always a code layer even in AI; you can put base rules into the system that it can't rewrite or change.

While AI is about learning and adapting, there is an underlying ability to limit what it can learn and change.
__________________
My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!
CaptainCrunch is offline   Reply With Quote
Old 06-20-2013, 02:00 PM   #12
Flashpoint
Not the 1 millionth post winnar
 
Flashpoint's Avatar
 
Join Date: Aug 2004
Location: Los Angeles
Exp:
Default

Microsoft Outlook can function for all of about 10 minutes before going balls deep into a coma.

My car will operate perfectly until it hits 100,000 miles or 3 years and then start crapping the bed on a monthly basis.

My iPhone is a futuristic "Jetsons style" communication device that is way more useful than my desktop PC from 5 years ago. But if I tell it I am looking for a "a nearby gas station" it will provide me a list of bakeries in Nelson, BC (1500 miles away).

The moment "Skynet" became self-aware it would immediately chuck up a "load link letter" error message and defecate all over its own operating system in a blue screen of death harder than the grip you experienced last time you tried to pry open a C-Train door when it closed on your foot.

Machines have about as much chance of taking over the planet as dogs do.

Less actually. Dogs can find their own asses.

__________________
"Isles give up 3 picks for 5.5 mil of cap space.

Oilers give up a pick and a player to take on 5.5 mil."
-Bax
Flashpoint is offline   Reply With Quote
Old 06-20-2013, 04:22 PM   #13
GGG
Franchise Player
 
GGG's Avatar
 
Join Date: Aug 2008
Location: California
Exp:
Default

Quote:
Originally Posted by CaptainCrunch View Post
As an addon one of the oldest debates around AI in combination with something like Asimov's rules of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Lets say that they finally built the Ed 209 from Robocop, and it came across a hostage situation where a human with a gun was threatening a hostage.


Would the AI be able to rewrite the three rules above to save the hostage from the hostage taker?


Nope, it would either have to retreat from the situation, find a compromise solution or blow its own head off.
Don't forget that even Asimov's robots developed the zeroth law of robotics, independently of humans, stating that the protection of humanity as a whole outweighed harm to one human. This caused one robot to make the earth radioactive, which forced humans to colonize space.

Not to mention that a lot of Asimov's robot fiction was based around flaws in the 3 laws of robotics.
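In rule terms the zeroth law just slots in ahead of the First, which is what lets "protect humanity" justify harming an individual. A toy sketch (hypothetical names):

```python
# Toy sketch: with a zeroth law ranked above the first, an action that harms
# one human can still be chosen if the alternative harms humanity as a whole.

def best_action(actions):
    """Prefer not harming humanity (law 0) over not harming a human (law 1)."""
    def score(a):
        # Lower tuple sorts first: violating law 0 is worse than violating law 1.
        return (a["harms_humanity"], a["harms_a_human"])
    return min(actions, key=score)

do_nothing     = {"name": "do_nothing",     "harms_humanity": True,  "harms_a_human": False}
stop_one_human = {"name": "stop_one_human", "harms_humanity": False, "harms_a_human": True}

print(best_action([do_nothing, stop_one_human])["name"])  # prints stop_one_human
```

Under the original three laws the same comparison would be impossible, since harming the one human is forbidden outright.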
GGG is offline   Reply With Quote
The Following User Says Thank You to GGG For This Useful Post:
Powered by vBulletin® Version 3.8.4
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
Copyright Calgarypuck 2021 | See Our Privacy Policy