According to their blog, Google has already developed some quantum computing algorithms. Because the D-Wave is so good at specific problems, they think some classical/quantum combination may prove ideal. Perhaps future quantum chips will provide the kind of power boost that specialized graphics processing units (GPUs) have recently provided supercomputers. Or maybe the “neocortex” of future AIs will be a quantum chip, while the rest remains classically driven.
There’s still much work to be done before these machines find practical applications, but Google thinks “quantum machine learning may provide the most creative problem-solving process under the known laws of physics.”
Pretty awesome stuff, right?
Well, I guess all the crazy crystal-healing weirdos of the internet are absolutely going balls-crazy over it.
Beware of genius scientists who lack wisdom for humanity
Ladizinsky is, by any measure, a person of extremely high intelligence. Click here to see a fascinating interview with him. But like many such people throughout history, Ladizinsky lacks the foresight to recognize the full implications of the technology he's building. And those implications are so far-reaching and dangerous that they may actually lead to the destruction of humanity (see below).
Someone watched The Terminator a few too many times, I think. Besides, this AI he's worried about can't really do a worse job running things than we have.
People forget that AI can't rewrite its basic code premise: it can learn and apply the rules within the system, but it can't rewrite the defined rules.
So if one of the basic rules is that it cannot harm or kill humans, it can't overwrite that rule.
Plus, if it doesn't have knowledge of psychiatry, it can't diagnose people as threats to its existence.
So if a person points a gun at a terminator, the terminator's defense protocol, which is "destroy the threat," can't overwrite the "you cannot harm or kill humans" rule, so its response will be to vacate the area or retreat from the threat. If a fricken cat pointed a gun at the terminator, that's a different story, because a whole "cats suck and are evil" subroutine has been written into the core programming, along with nothing that protects cats.
A computer doesn't understand, or have the ability to diagnose, the human race as a threat unless you give it that ability at a base level (see the sketch at the end of this post).
Now, this is taken from an AI programming series of classes that I had to take in university years ago.
The article showed that the guy's level of knowledge was taken from Terminator and Alien movies.
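In code terms, the rule-layer claim above looks something like this minimal Python sketch: the base rule is ordinary fixed code, and the learned layer can only propose actions that the fixed code filters. Every name here (violates_base_rules, propose_action, and so on) is made up for illustration; no real system is implied.
[code]
# Minimal sketch of the "fixed base rules" idea. Illustration only.

def violates_base_rules(action, target):
    """Base layer: ordinary fixed code the learning component never touches."""
    return target == "human" and action in ("harm", "kill", "destroy")

def propose_action(threat):
    """Stand-in for the learned layer: free to suggest anything."""
    return ("destroy", threat)  # the learned "defense protocol"

def act(threat):
    action, target = propose_action(threat)
    if violates_base_rules(action, target):
        return ("retreat", target)  # fall back, exactly as described above
    return (action, target)

print(act("human"))  # ('retreat', 'human')
print(act("cat"))    # ('destroy', 'cat') -- nothing in the base rules protects cats
[/code]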
__________________
My name is Ozymandias, King of Kings;
Quote: "so if one of the basic rules is you cannot harm or kill himans" [sic]
The correct spelling is hymen.
Code is just instructions sitting in some location on a computer. If you designed it in such a way as to actually facilitate real AI, in the sense that it would be capable of learning, you would need to write it so that the code was able to change itself.
If you didn't, you wouldn't really have AI.
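As a toy illustration of what "code that changes itself" can mean, here is a Python sketch in which the program compiles new source text at runtime and swaps it in for one of its own functions. It is purely conceptual; the function names are invented for the example.
[code]
# Toy self-modifying program: it rebuilds one of its own functions
# from new source text at runtime. Conceptual sketch only.

def respond(x):
    return "default response"

def learn(new_source):
    """Compile proposed source text and swap it in for respond()."""
    namespace = {}
    exec(new_source, namespace)            # compile the "learned" code
    globals()["respond"] = namespace["respond"]

print(respond("hello"))                    # default response
learn("def respond(x):\n    return x.upper()")
print(respond("hello"))                    # HELLO -- behavior was rewritten
[/code]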
__________________
"Wake up, Luigi! The only time plumbers sleep on the job is when we're working by the hour."
Correct, the supposition of AI would require the code to be able to adapt. So make it like C: have the new binary compile alongside the existing binary, and insert a strange loop that has a kill-all.
I would agree with you, except there is always a code layer, even in AI; you can put base rules into the system that it can't rewrite or change.
While AI is about learning and adapting, there is an underlying ability to limit what it can learn and change.
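Putting the two positions together: even if the system generates new code for itself, a fixed gatekeeper layer can inspect each proposed rewrite before installing it. The Python sketch below rejects any new code that calls a forbidden primitive. The check is deliberately crude, and every name (harm_human, install, and so on) is hypothetical; this illustrates the idea, it is not a real safety mechanism.
[code]
# Fixed gatekeeper over self-modification: proposed rewrites are parsed
# and rejected if they call a forbidden primitive. Illustration only.
import ast

FORBIDDEN_CALLS = {"harm_human", "kill_human"}  # hypothetical primitives

def is_acceptable(source):
    """The unchangeable layer: scan proposed code for forbidden calls."""
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in FORBIDDEN_CALLS):
            return False
    return True

def install(source, namespace):
    """Install a self-modification only if the fixed layer approves it."""
    if not is_acceptable(source):
        raise PermissionError("rewrite rejected by base layer")
    exec(source, namespace)

ns = {}
install("def plan():\n    return 'retreat'", ns)        # accepted
print(ns["plan"]())                                      # retreat
try:
    install("def plan():\n    return kill_human()", ns)  # rejected
except PermissionError as e:
    print(e)                                             # rewrite rejected by base layer
[/code]
Of course, recognizing forbidden behavior in arbitrary code is far harder than matching names, which is roughly why this debate never ends.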
__________________
My name is Ozymandias, King of Kings;
Microsoft Outlook can function for all of about 10 minutes before going balls deep into a coma.
My car will operate perfectly until it hits 100,000 miles or 3 years, and then start crapping the bed on a monthly basis.
My iPhone is a futuristic "Jetsons style" communication device that is way more useful than my desktop PC from 5 years ago. But if I tell it I am looking for "a nearby gas station," it will provide me a list of bakeries in Nelson, BC (1,500 miles away).
The moment "Skynet" became self-aware, it would immediately chuck up a "load link letter" error message and defecate all over its own operating system in a blue screen of death harder than the grip you experienced the last time you tried to pry open a C-Train door when it closed on your foot.
Machines have about as much chance of taking over the planet as dogs do.
Less actually. Dogs can find their own asses.
__________________
"Isles give up 3 picks for 5.5 mil of cap space.
Oilers give up a pick and a player to take on 5.5 mil."
-Bax
As an add-on, one of the oldest debates around AI involves something like Asimov's laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Let's say they finally built the ED-209 from RoboCop, and it came across a hostage situation where a human with a gun was threatening a hostage.
Would the AI be able to rewrite the three rules above to save the hostage from the hostage-taker?
Nope. It would either have to retreat from the situation, find a compromise solution, or blow its own head off.
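You can see why by treating the laws as an ordered filter over candidate actions, as in this Python sketch. The candidate list and flags are invented for the example, and it ignores the "through inaction" clause of the First Law, which is exactly the part that makes the hostage scenario unresolvable.
[code]
# The Three Laws as an ordered filter over candidate actions.
# Candidates and flags are invented for the example.

CANDIDATES = [
    {"name": "shoot hostage-taker", "harms_human": True,  "protects_self": True},
    {"name": "negotiate",           "harms_human": False, "protects_self": False},
    {"name": "retreat",             "harms_human": False, "protects_self": True},
]

def first_law_ok(action):
    return not action["harms_human"]  # may not injure a human being

def choose(candidates):
    legal = [a for a in candidates if first_law_ok(a)]
    if not legal:
        return None  # no lawful option: stand down entirely
    # The Third Law only breaks ties among actions the First Law allows.
    return max(legal, key=lambda a: a["protects_self"])

print(choose(CANDIDATES)["name"])  # retreat
[/code]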
Don't forget that even Asimov's robots developed the Zeroth Law of Robotics, independently of humans, which states that protecting humanity as a whole outweighs harm to one human. This led one robot to make the Earth radioactive, forcing humans to colonize space.
Not to mention that a lot of Asimov's robot fiction was based around flaws in the Three Laws of Robotics.