Interesting article. It presents a decent, if somewhat extreme, view of both sides of the AI debate.

Should We Be Afraid of AI?

Quote:
Believers in true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a better term, I shall refer to the disbelievers as members of the Church of AItheists. Let’s have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.
I think his view of Singularitarians is a bit too strawman-ish for my liking:
Quote:
Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.

Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble (not merely ‘could’, as stated above by Hawking). Correct. Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.
And while he rightly constrains AIs by the same laws/conditions that constrain Turing Machines, his claim that AI cannot achieve true intelligence rests on the assumption that the human brain is definitely not an extremely efficient Turing Machine. I think that is still an open question.
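As an aside, since the argument turns on what a Turing Machine is: it's nothing more than a state table plus a tape. Here's a minimal Python sketch (the unary-increment transition table is my own toy example, not anything from the article):

Code:
# A minimal Turing Machine simulator: a state table plus a tape.
# The transition table maps (state, symbol) -> (new_state, write, move).
def run_tm(tape, transitions, state="start", accept="halt", blank="_"):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != accept:
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Toy example: increment a unary number (a run of 1s) by one.
unary_increment = {
    ("start", "1"): ("start", "1", "R"),   # scan right past the existing 1s
    ("start", "_"): ("halt",  "1", "R"),   # write one more 1, then halt
}

print(run_tm("111", unary_increment))      # -> 1111

Everything from Deep Blue to AlphaGo reduces, in principle, to something like this; the open question is whether our brains do too.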

Quote:
Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine.
Yes, AlphaGo formulated patterns based on millions of Go games. But perhaps we do the same thing, just more efficiently: we don't need millions of Go games because we can pull strategies and patterns derived from multiple other sources and apply them to problems in other domains. Is AI fundamentally unable to do this, or have we just not figured out how to implement it well?
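As a toy illustration of what "one strategy, many domains" looks like (my own sketch, nothing to do with AlphaGo's actual architecture): a single domain-general minimax search that will play any game exposing the same two functions, with zero training games. The single-pile Nim rules below are just a stand-in domain; tic-tac-toe would plug into the same function unchanged.

Code:
# One domain-general strategy (plain minimax) applied to an arbitrary game.
# 'moves' and 'result' define the domain; the strategy itself never changes.
def minimax(state, moves, result, my_turn=True):
    """Return (value, best_move) from my perspective: +1 win, -1 loss."""
    options = moves(state)
    if not options:                        # the player who cannot move loses
        return (-1 if my_turn else 1), None
    best = None
    for m in options:
        value, _ = minimax(result(state, m), moves, result, not my_turn)
        if best is None or (my_turn and value > best[0]) \
                        or (not my_turn and value < best[0]):
            best = (value, m)              # maximize my turns, minimize theirs
    return best

# Stand-in domain: single-pile Nim. Take 1-3 stones; if the pile is empty
# on your turn, you lose.
nim_moves  = lambda pile: [n for n in (1, 2, 3) if n <= pile]
nim_result = lambda pile, take: pile - take

print(minimax(5, nim_moves, nim_result))   # -> (1, 1): take 1, leave 4, win

The point isn't that minimax is how we think; it's that a strategy written (or learned) once can transfer across domains that share structure, which is exactly what a system trained per-domain on millions of games doesn't do.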