Quote:
Originally Posted by jammies
I was randomly surfing links from one page to another and I stumbled upon this site - http://www.longbets.org/bets - where people are encouraged to make long-term predictions about the future, and then back those predictions with money. Of all the predictions on the site, the very first one was the most intriguing to me: Mitch Kapor bet Ray Kurzweil that by 2029, no machine will have passed the famous Turing Test and proved itself to be conscious.
For those who don't know, Kapor was one of the founders of Lotus (the software maker, not the car manufacturer), and an important figure in digital civil liberties as a co-founder of the EFF (Electronic Frontier Foundation). Kurzweil is a polymath who has worked in the fields of OCR, speech recognition, musical synthesizers, and artificial intelligence, as well as authoring several futurist books. Both men are therefore well versed in the possibilities of computer technology, which makes their disagreement on the AI question all the more interesting.
What do you think? Will we one day be matched and then inevitably surpassed by our robotic overlords? Or is there something special about human intelligence that can never be copied by a computer?
I've had conversations with people who think that computers (particularly chatbots) can already pass the Turing test, which suggests to me a poor understanding of what the Turing test is. Yes, there are chatbots that can pass for human for brief stretches in a chatroom (especially if the chatbot is saying sexy things and all the humans in the room are horny teenage boys), but none can survive any sort of interrogation, which is what the Turing test really is. Simple logic or self-reference questions like "Can you repeat the third word you just said?" stump just about any chatbot, which is why existing contests like the Loebner Prize force the interrogator to stick to questions about a single topic.
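To see why interrogation defeats this kind of bot, here is a minimal sketch of a hypothetical keyword-matching chatbot in the style of ELIZA-era programs (the rules and responses are invented for illustration). It can fake small talk, but it keeps no record of its own output, so a self-referential question falls straight through to a canned deflection.

```python
# Hypothetical rules: each keyword maps to a canned response.
RULES = [
    ("how are you", "I'm doing great, thanks for asking!"),
    ("your name", "My name is ChatBot. What's yours?"),
    ("weather", "I love sunny days, don't you?"),
]

# Deflection used when nothing matches -- the classic chatbot tell.
FALLBACK = "That's interesting. Tell me more!"

def reply(message: str) -> str:
    """Return the canned response for the first matching keyword."""
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return FALLBACK

# Small talk looks plausible enough:
print(reply("Hi, how are you today?"))
# But the bot has no memory of "the third word it just said",
# so the interrogator's question just triggers the deflection:
print(reply("Can you repeat the third word you just said?"))
```

The bot never models the conversation, only the current message, which is exactly why a few turns of genuine interrogation expose it.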
That said, there's a huge difference between being able to process language and logic problems and having consciousness. More impressive (and important) would be a computer with a theory of mind: one that is aware of others, what they're likely thinking, and how to act in order to influence them. A computer that chooses to take the Turing test, understands the nature of the test, and then sets out to deceive the interrogator is a far more impressive feat than something that can simply mimic a human and parse language really well. Heck, a computer with crow-level intelligence (social awareness that supports a theory of mind, multi-step problem solving, and tool construction and use) would be pretty impressive.
It's like the chess problem: computers can play at grandmaster level, but only because their processing power has increased exponentially. They aren't truly thinking; they're brute-forcing their way through enormous numbers of positions at incredible speed. Programmers have been working on Go-playing computers for just as long, but have not produced machines that can play anywhere near master level, because Go's branching factor creates far too many possible positions to be tamed by raw CPU speed.
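The gap can be made concrete with back-of-the-envelope arithmetic, using the commonly cited average branching factors of roughly 35 legal moves per position in chess and roughly 250 in Go (round figures, assumed here for illustration). Even a few half-moves of full lookahead blow up far faster in Go:

```python
# Commonly cited rough averages; assumed round numbers for illustration.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def tree_size(branching: int, plies: int) -> int:
    """Leaf positions examined by exhaustive lookahead of `plies` half-moves."""
    return branching ** plies

for plies in (2, 4, 6):
    chess = tree_size(CHESS_BRANCHING, plies)
    go = tree_size(GO_BRANCHING, plies)
    print(f"{plies} plies: chess ~{chess:.1e} positions, "
          f"Go ~{go:.1e} positions, ratio ~{go / chess:.0f}x")
```

At six plies the Go tree is already tens of thousands of times larger than the chess tree, and the ratio itself grows exponentially with depth, which is why the raw-speed approach that cracked chess stalls on Go.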
I don't really think that the current direction of AI research is going to produce a conscious entity on its own; it needs to wait for our understanding of neuroscience to catch up.