The win puts an exclamation point on a significant moment for artificial intelligence. Over the last twenty-five years, machines have beaten the best humans at checkers, chess, Othello, even Jeopardy! But this is the first time a machine has topped the very best at Go—a 2,500-year-old game that’s exponentially more complex than chess and requires, at least among humans, an added degree of intuition.
AlphaGo relies on deep neural networks—networks of hardware and software that mimic the web of neurons in the human brain. With these neural nets, it can learn tasks by analyzing massive amounts of digital data. If you feed enough photos of cows into a neural net, it can learn to recognize a cow. And if you feed it enough Go moves from human players, it can learn the game of Go. But Hassabis and team have also used these techniques to teach AlphaGo how to manage time. And the machine certainly seemed to manage it better than the Korean grandmaster. Its clock still carried sixteen minutes.
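For anyone curious what "learning from labeled examples" means mechanically, here is a toy sketch (my own illustration, not DeepMind's code): a one-neuron perceptron that nudges its weights whenever it misclassifies an example, until the labels are reproduced. The "cow" features below are invented for the sketch.

```python
# Toy sketch of learning from labeled examples (my illustration, not
# DeepMind's code): a one-neuron perceptron nudges its weights whenever
# it misclassifies an example, until the labels are reproduced.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((f1, f2), label) pairs with label in {0, 1}."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (f1, f2), label in examples:
            pred = 1 if w1 * f1 + w2 * f2 + b > 0 else 0
            err = label - pred              # 0 if right, +/-1 if wrong
            w1 += lr * err * f1             # nudge weights toward the label
            w2 += lr * err * f2
            b += lr * err
    return w1, w2, b

# invented features: ("has four legs", "says moo"); label 1 means "cow"
data = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
w1, w2, b = train_perceptron(data)

def predict(f1, f2):
    return 1 if w1 * f1 + w2 * f2 + b > 0 else 0

print(predict(1, 1))  # -> 1: a four-legged moo-er is classified as a cow
```

Deep networks stack millions of such units and learn the features themselves rather than being handed them, but the correct-by-nudging training loop is the same basic idea.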
The Google machine repeatedly made rather unorthodox moves that the commentators couldn't quite understand. But that too is expected. After training on real human moves, AlphaGo continues its education by playing game after game after game against itself. It learns from a vast trove of moves that it generates on its own—not just from human moves. That means it sometimes makes moves no human would. This is what allows it to beat a top human like Lee Sedol. But over the course of an individual game, it can also leave humans scratching their heads.
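The self-play idea can be sketched in miniature (again my toy illustration, nothing like AlphaGo's actual networks): the same value table plays both sides of a Nim-like game and learns which pile sizes are winning purely from games it generates itself.

```python
# Toy self-play sketch (my illustration, nothing like AlphaGo's actual
# networks): the same value table plays both sides of a Nim-like game
# and learns which pile sizes are winning from its own games only.
import random

random.seed(0)
WIN = {}   # pile size -> learned estimate that the player to move wins

def value(pile):
    return WIN.get(pile, 0.5)        # unknown positions start as a coin flip

def best_move(pile, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:    # occasionally try a novel move
        return random.choice(moves)
    # a good move leaves the opponent the lowest-value pile
    return min(moves, key=lambda m: value(pile - m))

def self_play_game(start=21, lr=0.2):
    pile, history = start, []
    while pile > 0:                  # taking the last stone wins
        history.append(pile)
        pile -= best_move(pile)
    won = True                       # the last mover won; alternate backwards
    for p in reversed(history):
        WIN[p] = value(p) + lr * ((1.0 if won else 0.0) - value(p))
        won = not won

for _ in range(5000):
    self_play_game()

# piles divisible by 4 should tend to look bad for the player to move
print(round(value(4), 2), round(value(5), 2))
```

The occasional random move is what lets the table stumble onto positions no "teacher" ever showed it—a miniature version of how self-play can produce moves outside the human repertoire.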
Yeah. It does in fact equal AI. The AI you're thinking of just requires further development along the same lines we've been moving... we'll get there eventually.
__________________ "The great promise of the Internet was that more information would automatically yield better decisions. The great disappointment is that more information actually yields more possibilities to confirm what you already believed anyway." - Brian Eno
That said, Lee Sedol did take a game. This on its own is very significant.
Two, Alpha Go was specifically trained to beat Lee Sedol by watching, and playing millions of matches based on Sedol's own specific style. Some skeptical commentary on the Internets (sorry Alpha Go if you are reading this) has pointed out that the reams of specific data amassed for this particular project may have very little use against another top-ranking human player. It is also notable, apparently, that Alpha Go played games against itself, and not another AI.
Three, human players tend to improve and complement their skills when playing with or against AI. Fan Hui, the European champ who played against Alpha Go in its debut (and lost 5-0), has climbed in the world rankings from around 600th to 300th. The same thing happened with chess.
So, yes, it is pretty much specific AI at this point - even though it is very powerful.
The problem with these kinds of systems is that they have yet to deliver the massive world-changing effect that is constantly being promised as lurking just around the corner. Yes, smart software has made significant gains (robo-trading, manufacturing, legal briefs, etc.), but major AI initiatives such as Watson or Deep Blue haven't carried over in a big way to the real world.
Furthermore, as Robert Gordon has pointed out very convincingly, we haven't seen anything resembling a productivity bump from this new machine learning tech.
The winner, a contestant based in Israel called Chaim Linhart, combined several established machine-learning techniques with large databases of scientific information to correctly answer 59 percent of the questions. Like other participants, Linhart fed his computer system hundreds of thousands of questions paired with correct answers so that it could learn to come up with the right answer.
A score of almost 60 percent might disappoint most parents, but it is remarkable for a computer. The test used for the contest was, however, simplified slightly to make it practical for computers to attempt. Diagrams were removed, for example, and only questions with multiple-choice answers were used.
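To give a flavor of the knowledge-base ingredient, here is a crude sketch of my own (vastly simpler than Linhart's actual system): score each multiple-choice option by how many words the question plus that option shares with a small "knowledge" text, then pick the best-scoring option.

```python
# Crude illustration of the knowledge-base idea (my sketch, far simpler
# than the winning entry): score each option by word overlap between
# question + option and a tiny hand-written "knowledge" text.
import re

KNOWLEDGE = """
plants use sunlight water and carbon dioxide to make food by photosynthesis
the heart pumps blood through the body
"""

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def score(question, option):
    return len(words(question + " " + option) & words(KNOWLEDGE))

def answer(question, options):
    return max(options, key=lambda o: score(question, o))

q = "What process do plants use to make food?"
print(answer(q, ["photosynthesis", "digestion", "condensation"]))
# -> photosynthesis (it shares the most words with the knowledge text)
```

Real entries combine many such signals—retrieval over large corpora, learned rankers trained on those hundreds of thousands of question-answer pairs—but word overlap against a knowledge source is a recognizable ingredient.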
...
An impressive recent example is a Google program designed to play the subtle and computationally complex board game Go (see “Google’s AI Masters the Game of Go a Decade Earlier Than Expected”).
This progress has inspired hopes, and fears, that truly intelligent machines may not be so far away. But Etzioni believes new techniques will be needed to achieve even basic competency at more complex tasks, something that the latest results seem to confirm. Likewise, the best means of gauging progress in artificial intelligence, the Turing Test, has proved all too easy to rig using simple tricks.
Was following the matches and the commentary. Really fascinating. Lee played a brilliant move in his lone win that seemed to confuse AlphaGo and caused it to make several odd moves in response (one could deem them mistakes). The win helped dispel the general mood that AlphaGo was undefeatable, especially after its virtually perfect win in game 3.
Found this video snippet of how they trained the AI on old Atari video games. Very cool, though the statement "it ruthlessly exploits the weakness in a system that it's found" is a bit chilling.
Two, Alpha Go was specifically trained to beat Lee Sedol by watching, and playing millions of matches based on Sedol's own specific style.
This isn't true. The researchers stated that they started training AlphaGo on a number of amateur Go matches, then proceeded to just let AlphaGo play millions of games against itself. There was nothing specific about Lee, and the researchers even said that feeding AlphaGo games that Lee played wouldn't have made much difference since the number of games was so small.
I heard they next want to start training AlphaGo from scratch by playing games against itself without first seeding it with previous games played by humans. The idea being that its algorithms will not be biased by how humans play the game, and a whole new way of understanding/playing Go may result.
A great article on the actual gulf that separates machine learning from a human toddler. Transfer learning is the biggest part of general intelligence. The ability to intuitively map together information about the world. So while AI is very good at an increasing number of specific tasks, it can't adapt prior learning on a previous task to a new one in a way that rapidly increases competency. It must learn from scratch every single time.
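The "learn from scratch every single time" point can be shown in miniature (my toy sketch, not from the article): a perceptron warm-started from weights learned on a related task needs fewer corrections on a new task than one starting cold. That kind of reuse is what transfer learning is after.

```python
# Toy sketch of the transfer-learning gap (my illustration, not from the
# article): a perceptron warm-started from a related task needs fewer
# corrections on a new task than one that learns from scratch.

def train(examples, start=None, lr=0.5, epochs=50):
    """Returns (weights, number of corrections made during training)."""
    w = list(start) if start else [0.0, 0.0, 0.0]   # w1, w2, bias
    fixes = 0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            if pred != label:
                fixes += 1
                for i, xi in enumerate((x1, x2, 1)):
                    w[i] += lr * (label - pred) * xi
    return w, fixes

task_a = [((1, 1), 1), ((1, 0), 1), ((0, 1), 1), ((0, 0), 0)]  # x1 OR x2
task_b = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]  # x1 AND x2

w_a, _ = train(task_a)                        # learn the related task first
_, fixes_warm = train(task_b, start=w_a)      # reuse what was learned
_, fixes_cold = train(task_b)                 # learn from scratch
print(fixes_warm < fixes_cold)                # -> True: transfer helped here
```

Humans do this reuse constantly and across wildly different domains; current systems, as the article argues, mostly can't.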
This may or may not matter. As PZ Myers (an AI skeptic) has said before, the attempt to create a general AI intelligence that models closely after humans is an impossible dream. Human consciousness is too contingent on way too complex a set of variables, and is also set in physical circumstances that would never exist for an AI. Every emotional change you have is accompanied by your brain bathing itself in a cocktail of hormones.
Now machine learning may or may not be reaching a point where its decisions exceed human reasoning. Certainly, that appears to be the edge that Alpha Go had over Lee Sedol, and it is what led Deep Blue to triumph over Kasparov back in 1997.
Chess has not diminished since then, and neither will Go. Human-machine learning enhances the capacities of both. Human-machine tandems in chess are more effective than individual machines or humans.
This is a milestone, yes. But we just don't know whether it is merely technical or cohesively cultural, yet.
What I find so interesting about this is that it's the most impressive display of AI independent learning and creativity that I've ever heard of. As far as I understand it, the AI only learns by analyzing past games, but over a million iterations it would eventually encounter a situation that had not occurred in the database of games and have to make a unique decision. The program would then observe the results of that unique decision and, through more iterations of the game, could eventually hone that unique decision into a technique that surpasses a grandmaster.
I once heard that "Originality is a myth" and that all originality is just an attempt to recreate an accident or aberration.
e: It'd be really interesting to see if the human players are able to decipher the new techniques developed by the program and counter them.
As I said above, Fan Hui, Alpha Go's first opponent, has improved his game immensely since his 5-0 series loss to the Google program. So yes, humans excel at transfer learning, and high-end Go players can understand some of the machine's moves, and adapt. Sedol's win is the real story here, in my opinion.
My personal view is that a lot of this so-called AI development is just hubris on the part of the programmers, VCs, etc. If it proves to be all but impossible for a machine to develop a general intelligence like humans have, and all that occurs is the automation of a handful of routine-based tasks or token victories against human game players, then I have to ask, "what is the point?"
A huge part of these games is that they are duels between equals. A machine doesn't get tired, it doesn't have any tells or give anything away. It also doesn't know why it made a move. Kasparov was thrown by a Deep Blue glitch - a move so out of the ordinary that it rattled him for the rest of the series, and which, at the time, he attributed to intelligence. Now we know the program had reached a point where it had no optimal move available, so it fell back on a default random move.