Old 03-15-2016, 01:49 PM   #2
troutman
Unfrozen Caveman Lawyer
Join Date: Oct 2002
Location: Crowsnest Pass

http://www.wired.com/2016/03/googles...ius-lee-sedol/

The win puts an exclamation point on a significant moment for artificial intelligence. Over the last twenty-five years, machines have beaten the best humans at checkers, chess, Othello, even Jeopardy! But this is the first time a machine has topped the very best at Go—a 2,500-year-old game that’s exponentially more complex than chess and requires, at least among humans, an added degree of intuition.

AlphaGo relies on deep neural networks—networks of hardware and software that mimic the web of neurons in the human brain. With these neural nets, it can learn tasks by analyzing massive amounts of digital data. If you feed enough photos of a cow into a neural net, it can learn to recognize a cow. And if you feed it enough Go moves from human players, it can learn the game of Go. But Hassabis and his team have also used these techniques to teach AlphaGo how to manage time. And the machine certainly seemed to manage it better than the Korean grandmaster. Its clock still carried sixteen minutes.
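To make the "learn from human moves" idea concrete, here is a minimal toy sketch of supervised learning on (position, move) pairs. It is purely illustrative: the board encoding, the single linear layer, and the fabricated training data are my own stand-ins, not DeepMind's actual policy network, which is a deep convolutional net trained on millions of real positions.

[code]
# Hypothetical sketch: a tiny "policy" trained on (position, move) pairs,
# in the spirit of supervised learning from human games.
import numpy as np

BOARD = 9 * 9          # toy 9x9 board, flattened to 81 features
MOVES = 9 * 9          # one output per board point

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(BOARD, MOVES))   # single linear layer (toy stand-in)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train_step(position, move, lr=0.1):
    """One gradient-descent step on cross-entropy loss for a single example."""
    global W
    probs = softmax(position @ W)
    grad = np.outer(position, probs)   # d(loss)/dW for softmax + cross-entropy
    grad[:, move] -= position
    W -= lr * grad

# Fabricated training data: random positions paired with "expert" moves.
for _ in range(1000):
    pos = rng.integers(-1, 2, size=BOARD).astype(float)   # -1 white, 0 empty, +1 black
    mv = int(rng.integers(MOVES))
    train_step(pos, mv)

print("move probabilities sum to", softmax(pos @ W).sum())
[/code]

The point of the sketch is just the shape of the problem: a position goes in, a probability over legal moves comes out, and every human game nudges the weights toward the move the human actually played.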


The Google machine repeatedly made rather unorthodox moves that the commentators couldn't quite understand. But that, too, is to be expected. After training on real human moves, AlphaGo continues its education by playing game after game after game against itself. It learns from a vast trove of moves that it generates on its own—not just from human moves. That means it sometimes makes moves no human would. This is what allows it to beat a top human like Lee Sedol. But over the course of an individual game, it can also leave humans scratching their heads.
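For what the self-play part looks like in miniature, here is a hedged toy sketch: a program plays a made-up game against itself and reinforces the moves of the winning side. The "game", the policy table, and the update rule are all invented for illustration; they are not AlphaGo's reinforcement-learning setup, only the general shape of generating your own training data by playing yourself.

[code]
# Hypothetical self-play sketch: play games against yourself, then reinforce
# the moves made by the winning side. Everything here is a toy illustration.
import random

MOVES = list(range(5))                       # toy game: 5 possible moves per turn
policy = {m: 1.0 for m in MOVES}             # unnormalized preference per move

def choose(policy):
    """Sample a move in proportion to the current preferences."""
    total = sum(policy.values())
    r = random.uniform(0, total)
    for m, w in policy.items():
        r -= w
        if r <= 0:
            return m
    return MOVES[-1]

def play_game(policy):
    """Both 'players' use the same policy; the toy winner has the higher move sum."""
    moves_a = [choose(policy) for _ in range(3)]
    moves_b = [choose(policy) for _ in range(3)]
    return moves_a if sum(moves_a) >= sum(moves_b) else moves_b

for _ in range(10_000):                      # self-play loop: play, then learn
    for m in play_game(policy):
        policy[m] += 0.01                    # reinforce moves that led to a "win"

print(sorted(policy.items(), key=lambda kv: -kv[1])[:3])
[/code]

Because the training signal comes from the program's own games rather than from human examples, the preferences it ends up with don't have to look like anything a human would choose, which is exactly why the real system's moves can puzzle commentators mid-game.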