Others argue that, because AlphaGo does use a tree-search process, his prediction was correct. I would say the idea of the search tree was correct, but the methodology he proposes in the rest of the article consists of further optimizations on top of the techniques behind Deep Blue, not what AlphaGo employed. In fact, his proposal is largely that improved computer speed plus some clever Go-specific optimizations would be what cracks Go.
The article contrasts two extremes. At one extreme, the machine tries only a few moves but is intelligent about the choices:
Specifically, they wanted computers to examine only playing sequences that were meaningful according to some human reasoning process. In computer chess this policy, known as selective search, never really made progress. The reason is that humans are extremely good at recognizing patterns; it is one of the things that we do best.
At the other extreme, it uses raw brute force to test all positions:
The idea was to let computers do what they do best, namely, calculate. A simple legal-move generator finds all the permissible moves in a position, considers all the possible responses, and then repeats the cycle. Each cycle is called a ply, each generation of new possibilities is called a node—that is, a branching point in a rapidly widening tree of analysis. The branches terminate in “leaf,” or end positions.
The solution Deep Blue struck was a balance: a very dumb evaluation applied to an enormous number of searched positions:
Deep Blue typically looked 12 plies ahead in all variations (and 40 or more plies in selective lines), generating around 170 million leaf nodes per second. Next, the program would evaluate each of these positions by counting “material,” that is, the standard values of the chess pieces. For example, a pawn is worth one point, a knight or bishop three, and so on. Then it added points for a range of positional factors, chosen with the help of human grandmasters.
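To make that balance concrete, here is a minimal sketch of the recipe in Python: expand every legal move to a fixed depth, score each leaf with a crude material count plus a positional term, and back the scores up the tree. Everything here is illustrative rather than Deep Blue's actual design: `legal_moves`, `apply_move`, and the `position` attributes are hypothetical stand-ins supplied by a game implementation.

```
# Illustrative piece values: pawn 1, knight/bishop 3, rook 5, queen 9.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Crude leaf evaluation: material count plus positional factors.

    `position.white`/`position.black` are assumed to be lists of piece
    letters, and `position.positional_bonus` stands in for the
    grandmaster-tuned positional terms mentioned in the article.
    """
    material = (sum(PIECE_VALUES[p] for p in position.white)
                - sum(PIECE_VALUES[p] for p in position.black))
    return material + position.positional_bonus

def search(position, plies, legal_moves, apply_move, maximizing=True):
    """Fixed-depth full-width search: every legal move, every ply.

    `legal_moves(pos)` yields the permissible moves and
    `apply_move(pos, move)` returns the resulting position; each call
    creates one node in the rapidly widening tree, and positions at
    plies == 0 are the leaves.
    """
    if plies == 0:
        return evaluate(position)              # a "leaf", or end position
    best = float("-inf") if maximizing else float("inf")
    for move in legal_moves(position):         # the legal-move generator
        child = apply_move(position, move)
        score = search(child, plies - 1, legal_moves, apply_move,
                       not maximizing)
        best = max(best, score) if maximizing else min(best, score)
    return best
```

The logic really is this simple; Deep Blue's edge came from running it in custom hardware at roughly 170 million leaf evaluations per second.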
In the middle of the article, he talks a lot about applying Go-specific knowledge to the pruning process to make searching a large number of positions easier, by caching knowledge the program has discovered about how the game is going, e.g., "this group of stones is dead."
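As a hedged sketch of that caching idea, imagine memoizing a life-and-death verdict per group of stones, so the expensive Go-specific analysis runs once instead of at every node of the search. `analyze_life_and_death` here is a hypothetical stand-in for whatever reasoning decides a group is dead.

```
from functools import lru_cache

@lru_cache(maxsize=None)
def is_dead(group):
    """Cache the life-and-death verdict for a group of stones.

    `group` must be hashable (e.g. a frozenset of board coordinates) so
    the same group always hits the same cache entry. The hypothetical
    `analyze_life_and_death` is expensive, so caching means it runs
    once per group rather than once per search node.
    """
    return analyze_life_and_death(group)
```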
At the end of the article, he talks about computing power and assumes a good Go AI will use custom hardware and search far more positions than Deep Blue did:
My gut feeling is that with some optimization a machine that can search a trillion positions per second would be enough to play Go at the very highest level. It would then be cheaper to build the machine out of FPGAs (field-programmable gate arrays) instead of the much more expensive and highly unwieldy full-custom chips.
It's important to note that AlphaGo searched far fewer positions than Deep Blue did. He discounts the methodology of selective search because it relies on humans' ability to recognize patterns. However, this is effectively how AlphaGo worked: its policy network selects the promising moves, and that network alone plays well enough to beat other Go-playing programs without using tree search at all.
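In code, this "learned selective search" might look like the sketch below, where a trained policy network, rather than a hand-written human rule, decides which moves are worth expanding. `policy_network` is a hypothetical stand-in for AlphaGo's actual networks and Monte Carlo tree search machinery, which are of course far more elaborate.

```
def select_candidates(position, legal_moves, policy_network, top_k=5):
    """Expand only the moves a learned policy considers promising.

    `policy_network(position)` is assumed to return a dict mapping each
    legal move to a probability; the search then branches on just the
    `top_k` most likely moves instead of every legal one.
    """
    probs = policy_network(position)
    ranked = sorted(legal_moves, key=lambda move: probs[move], reverse=True)
    return ranked[:top_k]
```

Narrowing each node to a handful of promising moves is exactly what lets a searcher examine far fewer positions overall while still looking deep along the lines that matter.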
So, to answer your question, "is this indicative of how this field has shifted in thought since 2007?":

This whole article shows a transition in thought. He has the right idea: searching positions while having knowledge about which positions are good and bad is exactly what lets AlphaGo achieve professional-level play. However, he completely misses that neural networks would supply the human quality of pattern recognition, and that no Go-specific search optimizations would be needed.