It was in the news: Google's AlphaGo won against the European Champion Fan Hui in the game of Go... another frontier has fallen to computer domination.
The question whether such an approach with deep neural networks also works for chess was answered by Matthew Lai in his Master's thesis with his chess engine Giraffe, which reached the level of a FIDE International Master (about 2400 Elo), an astounding achievement considering only four months of work...
...so, when are we going to see AlphaChess Mr. Lai? :-)
Giraffe: Using Deep Reinforcement Learning to Play Chess by Matthew Lai, 2015
Mastering the Game of Go with Deep Neural Networks and Tree Search by Google DeepMind, 2016
Learning to Play the Game of Chess by Sebastian Thrun, 1995
NeuroChess by Sebastian Thrun on CPW
Porting a classic chess engine approach with a parallel AlphaBeta algorithm like YBWC to a GPU architecture would take a significant amount of time, if it is possible at all to port all well-known computer chess techniques in a straightforward manner. And it is questionable whether an Elo gain from more computed nodes per second would be eaten up again by a higher branching factor due to a simpler implementation.
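For readers unfamiliar with the algorithm behind that classic approach, here is a minimal sequential AlphaBeta sketch in negamax form over a hand-built toy tree — purely illustrative, not the YBWC parallelization and not any engine's actual code:

```python
# Minimal alpha-beta (negamax) sketch over an explicit toy game tree.
# A node is either a number (leaf score, from the side to move's view)
# or a list of child nodes. Illustrative only.

def alphabeta(node, alpha, beta):
    """Return the negamax value of `node` within the window [alpha, beta]."""
    if isinstance(node, (int, float)):  # leaf: return its static score
        return node
    best = float("-inf")
    for child in node:
        # score from our view = negated value from the opponent's view
        score = -alphabeta(child, -beta, -alpha)
        if score > best:
            best = score
        if best > alpha:
            alpha = best
        if alpha >= beta:  # beta cutoff: remaining siblings cannot matter
            break
    return best

# Two-ply toy tree; root value is 3 (first subtree).
tree = [[3, 5], [2, 8], [4, 1]]
print(alphabeta(tree, float("-inf"), float("inf")))  # → 3
```

The cutoff in the inner loop is exactly what makes AlphaBeta hard to parallelize: whether a sibling needs to be searched at all depends on results computed before it, which is why schemes like YBWC exist in the first place.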
Zeta 098 and 097 make use of a Randomized Best-First MiniMax Search, but my implementation makes excessive use of global memory and scales poorly.
At the very beginning of the project it was clear that a Monte Carlo Tree Search would fit GPUs best. But until now there is no known engine that has made MCTS work well for chess.
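To make the idea concrete, here is a minimal UCT-style MCTS sketch on a toy Nim game (take 1 or 2 stones per turn, whoever takes the last stone wins) — the game, the exploration constant, and the node layout are my own illustration, not chess and not engine code:

```python
import math
import random

# Toy game: a Nim pile; each turn take 1 or 2 stones, taking the last stone wins.
# Pile sizes that are multiples of 3 are lost for the side to move.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones                      # stones left, someone to move
        self.parent, self.move = parent, move
        self.children = []
        self.untried = [m for m in (1, 2) if m <= stones]
        self.visits = 0
        self.wins = 0.0                           # wins for the player who moved INTO this node

    def select(self, c=1.4):                      # UCT child selection
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(stones, iters=3000, seed=1):
    random.seed(seed)
    root = Node(stones)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:             # 1. selection
            node = node.select()
        if node.untried:                                      # 2. expansion
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.stones - m, node, m)
            node.children.append(child)
            node = child
        n, turn, winner = node.stones, 0, -1                  # 3. random playout
        while n > 0:                                          # turn 0 = side to move at `node`
            n -= random.choice([m for m in (1, 2) if m <= n])
            if n == 0:
                winner = turn
            turn ^= 1
        # if the node itself is terminal, the player who moved into it took the last stone
        result = 1 if (node.stones == 0 or winner != 0) else 0
        while node is not None:                               # 4. backpropagation
            node.visits += 1
            node.wins += result
            result = 1 - result                               # flip perspective per ply
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move  # most-visited move

print(mcts(4))  # from 4 stones the winning move is to take 1, leaving 3
```

The four phases — selection, expansion, playout, backpropagation — are largely independent per iteration, which is why MCTS looks so attractive for GPU parallelism; the trouble in chess is that random playouts are a poor evaluator in such a tactical game.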
What is left, except to try to port a classic approach?
I could improve the performance of the best-first search significantly by switching from global memory to local memory, and I could remove the randomness... another alternative would be to switch to MCAB, Monte Carlo AlphaBeta...
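For illustration, here is a plain (non-randomized) best-first minimax sketch on an explicit toy tree, run to completion: repeatedly follow the principal variation to an unexpanded node, expand it, and back up values. Zeta's actual randomized, GPU-resident implementation differs; the tree encoding, the placeholder static eval, and the skip-solved-subtrees termination rule are my own simplifications:

```python
# Best-first minimax sketch: always expand the leaf at the end of the
# current principal variation. Trees are nested lists; numbers are leaves.

def minimax(spec, maximizing=True):
    """Reference full minimax, for comparison."""
    if isinstance(spec, (int, float)):
        return spec
    vals = [minimax(c, not maximizing) for c in spec]
    return max(vals) if maximizing else min(vals)

class BNode:
    def __init__(self, spec, maximizing, parent=None):
        self.spec, self.maximizing, self.parent = spec, maximizing, parent
        self.children = None                         # None = not yet expanded
        self.solved = isinstance(spec, (int, float)) # leaves are exact
        self.value = spec if self.solved else 0.0    # 0.0 = placeholder static eval

def best_first_minimax(spec):
    root = BNode(spec, True)
    expansions = 0
    while not root.solved:
        node = root
        # 1. descend the principal variation to an unexpanded node,
        #    skipping already-solved subtrees so the loop terminates
        while node.children is not None:
            live = [c for c in node.children if not c.solved]
            node = (max if node.maximizing else min)(live, key=lambda c: c.value)
        # 2. expand it
        node.children = [BNode(s, not node.maximizing, node) for s in node.spec]
        expansions += 1
        # 3. back up minimax values and solved status toward the root
        up = node
        while up is not None:
            vals = [c.value for c in up.children]
            up.value = max(vals) if up.maximizing else min(vals)
            up.solved = all(c.solved for c in up.children)
            up = up.parent
    return root.value, expansions

print(best_first_minimax([[6, 2], [4, 9], [1, 7]]))  # value 4 after 4 expansions
```

The appeal for a parallel engine is that many workers can descend and expand different near-best leaves at once — that is exactly where Zeta's randomness comes in, and also where the global-memory traffic for the shared tree hurts.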