Zeta Chess

Zeta - Source Code and Binaries online

I fixed some issues in Zeta and Zeta Dva; source code and binaries are online again:

https://github.com/smatovic/ZetaDva/releases

https://github.com/smatovic/Zeta/releases

Please read the whole README file before running the Zeta engine on the GPU:

https://github.com/smatovic/Zeta/blob/master/README

I lost the source of Zeta Vintage, and an attempt to do a rewrite in C showed again that the 6502 processor should really be programmed in assembly, so a rewrite in 6502 assembly is still on my bucket list...

https://github.com/smatovic/ZetaVintage

Zeta v099

I finished my current run on Zeta v099, my experimental GPU chess engine.

https://github.com/smatovic/Zeta

The conclusion of the current iteration is that a simple engine, using standard chess programming techniques, can be ported to OpenCL to run on a GPU, but it would take considerably more effort to make the engine competitive in terms of computed nodes per second (speed), heuristics (expert knowledge), and scaling (parallel search algorithm).

Computer Chess, as a computer science topic, evolved over decades, starting in the 40s and 50s, and reached one peak in 1997 with the Deep Blue vs. Kasparov match. Nowadays chess engines are tuned by playing thousands and thousands of games, so getting a chess playing engine running on the GPU and getting a competitive chess playing engine running on the GPU are two different tasks.

Google's AlphaGo Deepmind and Chess Giraffe

It was in the news: Google's AlphaGo won against the European Champion Fan Hui in the game of Go... another frontier has fallen to computer domination.

The question of whether such an approach with deep neural networks also works for chess was answered by Matthew Lai in his master's thesis with his chess engine Giraffe, which reached the level of a FIDE International Master (about 2400 Elo), an astounding achievement considering only four months of work...

...so, when are we going to see AlphaChess Mr. Lai? :-)

Links:

Giraffe: Using Deep Reinforcement Learning to Play Chess by Matthew Lai, 2015

Mastering the Game of Go with Deep Neural Networks and Tree Search by Google Deepmind, 2016

Learning to Play the Game of Chess by Sebastian Thrun, 1995

NeuroChess by Sebastian Thrun on CPW

YBWC vs. RBFMS vs. MCTS vs. MCAB

To port a classic chess engine design with a parallel AlphaBeta algorithm like YBWC to a GPU architecture would take a significant amount of time, if it is even possible to port all well-known computer chess techniques in a straightforward manner. And it is questionable whether an Elo gain from more computed nodes per second would be eaten up again by a higher branching factor due to a simpler implementation; the back-of-the-envelope calculation below shows why.
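
A rough back-of-the-envelope calculation, with numbers chosen only for illustration: an AlphaBeta search visits about b^d nodes for effective branching factor b and depth d, so a speedup S in nodes per second buys roughly

  extra plies = log(S) / log(b)

  b = 2 (well-pruned search), S = 16  =>  +4 plies
  b = 4 (simpler pruning),    S = 16  =>  +2 plies

An implementation that doubles the effective branching factor therefore needs the square of the speedup just to reach the same depth.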

Zeta 098 and 097 make use of a Randomized Best-First MiniMax Search, but my implementation makes excessive use of Global Memory and scales poorly.

At the very beginning of the project it was clear that a Monte Carlo Tree Search would fit GPUs best. But until now there is no known engine that has made MCTS work well for chess.
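
For reference, the common UCT variant of MCTS (Kocsis and Szepesvári, 2006) descends the tree by selecting at each node the child i that maximizes

  w_i / n_i + c * sqrt( ln(N) / n_i )

where w_i is the accumulated result of child i, n_i its visit count, N the visit count of the parent, and c the exploration constant. A commonly given explanation why this works for Go but not for chess is the first term: averaged playout results are a weak signal in sharp tactical positions, where a single forced line decides the outcome.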

What is left, except to try to port a classic approach?

I could improve the performance of the best-first search significantly by switching from Global Memory to Local Memory, and I could remove the randomness... another alternative would be to switch to MCAB, Monte Carlo AlphaBeta...

Review of Papers on GPU Game Tree Search

  • It looks like Monte Carlo Tree Search gives the best speedups compared to a CPU implementation.
  • The Node Based Parallel Search is a hybrid approach that offloads computational tasks to the GPU.
  • MiniMax search can be parallelized on the GPU, but is inferior to AlphaBeta.
  • The speedup of parallel AlphaBeta implementations depends on the branching factor of the game.

So far I have found nothing about an implementation that makes use of the recursive features of newer architectures, such as device-side enqueue in OpenCL 2.0 or dynamic parallelism in CUDA.

Papers on GPU Game Tree Search

Link list of papers related to Game Tree Search on GPUs for two-player zero-sum games...

Parallel Game Tree Search on SIMD Machines
Holger Hopp and Peter Sanders (1995), citeseerx pdf
Note - Implementation of YBWC, parallel AlphaBeta, on a 16K SIMD machine for a synthetic game tree.

Efficiency of Parallel Minimax Algorithm for Game Tree Search
Plamenka Borovska, Milena Lazarova (2007), citeseerx pdf
Note - Efficiency of AlphaBeta search for 4x4 TicTacToe via MPI and OpenMP on a CPU cluster.

GPU-Accelerated program to play Go
Zachary Clifford (2009), pdf
Note - Implementation of MCTS for Go.

Playing Zero Sum Games on the GPU
Avi Bleiweiss (2010), pdf
Note - Multiple Game Tree Searches on the GPU.

Parallel Minimax Tree Searching on GPU
Kamil Rocki and Reiji Suda (2010), pdf
Note - Implementation of MiniMax for Reversi.

Large-Scale Parallel State Space Search Utilizing Graphics Processing Units and Solid State Disks
Damian Sulewski (2011), link to pdf
Note - Chapter 4 - GPUSSD-BFS - A GPU and SSD supported Breadth-First Search.

Parallel Game Tree Search Using GPU
L’ubomír Lackovi (2011), pdf
Note - Implementation of parallel search for Czech Draughts.

Parallel Monte Carlo Tree Search on GPU
Kamil Rocki and Reiji Suda (2011), pdf
Note - Implementation of MCTS for Reversi.

Parallel alpha-beta algorithm on the GPU
Damjan Strnad and Nikola Guid (2011), scholar google pdf
Note - Implementation of PV-Split, parallel AlphaBeta, for Reversi.

A Node-based Parallel Game Tree Algorithm Using GPUs
Liang Li, Hong Liu, Peiyu Liu, Taoying Liu, Wei Li, Hao Wang (2012), IEEE
Note - Implementation of node-based parallelism for Connect6.

Parallel UCT Search on GPUs
Nicolas A. Barriga, Marius Stanescu, Michael Buro (2014), IEEE
Note - Implementation of MCTS with UCT for 8x8 Ataxx.

A Review on Parallelization of Node based Game Tree Search Algorithms on GPU
Ms. Rutuja U. Gosavi, Prof. Payal S. Kulkarni (2014), pdf

Parallelization of Node Based Game Tree Search Algorithm on GPU
Ms. Rutuja U. Gosavi, Mrs. Archana S. Vaidya (2015), pdf
Note - Implementation of node-based parallelism for Connect4/Connect6.

How Computer Chess Engines could run on GPUs

  1. One SIMD Unit - One Board
    To avoid thread divergence in a Warp, resp. Wavefront, the engine could couple, for example, 32 or 64 Work-Items of one Work-Group to work together on the same chess position, for instance to generate moves, sort a move list or do a board evaluation in parallel. A move generator of such a Work-Group could operate over pieces, directions, or simply 64 squares in parallel. But in any of these cases current GPU SIMD units will 'waste' some instructions compared to the more efficient, sequential, processing of a CPU. (A minimal kernel sketch follows this list.)
  2. Use of Local Memory* instead of Global Memory
    The more sequential threads are coupled into one Work-Group to work on one chess position in parallel, the more Local Memory* per Work-Group can be made available to store a move list, or a move list stack. By the use of faster Local Memory, fewer Warps, resp. Wavefronts, are needed to hide Global Memory latency.
  3. Hundreds of Work-Groups instead of Thousands of Threads
    YBWC is a parallel game tree search algorithm used in today's chess engines, but the more workers the algorithm runs, the less efficiently it performs. So, by coupling sequentially operating threads into one Work-Group to work on one chess position in parallel, we lower the total number of workers and increase the efficiency of the parallel search.

* Local Memory in OpenCL terms corresponds to Shared Memory in Nvidia CUDA terms.
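
For illustration of points 1 and 2, here a minimal OpenCL kernel sketch (hypothetical names and data layout, not Zeta's actual code): one Work-Group of 64 Work-Items handles one board, one Work-Item per square, and the move list lives in Local Memory.

#define MAX_MOVES 218  /* upper bound of moves in a chess position */

kernel void gen_moves_sketch(global const ulong *bitboards, /* one board per Work-Group */
                             global uint *counts)           /* moves found per board */
{
    local uint move_list[MAX_MOVES]; /* Local Memory, shared by the Work-Group */
    local uint move_count;

    const uint sq  = get_local_id(0);  /* 0..63, one Work-Item per square */
    const uint gid = get_group_id(0);  /* which board this Work-Group owns */

    if (sq == 0)
        move_count = 0;
    barrier(CLK_LOCAL_MEM_FENCE);

    /* each Work-Item computes the moves of the piece on its own square;
       the actual piece lookup and target generation is omitted here */
    ulong targets = 0; /* ...e.g. attacks from square sq, minus own pieces... */

    while (targets) {
        uint to = (uint)(63 - clz(targets));  /* index of highest set bit */
        targets ^= (ulong)1 << to;            /* clear it */
        uint slot = atomic_inc(&move_count);  /* Local Memory atomic */
        if (slot < MAX_MOVES)
            move_list[slot] = (sq << 6) | to; /* simple from-to encoding */
    }
    barrier(CLK_LOCAL_MEM_FENCE);

    /* one Work-Item writes the result back to Global Memory */
    if (sq == 0)
        counts[gid] = move_count;
}

The same coupling works for evaluation: each Work-Item scores its own square and the partial scores are reduced in Local Memory.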

Alternative Game Tree Search Algorithms

Here are some alternative algorithms to plain MiniMax AlphaBeta search...


Why Computer Chess Engines do not run on GPUs

  1. SIMT architecture of GPUs
    GPUs consist of tens to hundreds of SIMD or Vector Units that process multiple threads in multiple Warps or Wavefronts in SIMT fashion; a small illustration follows this list.
  2. Memory architecture of GPUs
    To hide the latency of Global Memory (VRAM), GPUs run multiple Warps or Wavefronts and prefer to do computation in Local or Private Memory. So, the more Work-Items and Work-Groups you run to hide latency, the less Local and Private Memory is available per thread.
  3. Thousands of threads on GPUs
    MiniMax search with AlphaBeta pruning performs best serially, not in parallel, because the deepest cutoffs depend on the results of moves searched before.
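
To illustrate point 1, here a tiny, hypothetical kernel: Work-Items of one Warp, resp. Wavefront, share a program counter, so when lanes take different sides of a data-dependent branch, in effect both paths are executed one after another, with the inactive lanes masked out.

kernel void eval_sketch(global const int *piece, global int *score)
{
    int i = get_global_id(0);
    /* if the piece types within one Warp/Wavefront differ,
       both branches below cost instruction slots */
    if (piece[i] == 2)   /* e.g. a knight */
        score[i] = 300;  /* path A */
    else
        score[i] = 100;  /* path B */
}

A sequential CPU thread pays only for the branch it takes; the SIMT unit pays for both as soon as one lane diverges.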


* edit on 2015-03-30 *

Zeta - Milestones

Here is an overview of what happened before...

Zeta (099h to 099k)

  • fixes and cleanups
  • switch from Lazy SMP to ABDADA parallel search
  • added IID - Internal Iterative Deepening
  • one .cl file for all GPU generations with inlined optimizations
  • Zeta 099k on AMD Radeon R9 Fury X, 256 workers, ~ 7.6 Mnps
  • Zeta 099k on Nvidia GeForce GTX 750, 16 workers, ~ 800 Knps
  • Zeta 099k on AMD Radeon HD 7750, 32 workers, ~ 700 Knps
  • Zeta 099k on Nvidia GeForce 8800 GT, 14 workers, ~ 110 Knps

Zeta (099b to 099g)

  • switch from Kogge-Stone based move generation to Dumb7Fill (both fills are sketched below)
  • added atomic features for different GPU generations
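
For illustration, here both fills for one direction (north) in plain C, with gen as the pieces to be filled and pro as the empty squares, in the standard form documented on the Chess Programming Wiki:

#include <stdint.h>

/* Kogge-Stone: the full ray in three unrolled steps */
uint64_t nort_occl_koggestone(uint64_t gen, uint64_t pro)
{
    gen |= pro & (gen <<  8);
    pro &=       (pro <<  8);
    gen |= pro & (gen << 16);
    pro &=       (pro << 16);
    gen |= pro & (gen << 32);
    return gen;
}

/* Dumb7Fill: seven simpler steps, two operations each */
uint64_t nort_occl_dumb7(uint64_t gen, uint64_t pro)
{
    for (int i = 0; i < 7; i++)
        gen |= pro & (gen << 8);
    return gen;
}

Both return the occluded fill (shift once more to get the attack set); which variant runs faster on a given GPU is a question of instruction mix and register pressure.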

Zeta (099a)

  • switch from best-first minimax search to parallel AlphaBeta (Lazy SMP)
  • ported all (except IID) search techniques from Zeta Dva v0305 to OpenCL
  • ported the evaluation function of Zeta Dva v0305 to OpenCL
  • vectorized and generalized 64 bit Kogge-Stone move generator
    64 threads are now coupled to one worker, performing move generation,
    move picking and evaluation, square-wise, in parallel on the same node
  • portability over performance, should run on the very first gpus with
    OpenCL 1.x support (>= 2008)

Zeta (098d to 098g)

  • mostly cleanup and fixes
  • restored the simple heuristics from the Zeta Dva engine (~2000 Elo on CCRL)
  • protocol fixes
  • fixed autoconfig for AMD gpus
  • switched to KoggeStone based move generator
  • switched to rotate left based Zobrist hashes
  • switched to move picker
  • switched to GPL >= 2
  • Zeta 098e on Nvidia GeForce GTX 580, ca. 6 Mnps, est. 1800 Elo on CCRL
  • Zeta 098e on AMD Radeon HD 7750, ca. 1 Mnps
  • Zeta 098e on AMD Phenom X4, ca. 1 Mnps
  • Zeta 098e on Nvidia GeForce 8800 GT, ca. 500 Knps


Zeta (098a to 098c)

  • Improved heuristics, partly ported from the Stockfish chess engine
  • AutoConfig for OpenCL devices
  • Parameter tuning
  • Zeta 098c on Nvidia GeForce GTX 480, ca. 5 Mnps, est. 2000 Elo on CCRL
  • Zeta 098c on AMD Radeon R9 290, ca. 3.2 Mnps

Zeta (097a to 097z)

  • Implementation of a BestFirstMiniMax search algorithm with UCT parameters for parallelization
  • Zeta 097x on Nvidia GeForce GTX 480, ca. 5 Mnps, est. 1800 Elo on CCRL
  • Zeta 097x on AMD Radeon HD 7750, ca. 800 Knps

Zeta (0930 to 0960)

  • Tested Monte Carlo Tree Search without UCT across multiple Compute Units of the GPU
  • Tested LIFO-Stack based load balancing for AlphaBeta search on one Compute Unit of the GPU
  • Tested the 'Nagging' and 'Spam' parallelization approach for AlphaBeta search on one Compute Unit of the GPU
  • Tested 'RBFMS', Randomized BestFirstMiniMax Search, a parallelized version of BestFirstMiniMax, across multiple Compute Units of the GPU

Zeta (0915 to 0918)

  • 64 bit Magic Bitboard Move Generator running
  • AlphaBeta search algorithm with 'SPPS'-parallelization running 128 threads on one Compute Unit of the GPU

Zeta (0900 to 0910)

  • Tested 32 bit 0x88 and 64 bit Magic Bitboard Move Generator
  • Ported Heuristics, the Evaluation Function, from the CPU engine 'ZetaDva' (~2000 Elo) to OpenCL


* updated on 2018-11-13 *

Zeta - Source Code

Zeta and Zeta Dva support only some basic Xboard protocol commands, and some users have reported problems with the configuration and interface of the last Zeta versions. So I will publish the source code again when these parts are designed in a more user-friendly way and tested with Windows chess GUIs like Winboard or Arena.
