Zeta Chess

Three Ways of GPGPU Chess...

The most common way of GPGPU programming is simply to run thousands to millions of GPU threads performing the same task: take input from global memory (VRAM), do some computation in local memory or registers, and copy the results back to VRAM.
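A minimal sketch of that pattern, written in CUDA for illustration; the Position struct and the material-count evaluation are toy placeholders of my own, not Zeta's actual data layout:

#include <cuda_runtime.h>

// Toy position: a couple of bitboards, nothing like a real engine's layout.
struct Position {
    unsigned long long whitePawns;
    unsigned long long blackPawns;
};

// One thread per position: load from VRAM, compute in registers, store back.
__global__ void evalKernel(const Position* positions, int* scores, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Position p = positions[i];              // read input from global memory
    int score = __popcll(p.whitePawns)      // toy material count in registers
              - __popcll(p.blackPawns);
    scores[i] = score;                      // write result back to VRAM
}

int main()
{
    const int n = 1 << 20;                  // a million positions
    Position* dPos;
    int* dScores;
    cudaMalloc(&dPos, n * sizeof(Position));
    cudaMalloc(&dScores, n * sizeof(int));
    cudaMemset(dPos, 0, n * sizeof(Position));

    evalKernel<<<(n + 255) / 256, 256>>>(dPos, dScores, n);
    cudaDeviceSynchronize();

    cudaFree(dPos);
    cudaFree(dScores);
    return 0;
}

Every thread is completely independent here, which is why this pattern maps so well onto the hardware; a game tree search, where nodes depend on each other, does not fit it directly.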

Current neural-network-based chess engines run millions of threads on the GPU in this manner.

With Zeta v099 I tried a different mode: to view the GPU as a bunch of SIMD units. The parallel cores of one SIMD unit are coupled into one worker that works on the same chess node in parallel during move generation, move picking, and position evaluation. Different SIMD units work on the chess game tree via a classic parallel AlphaBeta search like ABDADA or Lazy SMP. The advantage of such a design is that you have only hundreds of workers (the number of SIMD units) to feed the parallel search with, because the more workers you run in an AlphaBeta search, the less efficiently the parallel game tree search performs.
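A sketch of that mapping, under my own simplifying assumptions: one thread block stands in for one SIMD-unit worker, and its threads cooperate on a single node, here by scoring that node's moves in parallel and reducing to the best one. The move-list layout and scoreMove() are hypothetical stand-ins, not Zeta's move generator or evaluation:

#include <cuda_runtime.h>

#define MAX_MOVES  256
#define BLOCK_SIZE 64    // roughly one AMD wavefront

// Toy move scoring; a real engine would use SEE, history, evaluation terms...
__device__ int scoreMove(int move)
{
    return (int)(((unsigned)move * 2654435761u) >> 24);
}

// One block = one worker: its threads cooperate on the move list of one node
// and reduce to the best move in shared memory.
__global__ void pickBestMove(const int* moveLists, const int* moveCounts,
                             int* bestMoveOut)
{
    __shared__ int bestScore[BLOCK_SIZE];
    __shared__ int bestMove[BLOCK_SIZE];

    int node  = blockIdx.x;                 // one chess node per worker/block
    int tid   = threadIdx.x;
    int count = moveCounts[node];

    int myScore = -1000000, myMove = -1;
    // The threads of the block stride over the move list of the same node.
    for (int m = tid; m < count; m += blockDim.x) {
        int move  = moveLists[node * MAX_MOVES + m];
        int score = scoreMove(move);
        if (score > myScore) { myScore = score; myMove = move; }
    }
    bestScore[tid] = myScore;
    bestMove[tid]  = myMove;
    __syncthreads();

    // Standard shared-memory reduction to the block-wide best move.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s && bestScore[tid + s] > bestScore[tid]) {
            bestScore[tid] = bestScore[tid + s];
            bestMove[tid]  = bestMove[tid + s];
        }
        __syncthreads();
    }
    if (tid == 0) bestMoveOut[node] = bestMove[0];
}

int main()
{
    const int workers = 128;                // on the order of the SIMD unit count
    int *dMoves, *dCounts, *dBest;
    cudaMalloc(&dMoves,  workers * MAX_MOVES * sizeof(int));
    cudaMalloc(&dCounts, workers * sizeof(int));
    cudaMalloc(&dBest,   workers * sizeof(int));
    cudaMemset(dMoves,  0, workers * MAX_MOVES * sizeof(int));
    cudaMemset(dCounts, 0, workers * sizeof(int));

    pickBestMove<<<workers, BLOCK_SIZE>>>(dMoves, dCounts, dBest);
    cudaDeviceSynchronize();
    cudaFree(dMoves); cudaFree(dCounts); cudaFree(dBest);
    return 0;
}

Hundreds of such blocks, one per SIMD unit, are then the workers an ABDADA or Lazy SMP style search has to keep busy.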

With Zeta v097 and v098 I tried yet another mode. Current GPUs are based on a SIMT architecture: they are able to run multiple waves of Warps (Nvidia: 32 parallel SIMD threads) or Wavefronts (AMD: 64 parallel SIMD threads) on the same SIMD unit to hide memory latencies, so they can run a multitude of threads on a single core. In this mode I ran thousands of parallel GPU threads, each working on a single node, all working on the chess game tree in parallel via a parallel Best-First-MiniMax search. The disadvantage of such a design is the low nodes-per-second performance of a single GPU thread, and the massive number of parallel workers.
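A heavily simplified sketch of that idea: thousands of light GPU threads, each owning one node at a time, growing a shared game tree in global memory. Selection, expansion, and score backup are reduced to toy operations, and the Node layout, the random selection policy, and the evaluation are my own assumptions, not the v097/v098 implementation:

#include <cuda_runtime.h>

#define MAX_NODES (1 << 20)
#define BRANCHING 8

// Toy tree node; a real implementation would hold the move, bounds, flags...
struct Node {
    int parent;
    int score;       // toy score, updated via atomics
    int visits;
};

__device__ int gNodeCount;   // number of allocated nodes in the global pool

__global__ void bestFirstStep(Node* tree, int iterations)
{
    unsigned seed = blockIdx.x * blockDim.x + threadIdx.x + 1;

    for (int it = 0; it < iterations; ++it) {
        // 1. Selection: pick some node (toy random policy; a stale read of
        //    gNodeCount is tolerable for this purpose).
        int count = gNodeCount;
        if (count > MAX_NODES) count = MAX_NODES;   // clamp once pool is full
        seed = seed * 1664525u + 1013904223u;
        int current = seed % count;

        // 2. Expansion: append children of the selected node to the pool.
        int base = atomicAdd(&gNodeCount, BRANCHING);
        if (base + BRANCHING > MAX_NODES) return;   // pool exhausted
        for (int c = 0; c < BRANCHING; ++c) {
            tree[base + c].parent = current;
            tree[base + c].score  = (int)(seed % 100) - 50;  // toy evaluation
            tree[base + c].visits = 0;
        }

        // 3. Backup: propagate a child score toward the root
        //    (the negamax sign flip is omitted in this toy version).
        int node  = current;
        int score = tree[base].score;
        while (node >= 0) {
            atomicMax(&tree[node].score, score);
            atomicAdd(&tree[node].visits, 1);
            node = tree[node].parent;
        }
    }
}

int main()
{
    Node* dTree;
    cudaMalloc(&dTree, MAX_NODES * sizeof(Node));
    cudaMemset(dTree, 0, MAX_NODES * sizeof(Node)); // unwritten slots read as root children
    Node root = { -1, 0, 0 };                       // root terminates the backup walk
    cudaMemcpy(dTree, &root, sizeof(Node), cudaMemcpyHostToDevice);
    int one = 1;
    cudaMemcpyToSymbol(gNodeCount, &one, sizeof(int));

    // Thousands of threads, each repeatedly selecting, expanding, backing up one node.
    bestFirstStep<<<64, 128>>>(dTree, 16);
    cudaDeviceSynchronize();
    cudaFree(dTree);
    return 0;
}

Each such thread gets through far fewer iterations per second than a CPU core would, which is the low nodes-per-second figure mentioned above.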

So, the question which remains open is how to run thousands to millions of threads, all performing the same computation, in a parallel game tree search.

I still do not know. As I already mentioned in an earlier post, I tried a LIFO-stack-based parallel search: take one position from a global stack, compute the children, put the children back onto the global stack. But I could not implement AlphaBeta pruning effectively; for that, some kind of linked list as data structure is needed, and my mind was not able to wrap around this in a SIMD-friendly manner...but I guess such an approach could be one way to go to utilize a GPU for chess.
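A sketch of that stack-based expansion, with a toy node type (just a depth counter instead of a real position). To keep the sketch free of pop/push races I use two buffers and let the host loop over kernel launches, one ply per launch, which is a simplification of a single global LIFO stack; and there is no pruning at all, which is exactly the hard part:

#include <cstdio>
#include <cuda_runtime.h>

#define CAP       (1 << 22)
#define BRANCHING 4
#define MAX_DEPTH 8

struct Item { int depth; };     // stand-in for a chess position

// Each thread takes one item from the input buffer, "generates" its children
// and pushes them onto the output buffer via an atomic counter.
__global__ void expand(const Item* in, int nIn, Item* out, int* nOut)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nIn) return;

    Item item = in[i];
    if (item.depth >= MAX_DEPTH) return;    // leaf, nothing to push

    int base = atomicAdd(nOut, BRANCHING);  // reserve slots for the children
    if (base + BRANCHING > CAP) return;     // buffer full, drop (sketch only)
    for (int c = 0; c < BRANCHING; ++c)
        out[base + c].depth = item.depth + 1;
}

int main()
{
    Item *dA, *dB;
    int* dCount;
    cudaMalloc(&dA, CAP * sizeof(Item));
    cudaMalloc(&dB, CAP * sizeof(Item));
    cudaMalloc(&dCount, sizeof(int));

    Item root = { 0 };
    cudaMemcpy(dA, &root, sizeof(Item), cudaMemcpyHostToDevice);
    int n = 1;

    while (n > 0) {                         // host loop: one ply per launch
        cudaMemset(dCount, 0, sizeof(int));
        expand<<<(n + 255) / 256, 256>>>(dA, n, dB, dCount);
        cudaMemcpy(&n, dCount, sizeof(int), cudaMemcpyDeviceToHost);
        Item* tmp = dA; dA = dB; dB = tmp;  // swap input and output buffers
        printf("next level: %d positions\n", n);
    }

    cudaFree(dA); cudaFree(dB); cudaFree(dCount);
    return 0;
}

What is missing is any way to pass alpha/beta bounds between a parent and its children once they are spread across the buffers, which is where some kind of linked node structure would have to come in.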

Neural Networks on GPU

Currently there is much going on with neural networks for chess. With Giraffe, AlphaZero, and its open-source adaptation LC0 (Leela Chess Zero), it was shown that, with enough horsepower, artificial neural networks are competitive in computer chess.

Currently LC0 uses an MCTS (Monte-Carlo Tree Search) approach with the GPU as a neural network accelerator for position evaluation.

My own experiments showed that AlphaBeta search is superior to MCTS, but current GPU architectures suffer from host-device latency, so you have to couple tasks into batches to be executed in one run on the GPU, which does not conform well with the serial nature of AlphaBeta.
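A sketch of what that batching looks like in practice: instead of evaluating one leaf per host-device round trip, the search buffers many leaf positions and ships them to the GPU in one call. The "network" below is just a toy dot product standing in for the real ANN inference, and the feature layout and function names are made up for illustration:

#include <vector>
#include <cuda_runtime.h>

#define FEATURES 64             // toy input size, one value per square

// Toy stand-in for the network forward pass: one thread per position.
__global__ void evalBatch(const float* inputs, float* values, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float v = 0.0f;
    for (int f = 0; f < FEATURES; ++f)
        v += inputs[i * FEATURES + f] * 0.01f;  // fake "weights"
    values[i] = v;
}

// The search side: buffer leaves, then evaluate them all in one GPU call.
std::vector<float> evaluateLeaves(const std::vector<float>& batch)
{
    int n = (int)(batch.size() / FEATURES);
    float *dIn, *dOut;
    cudaMalloc(&dIn,  batch.size() * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemcpy(dIn, batch.data(), batch.size() * sizeof(float),
               cudaMemcpyHostToDevice);

    // One launch amortizes the host-device latency over the whole batch.
    evalBatch<<<(n + 255) / 256, 256>>>(dIn, dOut, n);

    std::vector<float> values(n);
    cudaMemcpy(values.data(), dOut, n * sizeof(float),
               cudaMemcpyDeviceToHost);
    cudaFree(dIn); cudaFree(dOut);
    return values;
}

int main()
{
    std::vector<float> batch(256 * FEATURES, 1.0f);   // 256 buffered leaves
    std::vector<float> values = evaluateLeaves(batch);
    return (int)values.size() == 256 ? 0 : 1;
}

The larger the batch, the better the latency is amortized, but a strictly serial AlphaBeta search cannot easily produce many independent leaves at once.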

With upcoming GPGPU architectures (or ANN accelerators) with less latency, AlphaBeta ANN engines might become possible...

Bye bye 8800 GT

Hmm,
my GPU workstation broke down, so I decommissioned my Maschina. Time to say bye bye to my workhorse from 2008, the Nvidia 8800 GT
 

Nvidia 8800 GT

 

and the Asus Crosshair Formula II

Asus Crosshair Formula II

 

and the 8 GB GeIL Black Dragon Memory

GeIL Black Dragon Memory

 

not to forget, the water-cooled 'Beast' from 2015, the AMD Fury X with 4096 cores, 8 TFLOPS and 4 GB HBM

The Beast - AMD Fury X

...we had a lot of fun, may you find a new, worthy owner on eBay.
 
