Zeta Chess

Zeta with NNUE on GPU?

I think it is possible to add the new neural network technique 'NNUE' to Zeta for upcoming GPU architectures like Nvidia Lovelace, Intel Xe and AMD RDNA3, which will probably all support INT8 (8-bit integer) math with higher throughput and maybe have some 10 to 20 MB of L3 cache per SIMD unit for the network weights file.

With INT8-optimized datatypes and instructions, one could build a vectorized 8-bit 0x88 move generator which operates over the 8 directions as a vector and, with the 32 parallel gpu threads of one SIMD unit, handles all pieces at once. Maybe reaching 1 to 2 million nodes per second per SIMD unit in a Zeta-like engine framework.
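
To make this a bit more concrete, here a minimal single-threaded C sketch of an 8-bit 0x88 ray walk over the 8 directions; board layout, names and the reduced slider handling are just my illustration, not actual Zeta code. On the GPU, the 8 directions (or the pieces) would map to SIMD lanes or to packed INT8 lanes instead of the outer loops.

/* 0x88 board: index = rank*16 + file, the upper half of each rank is padding */
#include <stdint.h>
#include <stdio.h>

enum { EMPTY = 0, WHITE = 1, BLACK = 2 };

static uint8_t board[128];

/* the 8 ray directions as signed 8-bit offsets */
static const int8_t dirs[8] = { 1, -1, 16, -16, 15, 17, -15, -17 };

/* collect target squares for a sliding piece of 'color' on square 'from' */
static int gen_slider(int from, int color, uint8_t targets[32])
{
  int n = 0;
  for (int d = 0; d < 8; d++) {        /* on GPU: one SIMD lane per direction */
    int to = from + dirs[d];
    while (!(to & 0x88)) {             /* 0x88 test: still on the board? */
      if (board[to] == color) break;   /* own piece blocks the ray */
      targets[n++] = (uint8_t)to;
      if (board[to] != EMPTY) break;   /* capture ends the ray */
      to += dirs[d];
    }
  }
  return n;
}

int main(void)
{
  uint8_t targets[32];
  board[0x33] = WHITE;                 /* a lone queen-like slider on d4 */
  printf("%d target squares\n", gen_slider(0x33, WHITE, targets));
  return 0;
}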

With 32 SIMD gpu threads performing 32xFP32 or 32x2xFP16 operations per clock, the NNUE inference performance could be 2 to 4 times faster than current NNUE on CPUs with AVX2 (roughly estimated), considering a switch from integer to float weights.

Volta/Turing/Ampere currently have 16 cores per FP SIMD unit and support doubled throughput for FP16 operations; I guess Nvidia will move back to a 32-core-per-SIMD design with unified INT/FP16 cores for Lovelace. RDNA has 32 cores per SIMD, also with doubled throughput for FP16. Intel seems to use SIMD8 with 8 FP cores for its Xe GPU (with support for higher throughput at lower precision); maybe Intel will also add some kind of SIMD32 to couple 4 EUs into one compute unit.

So...

  • up to 2 Mnps per SIMD unit possible
  • up to 4x faster inference for NNUE possible
  • up to 160 parallel workers (SIMD units) on current high-end gpus

Again, just some rough numbers, estimated with a big grain of salt and the like...

If all of the above holds, then you get a hell of an NNUE monster on high-end gpus.

Zeta v099 already has a simple AB framework implemented, with ABDADA or, as an option, RMO Lazy SMP parallel search across SIMD units, hence the main part would then be to implement all those funny search extensions and tricks Stockfish does in an iterative way in Zeta for GPU - a full-time job ;)
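
For reference, a rough sketch of what 'iterative' means here, since OpenCL C kernels do not support recursion: the search keeps an explicit stack of frames instead of recursive calls. Below a minimal single-threaded C sketch with a toy 'position' and none of the Stockfish-style extensions - just the bare iterative negamax AlphaBeta loop; all names and the toy game are assumptions for illustration, not Zeta's kernel code.

#include <stdio.h>

#define MAX_PLY 8
#define INF 100000

/* toy "position": just a seed so leaves get deterministic pseudo values */
typedef struct { unsigned seed; } Pos;

static int n_moves(const Pos *p) { (void)p; return 3; }   /* toy branching factor */
static Pos do_move(const Pos *p, int m) { Pos c = { p->seed * 31u + (unsigned)m + 1u }; return c; }
static int evaluate(const Pos *p) { return (int)(p->seed % 201) - 100; }

/* one entry of the explicit search stack, replacing a recursive call frame */
typedef struct {
  Pos pos;
  int depth;
  int alpha, beta;   /* search window of this node */
  int best;          /* best score found so far (fail-soft) */
  int move, moves;   /* next move to try, number of moves */
} Frame;

static int search_iterative(Pos root, int depth)
{
  Frame stk[MAX_PLY + 1];
  int sp = 0;

  stk[0] = (Frame){ root, depth, -INF, INF, -INF, 0, n_moves(&root) };

  for (;;) {
    Frame *f = &stk[sp];

    /* leaf, all moves searched, or beta cutoff: hand the value up to the parent */
    if (f->depth == 0 || f->move >= f->moves || f->alpha >= f->beta) {
      int score = (f->depth == 0) ? evaluate(&f->pos) : f->best;
      if (sp == 0)
        return score;
      sp--;                                    /* "return" from the child frame */
      Frame *p = &stk[sp];
      score = -score;                          /* negamax: flip sign for the parent */
      if (score > p->best)  p->best  = score;
      if (score > p->alpha) p->alpha = score;  /* raise alpha, enables cutoffs */
      continue;
    }

    /* descend: push a child frame for the next move */
    Pos child = do_move(&f->pos, f->move++);
    sp++;
    stk[sp] = (Frame){ child, f->depth - 1, -f->beta, -f->alpha, -INF, 0, n_moves(&child) };
  }
}

int main(void)
{
  Pos root = { 1u };
  printf("score %d\n", search_iterative(root, 4));
  return 0;
}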

Followup:

I wrote 10 to 20 MB of L3 cache per SIMD unit, assuming the whole net should fit in cache. I doubt that this is common practice with NNUE on CPU; maybe the first layer with most of the weights resides in RAM for the incremental updates, and only the further layers get cached? Dunno.
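
For context, a minimal C sketch of what such an incremental first-layer update looks like; only the changed input features touch the big weight block, the small further layers are then recomputed from scratch per evaluation. Sizes and names are assumptions for illustration, not the actual NNUE layout.

#include <stdint.h>

#define N_FEATURES 41024   /* assumed HalfKP-like input feature count */
#define ACC_SIZE   256     /* assumed accumulator (first layer output) width */

static int16_t first_layer_w[N_FEATURES][ACC_SIZE];  /* the big weight block, ~20 MB */
static int16_t accumulator[ACC_SIZE];                /* per-position first-layer state */

/* a piece moved: subtract its old feature column, add the new one */
static void update_accumulator(int removed_feature, int added_feature)
{
  for (int i = 0; i < ACC_SIZE; i++)
    accumulator[i] += first_layer_w[added_feature][i]
                    - first_layer_w[removed_feature][i];
}

int main(void)
{
  update_accumulator(123, 456);   /* hypothetical feature indices for one quiet move */
  return 0;
}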

2021-04-12 Followup:

  • I mixed up the NNUE first-layer INT16 and further INT8 weights, so the possible 4x inference speedup holds only if we assume 8-bit packed vector math on gpu.
  • I was not able to work out an efficient 8-bit 0x88 vector-based board representation on pen and paper, hence no 8-bit speedup for move generation in sight.
  • Even if I keep the current v099 bitboard design, a switch to 32 gpu threads as piece-wise workers may pay off, with certain architecture improvements of AMD's RDNA and increasing gpu clocks in mind.

Zeta - v099 revisited II

If I wish to keep the v099 design of Zeta, with classic parallel AlphaBeta, how could I improve the nps throughput per worker further?

The current board representation is bitboard based, 64 bit, which makes it easier to parallelize across the SIMD unit of a GPU. Current GPUs are 32-bit machines, and upcoming GPUs will probably support INT8 (8-bit integer) math with higher throughput, so you can do four INT8 operations per cycle instead of one 32-bit operation. Further, I used the most simple parallelization of bitboards for SIMD during move generation and evaluation, square-wise, so the engine runs the same code 64 times, once per square. Current GPU architectures (Turing/RDNA) have 32 cores per SIMD unit, so I need to run the square-wise code over two waves on the SIMD unit.
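
To illustrate the square-wise scheme, here a minimal single-threaded C sketch; on the GPU the loop over the 64 squares maps to 64 gpu threads (two waves of 32 on Turing/RDNA) all running the same code. The names and the reduced move generator (north direction only, Dumb7Fill style) are just for illustration, not Zeta's actual kernel.

#include <stdint.h>
#include <stdio.h>

/* Dumb7Fill: flood the piece bit northwards through empty squares,
   then shift once more so the first blocker is included as a target */
static uint64_t north_attacks(uint64_t piece, uint64_t empty)
{
  uint64_t flood = piece;
  for (int i = 0; i < 7; i++) {
    piece = (piece << 8) & empty;
    flood |= piece;
  }
  return flood << 8;
}

int main(void)
{
  uint64_t own_rooks = 1ULL << 0;                 /* a rook on a1 */
  uint64_t occupied  = own_rooks | (1ULL << 32);  /* plus a blocker on a5 */
  uint64_t empty     = ~occupied;

  for (int sq = 0; sq < 64; sq++) {               /* on GPU: one thread per square */
    uint64_t piece = own_rooks & (1ULL << sq);
    if (!piece)
      continue;                                   /* thread idles, its square is empty */
    uint64_t targets = north_attacks(piece, empty);
    printf("square %d attacks 0x%016llx\n", sq, (unsigned long long)targets);
  }
  return 0;
}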

If I change to some kind of vectorized 8-bit move generation and evaluation to use INT8-optimized math, I could achieve a ~8x speedup, since 64-bit operations need multiple cycles on 32-bit hardware while packed INT8 math does four operations per lane and cycle. If I further switch from a square-wise parallelization to some kind of piece-wise, or better direction-wise, parallelization, I could achieve at least a further 2x speedup.

Of course, these are numbers on paper, in practice there is always a trade-off, and one has to consider Amdahl's law, but it seems to me that this could be one way to go.

Zeta - v099 revisited

It works, Zeta v099 plays decent chess with a classic parallel AlphaBeta approach, and I am convinced that with some further work it could reach more than 3000 CCRL Elo on a high-end gpu.

But the obvious thing is that it lacks nps throughput per worker, the single-thread performance is too low, and even with a better parallel search there is not much to gain on massively parallel systems with more than 128 workers.

So to be able to beat the top 10 chess engines out there, the nps throughput per worker must be increased ten- or twentyfold...

During early development I tried a design based on a LIFO-Stack parallel search. It had the best nps throughput of all my designs, but I was not able to implement AlphaBeta pruning efficiently, so the speed gain was lost again during pruning.

If I had to start over and make another Zeta version, I would try the LIFO-Stack based parallel search again...
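
For illustration, a minimal single-threaded C sketch of the LIFO-stack idea: nodes go onto a stack, workers pop the most recent one and push its children. On the GPU the stack index would be an atomic counter shared by the threads of one Compute Unit; the toy node struct and branching factor below are just assumptions, and the AlphaBeta pruning part - the hard bit, as said - is left out.

#include <stdio.h>
#include <stdlib.h>

#define MAX_STACK 4096

typedef struct {
  int depth;      /* remaining search depth */
  int position;   /* stand-in for a real board state */
} Node;

static Node stack[MAX_STACK];
static int top = 0;   /* on GPU: an atomic index in local memory */

static void push(Node n) { if (top < MAX_STACK) stack[top++] = n; }
static int  pop(Node *n) { if (top == 0) return 0; *n = stack[--top]; return 1; }

int main(void)
{
  long visited = 0;
  push((Node){ .depth = 4, .position = 0 });   /* root node */

  Node n;
  while (pop(&n)) {                            /* each gpu thread would run this loop */
    visited++;
    if (n.depth == 0)
      continue;                                /* leaf: evaluate and move on */
    int children = 2 + rand() % 3;             /* toy branching factor */
    for (int i = 0; i < children; i++)
      push((Node){ .depth = n.depth - 1, .position = n.position * 8 + i });
  }
  printf("visited %ld nodes\n", visited);
  return 0;
}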

Zeta v099m

Zeta v099m released as source and Linux/Windows 64 bit binary:

https://github.com/smatovic/Zeta/releases

Alternative downloads:

https://zeta-chess.app26.de/downloads/

Please consider the README file or --help option before running the engine.

From the changelog:

Zeta (099m) alpha; urgency=medium

* patch for ABDADA parallel search
* disabled RMO parallel search
* removed max device memory limitation
* mods in time control
* cleanups
*
* Zeta 099m on Nvidia V100, 160 workers, ~ 13.5 Mnps
* Zeta 099m on Nvidia V100, 1 worker, ~ 85 Knps

-- Srdja Matovic 13 Jul 2019

Here are some nps and search scaling results...

################################################################################
# Zeta 099m, startposition, depth 12, best of 4 runs, Nvidia V100:
# tt1: 2048 MB, tt2: 1536 MB, ttd = time to depth
#
### workers #nps          #nps speedup   #time in s   #ttd speedup   #relative ttd
### 1       86827         1.000000       156.586000   1.000000       1.000000 
### 2       180282        2.076336       55.749000    2.808768       2.808768 
### 4       356910        4.110588       35.564000    4.402936       1.567568 
### 8       704741        8.116611       19.637000    7.974029       1.811071 
### 16      1385758       15.959989      14.583000    10.737571      1.346568 
### 32      2786039       32.087242      11.124000    14.076411      1.310949 
### 64      5460849       62.893443      8.838000     17.717357      1.258656 
### 128     10235993      117.889516     7.377000     21.226244      1.198048 
### 160     11639290      134.051505     7.202000     21.742016      1.024299 

Zeta v099l

Zeta v099k did not scale well on Nvidia Pascal and Turing gpus, so I wrote a patch to fix this issue, and released Zeta v099l:

https://github.com/smatovic/Zeta/releases

On Pascal it now runs 4 workers per Compute Unit and on Turing 2 workers per Compute Unit during guessconfigx.

According to Nvidia papers, Turing should have 16-wide SIMD units, with four units per Compute Unit, but from my tests I can only speculate that the integer units are 32-wide, not 16, with two of them per Compute Unit.

During benchmarks on other systems it showed again that some Windows systems have an OS gpu timeout, so you may want to apply this registry update on your Windows machine:

https://zeta-chess.app26.de/downloads/SetWindowsGPUTimeoutTo20s.reg

Download, double-click and reboot OS to increase gpu timeout from 2 to 20 seconds.

If you want to run an SMP benchmark for your gpu, I suggest increasing the gpu timeout to 400 seconds:

https://zeta-chess.app26.de/downloads/SetWindowsGPUTimeoutTo400s.reg

Zeta - Source Code and Binaries online

I fixed some issues in Zeta Dva and Zeta, source code and binaries are online again:

https://github.com/smatovic/ZetaDva/releases

https://github.com/smatovic/Zeta/releases

Please consider the README file or --help option before running the Zeta engine on GPU.

I lost the source of Zeta Vintage, and an attempt to do a rewrite in C showed again that the 6502 processor should really be programmed in assembly, so a rewrite in 6502 assembly is still on my bucket list...

https://github.com/smatovic/ZetaVintage

Alternative downloads:

https://zeta-chess.app26.de/downloads/

 

Zeta v099

I finished my current run on Zeta v099, my experimental gpu chess engine.

https://github.com/smatovic/Zeta

The actual conclusion of the current iteration is that a simple engine, with standard chess programming techniques, can be ported to OpenCL to run on a gpu, but it would take more effort to make the engine competitive in terms of computed nodes per second (speed), heuristics (expert knowledge), and scaling (parallel search algorithm).

Computer Chess, as a computer science topic, evolved over decades, starting in the 40s and 50s, and reached one peak in 1997 with the match Deep Blue vs. Kasparov. Nowadays chess engines are tuned by playing thousands and thousands of games, so getting a chess-playing engine running on the gpu and getting a competitive chess-playing engine running on the gpu are two different tasks.

Zeta - Milestones

Here is an overview of what happened before...

Zeta (099m)

  • patch for ABDADA parallel search
  • disabled RMO parallel search
  • removed max device memory limitation
  • mods in time control
  • cleanups
  • Zeta 099m on Nvidia V100, 160 workers, ~ 13.5 Mnps
  • Zeta 099m on Nvidia V100, 1 worker, ~ 85 Knps

-- Srdja Matovic 13 Jul 2019

Zeta (099l)

  • patch for parallel search scaling
  • max device memory increased from 1 GB to 16 GB

-- Srdja Matovic Jun 2019

Zeta (099h to 099k)

  • fixes n cleanups
  • switch from Lazy SMP to ABDADA parallel search
  • added IID - Internal Iterative Deepening
  • one cl file for all gpu generations with inlined optimizations
  • Zeta 099k on AMD Radeon R9 Fury X, 256 workers, ~ 7.6 Mnps
  • Zeta 099k on Nvidia GeForce GTX 750, 16 workers, ~ 800 Knps
  • Zeta 099k on AMD Radeon HD 7750, 32 workers, ~ 700 Knps
  • Zeta 099k on Nvidia GeForce 8800 GT, 14 workers, ~ 110 Knps

-- Srdja Matovic 2018

Zeta (099b to 099g)

  • switch from KoggeStone based move generation to Dumb7Fill
  • added atomic features for different gpu generations

-- Srdja Matovic 2017

Zeta (099a)

  • switch from best first minimax search to parallel alphabeta (lazy smp)
  • ported all (except IID) search techniques from Zeta Dva v0305 to OpenCL
  • ported the evaluation function of Zeta Dva v0305 to OpenCL
  • vectorized and generalized 64 bit Kogge-Stone move generator
  • 64 threads are now coupled to one worker, performing move generation,
    move picking and evaluation, square-wise, in parallel on the same node
  • portability over performance, should run on the very first gpus with
    OpenCL 1.x support (>= 2008)

-- Srdja Matovic 2017

Zeta (098d to 098g)

  • mostly cleanup and fixes
  • restored simple heuristics from Zeta Dva (~2000 Elo on CCRL) engine
  • protocol fixes
  • fixed autoconfig for AMD gpus
  • switched to KoggeStone based move generator
  • switched to rotate left based Zobrist hashes
  • switched to move picker
  • switched to GPL >= 2
  • Zeta 098e on Nvidia GeForce GTX 580, ca. 6 Mnps, est. 1800 Elo on CCRL
  • Zeta 098e on AMD Radeon HD 7750, ca. 1 Mnps
  • Zeta 098e on AMD Phenom X4, ca. 1 Mnps
  • Zeta 098e on Nvidia GeForce 8800 GT, ca. 500 Knps

-- Srdja Matovic 2016


Zeta (098a to 098c)

  • Improved heuristics, partly ported from the Stockfish chess engine
  • AutoConfig for OpenCL devices
  • Parameter tuning
  • Zeta 098c on Nvidia GeForce GTX 480, ca. 5 Mnps, est. 2000 Elo on CCRL
  • Zeta 098c on AMD Radeon R9 290, ca. 3.2 Mnps

-- Srdja Matovic Aug 2013

Zeta (097a to 097z)

  • Implementation of a BestFirstMiniMax search algorithm with UCT parameters for parallelization
  • Zeta 097x on Nvidia GeForce GTX 480, ca. 5 Mnps, est. 1800 Elo on CCRL
  • Zeta 097x on AMD Radeon HD 7750, ca. 800 Knps

-- Srdja Matovic Jan 2013

Zeta (0930 to 0960)

  • Tested Monte Carlo Tree Search without UCT across multiple Compute Units of the GPU
  • Tested LIFO-Stack based load balancing for AlphaBeta search on one Compute Unit of the GPU
  • Tested the 'Nagging' and 'Spam' parallelization approach for AlphaBeta search on one Compute Unit of the GPU
  • Tested 'RBFMS', Randomized BestFirstMiniMax Search, a parallelized version of BestFirstMiniMax, across multiple Compute Units of the GPU

-- Srdja Matovic 2012

Zeta (0915 to 0918)

  • 64 bit Magic Bitboard Move Generator running
  • AlphaBeta search algorithm with 'SPPS'-parallelization running 128 threads on one Compute Unit of the GPU

-- Srdja Matovic 2011

Zeta (0900 to 0910)

  • Tested 32 bit 0x88 and 64 bit Magic Bitboard Move Generator
  • Ported Heuristics, the Evaluation Function, from CPU engine 'ZetaDva' (~2000 Elo) to OpenCL

-- Srdja Matovic 2010

 

* updated on 2019-07-13 *

Zeta - Source Code

Zeta and Zeta Dva support only some basic Xboard protocol commands, and some users have reported problems with the configuration and interface of the last Zeta versions. So I will publish the source code again when these parts are designed to be more user friendly and tested with Windows chess GUIs like Winboard or Arena.
