uct on gpu

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: uct for chess - move gen speedup by vector datatypes

Post by diep »

Much depends upon which card you use of course. Take the teslas i have,
and suppose i'd use 4 of them for this type of SMP search and got chess to work.

The cards are 215 watt each. Add some psu overhead, but you can
get efficient psu's.

In terms of instructions you can execute, and now i'm not assuming vectors at all but just a generic program, some units use hashtables (the full 6GB on each card) and others use only the local shared RAM (which is 64KB) for the last few plies.

How fast is it possible to calculate then in theory?

Let's first estimate how many instructions per second we can push through.
Again this is not a gflop vector calculation, just for chess.

448 cores * 4 gpu's * 1.15GHz = 2060 G instructions per second

Most chess programs have an IPC of around 1.5 on Nehalem/i7.
Diep is around 1.72 there currently and might go up a tad.
That's in 32 bits of course; at 64 bits it's just over 1.5 as well.

Now getting a high IPC on today's gpu's is quite possible, but you'll have quite some overhead, as in each node you will need to do a full evaluation of course.

It's true that you can avoid that using APHID, but the algorithmic efficiency of APHID is so bad that it's not even worth trying. With today's efficient deep searchers you'll get to efficiencies like 1% or probably less with APHID.

The mathematical problem behind that can be shown, but that would go too far for a few messages here.

Now i assume the evaluation is tiny and simple for now; my actual plan was of course to write a HUGE evaluation function, which is quite possible on gpu's (sure, there is an L1i problem then at each SIMD, but that's another topic).

Let's look at the potential first, ok?

You have overhead that you don't have on cpu's. For a tiny program you might be able to limit that overhead; i did calculations on that and got to a factor 5.

IPC then is around 80% with some effort, which makes it 0.8;
of course that's still nearly 2x worse than CPU's, which makes sense.

So effectively available for the searchspeed then is:

2060 GIPS * 0.8 / 5 = 329 GIPS

Again this is a very CONSERVATIVE estimate. It's really a lower bound on what you can effectively throw into battle.

Now let's look at a six-core i7.

3.46GHz * 6 cores * IPC 1.5 = 31.14 GIPS

So with the gpu's we're then a factor 10.6 faster.
Again a very conservative estimate; probably a good programmer
will not lose a factor 5, which is based upon having a huge eval and L1i problems at the GPU.

Sure, there are 4-socket machines, even 8-socket machines for intel, but they're $200k, so not so realistic to use.

Now making that 3-layer SMP work is complicated of course, but
there is good news for the gpu's.

The last layer of SMP has something that no CPU has: lightning fast communication to a shared local RAM.

You can't do this that fast at a CPU.

The cache runs at full speed, you know. Sure it's tiny, but you don't need too much anyway. So those 32 cores can search there very effectively at Nvidia.

AMD's cache is a tad slower, half the size, and shared by 64 PE's, but it's the same principle there as well. Note i'm not sure how to get the same GIPS number for AMD, as each SIMD basically is 16 compute cores and you need to use vectorized datatypes for the compute cores, which in computer chess is a bad idea. It works easier on Nvidia for the chess.

So what i do in Diep (use a hashtable within the qsearch) is peanuts to do at Nvidia in each SIMD/compute unit.
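To make that concrete, a minimal sketch in CUDA of such a per-SIMD table (sizes and names are just illustrative, this is not Diep's actual code):

// Minimal sketch of an always-replace transposition table for the qsearch,
// one table per thread block (per SIMD), kept in on-chip shared memory.
struct TTEntry { unsigned int key; short depth; short score; };   // 8 bytes

#define TT_ENTRIES 1024        // 1024 * 8 bytes = 8 KB of shared memory

__device__ int tt_probe(TTEntry *tt, unsigned long long hash, int depth, int *score)
{
    TTEntry e = tt[hash & (TT_ENTRIES - 1)];
    if (e.key == (unsigned int)(hash >> 32) && e.depth >= depth) {
        *score = e.score;                       // hit: reuse the stored score
        return 1;
    }
    return 0;
}

__device__ void tt_store(TTEntry *tt, unsigned long long hash, int depth, int score)
{
    TTEntry e;
    e.key   = (unsigned int)(hash >> 32);
    e.depth = (short)depth;
    e.score = (short)score;
    tt[hash & (TT_ENTRIES - 1)] = e;            // always replace, no locking needed per block
}

__global__ void qsearch_kernel(const unsigned long long *hashes, int *scores, int n)
{
    __shared__ TTEntry tt[TT_ENTRIES];          // lives on chip, shared by the block's threads

    // clear the table cooperatively before use
    for (int i = threadIdx.x; i < TT_ENTRIES; i += blockDim.x) {
        tt[i].key = 0; tt[i].depth = -1; tt[i].score = 0;
    }
    __syncthreads();

    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n) {
        int score;
        if (!tt_probe(tt, hashes[tid], 0, &score)) {
            score = 0;                          // placeholder for the real qsearch/evaluation
            tt_store(tt, hashes[tid], 0, score);
        }
        scores[tid] = score;
    }
}

The probe and the store happen purely in shared memory, so for those last plies there is no trip to the device RAM at all.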

And we already did a factor 5 reduction, which is a huge reduction, so this is not a 'good weather' calculation. A factor 5 penalty is *so huge*.

That means per node we can easily compare to CPU's. So let's count roughly 1000 instructions per node then.

329 GIPS / 1000 = 329 million nps

That's not so difficult to get. Getting the SMP efficient is the only problem. A great task for me, but unpaid it's not so interesting for now.
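For who wants to redo the arithmetic, here it is written out (same assumptions as above, nothing measured):

#include <stdio.h>

int main(void)
{
    /* same assumptions as in the text above: 4 teslas, 448 cores @ 1.15GHz, */
    /* IPC 0.8, a factor 5 overhead, roughly 1000 instructions per node      */
    double gpu_raw    = 448 * 4 * 1.15;           /* ~2060 G instructions/s  */
    double gpu_usable = gpu_raw * 0.8 / 5.0;      /* ~330 GIPS effectively   */
    double cpu_gips   = 3.46 * 6 * 1.5;           /* six-core i7: ~31 GIPS   */
    double mnps       = gpu_usable * 1e9 / 1000.0 / 1e6;  /* million nodes/s */

    printf("gpu usable : %6.1f GIPS\n", gpu_usable);         /* 329.7 */
    printf("cpu        : %6.1f GIPS\n", cpu_gips);           /*  31.1 */
    printf("gpu vs cpu : %6.1f x\n", gpu_usable / cpu_gips); /*  10.6 */
    printf("nps        : %6.1f million\n", mnps);            /* 329.7 */
    return 0;
}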

Vincent

p.s. now i got sponsored by Nvidia of course, but even then it's easy to prove that Nvidia's are the only cards to get a chess program going; simply put, you can run DIFFERENT instruction streams at each SIMD. At AMD that is not possible: all SIMD's (as far as you can steer them at AMD) basically execute the same instruction at the same time, despite 100 promises to change that. I don't see it happening any time soon at AMD; even simple requests they don't manage there so far.

This 329 million nps is not a virtual number. Realize i reduced by a factor 5, which is a HUGE overhead; if you get it more efficient you're over a billion nodes per second.

Also, no need to razor the last few plies, as you have the shared hashtable there (which at a cpu would not be possible), so you can pick up nearly all tactics there. It will be the ultimate tactical monster: not only outsearching everyone, but also picking up more tactics, whereas today's Houdinis are tactically not so strong compared to Hiarcs and Diep.
smatovic
Posts: 2642
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: uct for chess - move gen speedup by vector datatypes

Post by smatovic »

diep wrote:
p.s. now i got sponsored by Nvidia of course,
cool.

Are you going to use OpenCL or Cuda?
diep wrote:
You're building a chess program on a gpu as well Srdja?
I have been working on it for 2 years in my spare time, but compared to you i am a newbie in chess programming; my blog:

http://zeta-chess.blogspot.de/
diep wrote:
Can you remind us which gpu you're using Srdja and how high it is clocked?
http://zeta-chess.blogspot.de/2012/03/n ... -7750.html

GTS250:
Cores: 128
SIMD Units: 16
GPU Clock: 738 MHz
Shader Clock: 1836 MHz
GFLOP/s SP: 705 (max. single precision)
GFLOP/s DP: 88 (max. double precision)
Memory Interface: 256 bit
Memory Clock: 2200 MHz (GDDR3)
Memory Bandwidth: 70 GB/s
private memory/SIMD Unit: 32 KB
local memory/SIMD Unit: 16 KB
max warps/SIMD Unit: 24


My move generator is still under construction (no en passant or castling moves) and i am not sure which search algorithm to use... currently i have a best-first search running.

What do you plan for generating moves?

AFAIK 24-bit mul or mad operations have the best performance on GPUs... maybe someone should design a 24-bit based move generator.
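In CUDA that path is exposed directly as the __mul24()/__umul24() intrinsics (fast on compute 1.x parts like the GTS250; from Fermi on, the native 32-bit multiply is the fast path instead). A hypothetical use, not Zeta code:

// Index into a precomputed table using the 24-bit multiply path.
// On compute capability 1.x __mul24 maps to the fast 24-bit multiplier;
// on Fermi and later the plain 32-bit multiply is faster.
__device__ int table_index(int piece, int square)
{
    // only the low 24 bits of each operand are used by __mul24
    return __mul24(piece, 64) + square;   // the compiler can fuse this into a mad24
}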


--
Srdja
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: uct for chess - move gen speedup by vector datatypes

Post by Daniel Shawul »

Thank you for your thoughts. I can't really comment on it now but I will give it a thought later.
The Fermi is interesting because of the 64kb L1 cache. That and other improvements may change my view of gpu search. But I still think it is not enough to have full occupancy, since you have to divide it into 48kb/16kb as shared/L1 cache. The 17% occupancy I have now on my current device is not enough even to hide the 24-cycle pipeline latency. I am going to order a device (not a Fermi though, too expensive at $2500!) that would bring that up to 30% at least.

Every thread needs its own stack (say 64 ply) of move lists int[256] and other variables to do an iterative alpha-beta search. That is what puts me off alpha-beta, even for doing a shallow q-search at the leaves. If the L1 cache could hold that stack for each thread, then each thread could do its own search. Another idea mentioned here some time ago is to use the threads for parallel move generation (Kogge-Stone), parallel evaluation etc. I don't like that idea because there isn't much to do with that, and threads will likely spend most of their time idle. But I understand that is how parallel search on cpus started, and it may well be a good alternative on gpus.
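To put numbers on that (simple arithmetic with the sizes just mentioned):

// 64 ply * 256 moves * sizeof(int) = 64 * 256 * 4 bytes = 65536 bytes = 64 KB of move stack per thread.
// A Fermi SM offers at most 48 KB of shared memory for *all* its resident threads,
// so even a single thread's stack does not fit on chip.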

What I do is really simple. Each warp (32 threads) works independently from the others (which means it can grab its own node from the tree). Each thread of the warp does its own simulation. There is no barrier except the inherent sync of threads in a warp.
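Schematically it is something like this (a stripped-down sketch with placeholder types and functions, not the actual code):

// One warp = one node of the tree, one thread = one playout.
// Node, select_leaf(), simulate() and backup() are placeholders here.
struct Node;

__device__ int  select_leaf(Node *tree, int warpid)               { return warpid; }   // placeholder
__device__ int  simulate(Node *tree, int leaf, unsigned int seed) { return seed & 1; } // placeholder
__device__ void backup(Node *tree, int leaf, int result)          { }                  // placeholder

#define THREADS_PER_BLOCK 128
#define WARPS_PER_BLOCK   (THREADS_PER_BLOCK / 32)

__global__ void uct_kernel(Node *tree, const unsigned int *seeds)
{
    int tid    = blockIdx.x * blockDim.x + threadIdx.x;
    int warpid = tid / 32;
    int lane   = threadIdx.x % 32;

    __shared__ volatile int leaf_of[WARPS_PER_BLOCK];

    if (lane == 0)                                   // one lane grabs a node for the whole warp
        leaf_of[threadIdx.x / 32] = select_leaf(tree, warpid);
    // the 32 threads of a warp run in lock-step on these gpu's, so no explicit barrier is needed

    int leaf   = leaf_of[threadIdx.x / 32];
    int result = simulate(tree, leaf, seeds[tid]);   // 32 independent playouts per warp
    backup(tree, leaf, result);                      // e.g. atomicAdd on visit/score counters
}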
diep wrote:
p.s. now i got sponsored by Nvidia of course, but even then it's easy to prove that Nvidia's are the only cards to get a chess program going; simply put, you can run DIFFERENT instruction streams at each SIMD. At AMD that is not possible: all SIMD's (as far as you can steer them at AMD) basically execute the same instruction at the same time, despite 100 promises to change that. I don't see it happening any time soon at AMD; even simple requests they don't manage there so far.
I didn't know that. Now I am glad to have NVIDIA gpu.
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: uct for chess - move gen speedup by vector datatypes

Post by diep »

smatovic wrote:
p.s. now i got sponsored by Nvidia of course,
cool.

Are you going to use OpenCL or Cuda?
You're building a chess program on a gpu as well Srdja?
I have been working on it for 2 years in my spare time, but compared to you i am a newbie in chess programming; my blog:

http://zeta-chess.blogspot.de/
Can you remind us which gpu you're using Srdja and how high it is clocked?
http://zeta-chess.blogspot.de/2012/03/n ... -7750.html

GTS250:
Cores: 128
SIMD Units: 16
GPU Clock: 738 MHz
Shader Clock: 1836 MHz
GFLOP/s SP: 705 (max. single precision)
GFLOP/s DP: 88 (max. double precision)
Memory Interface: 256 bit
Memory Clock: 2200 MHz (GDDR3)
Memory Bandwidth: 70 GB/s
private memory/SIMD Unit: 32 KB
local memory/SIMD Unit: 16 KB
max warps/SIMD Unit: 24


My move generator is still under construction (no en passant or castling moves) and i am not sure which search algorithm to use... currently i have a best-first search running.

What do you plan for generating moves?

AFAIK 24-bit mul or mad operations have the best performance on GPUs... maybe someone should design a 24-bit based move generator.


--
Srdja
You are right for the old Nvidia's. Fermi and newer have fast 32-bit multiplications as well. AMD runs behind there and won't improve: 32-bit multiplications at AMD require all 4 PE's to "cooperate", so it's virtually 8 cycles, or a 2-cycle throughput @ 4 PE's.

At Nvidia it's just 2 cycles of 1 streamcore, as it's 2 instructions as well.

So at integer multiplication Nvidia totally owns AMD, and the upcoming intel thing probably is no big deal there either.

Intel has never been very strong at multiplications, so we would need to see solid evidence there first.

I wouldn't worry about en passant and castling for now. What's important is a better search; best-first search doesn't seem very impressive in that sense.

Probably the only really big problem in gpu programming is the fact that SMP search is already so complicated on CPU's, and on a gpu it's 10 times harder, as you REALLY need good efficiency from the SMP search there.

That's however a big job to design on paper first, prove it on paper, and then implement it. Very interesting job just to write it out on paper i'd say, but unpaid i wouldn't do that.
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: uct for chess - move gen speedup by vector datatypes

Post by diep »

Daniel Shawul wrote:Thank you for your thoughts. I can't really comment on it now but I will give it a thought later.
The Fermi is interesting because of the 64kb L1 cache. That and other improvements may change my view of gpu search. But I still think it is not enough to have full occupancy, since you have to divide it into 48kb/16kb as shared/L1 cache. The 17% occupancy I have now on my current device is not enough even to hide the 24-cycle pipeline latency. I am going to order a device (not a Fermi though, too expensive at $2500!) that would bring that up to 30% at least.

Every thread needs its own stack (say 64 ply) of move lists int[256] and other variables to do an iterative alpha-beta search. That is what puts me off alpha-beta, even for doing a shallow q-search at the leaves. If the L1 cache could hold that stack for each thread, then each thread could do its own search. Another idea mentioned here some time ago is to use the threads for parallel move generation (Kogge-Stone), parallel evaluation etc. I don't like that idea because there isn't much to do with that, and threads will likely spend most of their time idle. But I understand that is how parallel search on cpus started, and it may well be a good alternative on gpus.

What I do is really simple. Each warp (32 threads) works independently from the others (which means it can grab its own node from the tree). Each thread of the warp does its own simulation. There is no barrier except the inherent sync of threads in a warp.
diep wrote:
p.s. now i got sponsored by Nvidia of course, but even then it's easy to prove that Nvidia's are the only cards to get a chess program going; simply put, you can run DIFFERENT instruction streams at each SIMD. At AMD that is not possible: all SIMD's (as far as you can steer them at AMD) basically execute the same instruction at the same time, despite 100 promises to change that. I don't see it happening any time soon at AMD; even simple requests they don't manage there so far.
I didn't know that. Now I am glad to have NVIDIA gpu.
Daniel, i disagree with you on the occupancy. You just need to be a good programmer to get to a high IPC, and also you're using a dead old GPU. Less talented programmers already get 70% IPC at Fermi and newer, same thing for AMD by the way.

It has to do with the fact that the newer gpu's alternate between 2 threads (wavefronts) quickly, so the 8 cycles you have to wait until a register's result is available then drops to 4 cycles.

That way of programming, however, the majority of programmers will never understand; it's obvious from your comments that you have some clue there.

It is, however, 8 cycles at the GPU you've got, except if you use expensive instructions; you should avoid those at all cost, i'd argue.

Just use simple instructions.

Don't use branches.
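A trivial example of what that means (not Diep code): write the select instead of the if, so the compiler can turn it into predicated instructions and the warp does not have to diverge.

// branchy version: threads of a warp that take different sides may diverge
__device__ void update_best_branchy(int score, int move, int *best, int *bestmove)
{
    if (score > *best) { *best = score; *bestmove = move; }
}

// branch-free version: compiles to select/predicated instructions, no divergence
__device__ void update_best_branchless(int score, int move, int *best, int *bestmove)
{
    int better = score > *best;
    *best      = better ? score : *best;
    *bestmove  = better ? move  : *bestmove;
}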
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: uct for chess - move gen speedup by vector datatypes

Post by diep »

So for the non-informed, what i'm trying to say is this:

Suppose we do:

A = x + y; // it now takes 8 cycles before A is available for further processing
b = A + c;

The throughput is ok, but you have to wait those 8 cycles first for A to be available.

Fermi simply launches 2x more threads, bringing it down to 4 cycles. Usually the compiler is then already capable of pretty much hiding the latency.

So that means one should write something like:

A = x + y;
f = k + i;
g = i + 3;
h = j + 2;

// A is available now after 4 cycles
b = A + c;

They are very simple cores of course.
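As a toy kernel, nothing more than the snippet above made compilable:

// The four additions are independent, so they can be issued back to back;
// by the time b = A + c issues, A's result has long been written back.
__global__ void ilp_demo(int *out, int x, int y, int c, int k, int i, int j)
{
    int A = x + y;
    int f = k + i;          // independent work that fills A's latency
    int g = i + 3;
    int h = j + 2;
    int b = A + c;          // A is available by now, no stall
    out[threadIdx.x] = b + f + g + h;   // keep everything live
}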
smatovic
Posts: 2642
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: uct for chess - move gen speedup by vector datatypes

Post by smatovic »

My measurement was wrong on this:

currently i get only 166 Knps per SIMD Unit, using a loop over the starting position.

--
Srdja
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: uct for chess - move gen speedup by vector datatypes

Post by Daniel Shawul »

Daniel, i disagree with you on the occupancy. You just need to be a good programmer to get to a high IPC, and also you're using a dead old GPU. Less talented programmers already get 70% IPC at Fermi and newer, same thing for AMD by the way.
It is not really about occupancy but rather performance. I can get to 70% occupancy by limiting register usage with --maxrregcount during nvcc compilation. It does its best to reduce register usage at the cost of putting some variables, those it judges as least often used, in local memory. There are other tricks like using 'volatile', which boils down to the same thing: sacrificing speed for occupancy. I very much doubt it is my coding that is the problem, because the nvcc compiler really does optimize register usage even if you write bad code. I don't write code that has lots of branches or very long dependency chains unless I have to. Maybe I will save some if I work hard on it, but I doubt it would be a significant improvement.
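For reference that is simply e.g.

nvcc --maxrregcount=32 -o gpusearch gpusearch.cu

(file name just illustrative); whatever does not fit in the 32 registers per thread is then spilled to local memory.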

You should understand that what makes my occupancy so low is register pressure caused by design choices I made: that each thread does its own search, and that it stores everything it needs in registers and shared mem. You talk about alpha-beta here, but it is virtually impossible to have the move stacks for each thread be on chip. I could implement YBW or something similar if that is what I wanted, but what is the point if it will be slow because everything is stored in global memory. To put it in perspective, I can't even have one array of int[256] for each thread to generate its moves and remove the illegal ones (even on Fermi). So it really baffles me to think about doing any form of alpha-beta fast without incurring the 500-cycle latency every time you pick up a move from uncached (barely cached) global memory.
A = x + y; // it takes now 8 cycles for A to be available for further processing
b = A + c;
We have a different concept of latency here. On gpus, simple instructions like add, mad and others typically take 24 cycles. This pipeline latency is hidden by having at least 192 threads (6 warps on Tesla, probably twice that on Fermi), so once you have that you need not worry about this dependency issue. It requires at least 25% occupancy. To hide global memory latency, you would need far more than that...
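The rough arithmetic behind the 192-thread figure, using pre-Fermi numbers:

// an SM has 8 scalar cores, so issuing one warp of 32 threads takes 4 cycles
// => to cover ~24 cycles of pipeline latency you need ~24 / 4 = 6 warps in flight
// => 6 warps * 32 threads = 192 threads per SM
// with a maximum of 24 resident warps per SM (see the GTS250 numbers earlier
//    in the thread) that is 6 / 24 = 25% occupancy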
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: uct for chess - move gen speedup by vector datatypes

Post by diep »

SNIP
Are you going to use OpenCL or Cuda?
I'm going OpenCL of course for my own codes, depending upon how well that works for the chess. As for the prime number codes, those are already in CUDA, so if i modify those slightly that's the logical thing to do; the CUDA prime number stuff just runs in spare time. The chess always has priority.

I don't feel that for chess codes the choice of OpenCL vs CUDA is a big one, as for the chess we don't work with huge integers. For Diep i need 20 bits max, plus some linear extrapolations in the evaluation that i've been doing since 2000 already; they require a tad more bits (40+ of course for the multiplication), but it's not like the prime numbers that really need big-number accuracy, so the majority of code in Diep won't profit much from a faster carry, to give an example. Also it's easy to port things back and forth of course, and i feel OpenCL is a good incentive there.

Portability and future prospects are a big issue for some larger organisations, which means automatically choosing Nvidia because of their long-term reliability (AMD might go bankrupt of course with that huge disaster called Bulldozer); all those issues are not so relevant for the chess.

So whether i use OpenCL there or CUDA is no big deal of course. I'm not so sure how stable OpenCL is (we already see how badly AMD supports it), but i'm always a big proponent of new technology that in the long term might be interesting.

The prime number codes are impossible to optimize better yourself; without being busy full time there for 10+ years you won't be able to optimize them much better (these guys have tested pretty well every cycle they can save), so logically that runs in CUDA.

Of course Nvidia promotes CUDA bigtime, as basically those who want the utmost performance can find that in CUDA.

Chess is simple 32-bit integer work however, something that runs fastest on Nvidia, factors faster per core than at AMD: AMD needs 4 cores to do a 32-bit multiplication and Nvidia just 1, so you can divide AMD's 32-bit integer multiplication performance by a factor 4.

Both for the prime number factorisation code i run and for the chess, Nvidia is the better choice.

SNIP
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: uct for chess - move gen speedup by vector datatypes

Post by diep »

Daniel Shawul wrote:
Daniel, i disagree with you on the occupancy. You just need to be a good programmer to get to a high IPC, and also you're using a dead old GPU. Less talented programmers already get 70% IPC at Fermi and newer, same thing for AMD by the way.
It is not really about occupancy but rather performance. I can get to 70% occupancy by limiting register usage with --maxrregcount during nvcc compilation. It does its best to reduce register usage at the cost of putting some variables, those it judges as least often used, in local memory. There are other tricks like using 'volatile', which boils down to the same thing: sacrificing speed for occupancy.

I very much doubt it is my coding that is the problem, because the nvcc compiler really does optimize register usage even if you write bad code. I don't write code that has lots of branches or very long dependency chains unless I have to.
Well, you are obviously doing something wrong; what you should have is generic code that executes while doing nearly nothing towards the local shared memory. You evaluate the entire position, execute code, execute code, you do a generic backtrack (and cores that don't need to backtrack simply don't), by then you have prefetched the entry from the local hashtable, and you backtrack again (so those cycles are wasted for cores that do not backtrack).

Everything except the hashtable is then local, the hashtable being in local shared memory. So we speak about thousands of cycles of execution that are all local code and have *nothing* to do with any reference to the local shared memory, let alone the global shared memory, let alone the device RAM.

This is what happens in 99.9% of the cases, so you can really get very close to that IPC=1.0 rate there. Note that other 'official measures' are not so interesting here, as we're not busy with multiply-add, which accounts for 50% of the paper performance (cpu's or gpu's, it does not matter), but it changes the percentages big time.
Maybe I will save some if I work hard on it, but I doubt it would be a significant improvement.

You should understand that what makes my occupancy so low is register pressure caused by design choices I made: that each thread does its own search,
Well, it should of course, and only once per node do you do a local shared-memory check there. Programming that in an efficient manner isn't easy of course; it's big work.

and that it stores everything it needs in registers and shared mem. You talk about alpha-beta here, but it is virtually impossible to have the move stacks for each thread be on chip. I could implement YBW or something similar if that is what I wanted, but what is the point if it will be slow because everything is stored in global memory.
YBW is the only algorithm you should consider of course; the other algorithms are all junk. It took me months of puzzling on paper to solve that problem for Nvidia GPU's. I'm sure there are more solutions, but yeah, it's hard work. Yet saying it's virtually impossible is not correct; i solved it on paper already around 2007-2008.

Saying it's a HUGE amount of work is correct however. Chess programming is like that :)
To put it in perspective, I can't even have one array of int[256] for each thread to generate its moves and remove the illegal ones (even on Fermi). So it really baffles me to think about doing any form of alpha-beta fast without incurring the 500-cycle latency every time you pick up a move from uncached (barely cached) global memory.
I understand your problem, but oh boy, are you far off on how to solve things.

Realize you want to preserve the branching factor that YBW gives; anywhere else you can sacrifice. And once again, i did my calculations for the generation of GPU's before Fermi, as you can see from the date, and i DID count on losing in total up to a factor 5 in overhead when implementing a huge evaluation function. What you're busy with here, however, is the 1970s basics of how to write an efficient move generator.

The move generator isn't the biggest problem to solve though. You are ALLOWED to lose something there.
A = x + y; // it takes now 8 cycles for A to be available for further processing
b = A + c;
We have a different concept of latency here. On gpus, simple instructions like add, mad and others typically take 24 cycles.
This pipeline latency is hidden by having at least 192 threads (6 warps on Tesla, probably twice that on Fermi), so once you have that you need not worry about this dependency issue. It requires at least 25% occupancy. To hide global memory latency, you would need far more than that...
http://www.cs.berkeley.edu/~volkov/volkov10-GTC.pdf

Look at the example: he's building up an instruction-level parallelism of 4 instructions prior to referencing them, which gets the optimal throughput.

He has a somewhat older writeup showing this more clearly; check around on his homepage there.
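In code the pattern on that slide boils down to issuing the independent loads first and only consuming them afterwards (my paraphrase, not Volkov's source):

// four independent loads are all in flight before any result is consumed
__global__ void copy4(float *dst, const float *src, int n)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * 4;
    if (i + 3 < n) {
        float a0 = src[i + 0];
        float a1 = src[i + 1];
        float a2 = src[i + 2];
        float a3 = src[i + 3];
        dst[i + 0] = a0;        // only now are the loaded values used
        dst[i + 1] = a1;
        dst[i + 2] = a2;
        dst[i + 3] = a3;
    }
}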