There is _always_ cache coherency, or else you could not write multi-threaded code that will execute properly. The "shadow lock" code in Crafty depends on this. It uses bus snooping to avoid the long periods where the bus (or, in newer architectures, all caches) gets locked while the interlocked exchange executes...Daniel Shawul wrote:I read somewhere that the optimization works only if there is cache coherency, and that is probably the reason why I left it as it is. When I tested once or twice with the asm spinlocks I found here http://www.cis.temple.edu/~ingargio/ci ... nsem.html they did not really perform any better than mutexes. On Windows, too, critical sections perform almost as well as spinlocks on my computer.
I am currently testing how busy the processors are during the search. What I am trying to do is split only at the root and search at large depth, to check whether I get better scaling. Theoretically I should, since I don't lock the hash table, which is the only thing that could connect the threads. If the result is still the same, it is definitely a lock-contention problem.
Thanks again.
nps scaling
Moderators: hgm, Rebel, chrisw
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: nps scaling
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: nps scaling
Basic spinlock implementations I found on the net use a simple test-and-set operation with no optimization. In the link I gave above I found this:
That optimization does not seem to be used in the section I marked in the code below. Another question: why are you using ((v)[0] = 0) instead of simply (v) = 0 in some of that code?
Those statements are the reason why I was reluctant to use it:

"A big problem of the spinlock is that by repeatedly test-and-setting a location in memory it can use a high percentage of the memory-bus bandwidth, thus reducing the utilization of other processors sharing the memory bus. This repeatedly accessed location is called a hot spot in the memory. A partial solution (in the case of cache-coherent shared memories) is to change the Lock operation as follows, which uses less of the memory bandwidth, since the TestAndSet instruction involves both a read and a write transaction on the memory, while (*L==1) involves only a read operation, and this read is normally carried out on a cached copy. [The inner while loop operates on the cached copy without affecting the shared memory. It assumes that the hardware supports some form of cache-coherence algorithm (like snooping caches) which invalidates the cached copy when the shared-memory copy is updated.]"

Code: Select all
void Lock(SpinLock *L) {
  while (TestAndSet(L))
    while (*L == 1)
      ;
}
I do not know specifically what the "shadow lock" in Crafty is, but from what I see in lock.h it looks to me like you do the same thing as I do. I am pretty sure I have misunderstood something.
Code: Select all
# if ((defined (_M_IA64) || defined (_M_AMD64)) && !defined(NT_INTEREX))
# include <windows.h>
# pragma intrinsic (_InterlockedExchange)
typedef volatile LONG lock_t[1];
# define LockInit(v) ((v)[0] = 0)
# define LockFree(v) ((v)[0] = 0)
# define Unlock(v) ((v)[0] = 0)
__forceinline void Lock(volatile LONG * hPtr)
{
  int iValue;
  for (;;) {
    iValue = _InterlockedExchange((LPLONG) hPtr, 1);
    if (0 == iValue)
      return;
    while (*hPtr);
  }
}
# else /* NT non-Alpha/Intel, without assembler Lock() */
# define lock_t volatile int
# define LockInit(v) ((v) = 0) <===========
# define LockFree(v) ((v) = 0)
# define Lock(v) do { \
while(InterlockedExchange((LPLONG)&(v),1) != 0); \ <==============
} while (0)
# define Unlock(v) ((v) = 0)
# endif /* architecture check */
# else
Yet another question: after the SearchParallel() call finishes for a thread and you are about to copy the result from the child, you take the general lock, i.e. lock_smp. Is that really necessary? I take only the parent's lock, so that children do not update their results simultaneously, and it has never caused me a problem that I don't take the general lock_smp for that purpose.
Code: Select all
value =
SearchParallel(thread[tid], thread[tid]->alpha, thread[tid]->beta,
thread[tid]->value, thread[tid]->wtm, thread[tid]->depth,
thread[tid]->ply);
Lock(lock_smp);
Lock(thread[tid]->parent->lock);
....
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: nps scaling
You are looking at the wrong lock. That is the inline asm for the DEC Alpha chip.
Here is the Intel lock:
Code: Select all
static void __inline__ LockX86(volatile int *lock) {
int dummy;
asm __volatile__(
"1: movl $1, %0" "\n\t"
" xchgl (%1), %0" "\n\t"
" testl %0, %0" "\n\t"
" jz 3f" "\n\t"
"2: pause" "\n\t"
" movl (%1), %0" "\n\t"
" testl %0, %0" "\n\t"
" jnz 2b" "\n\t"
" jmp 1b" "\n\t"
"3:" "\n\t"
:"=&q"(dummy)
:"q"(lock)
:"cc", "memory");
}
while (exchange(lock, 1)) while (lock);
The idea is that the exchange is an atomic operation, and very slow. So you try it first, if it gets a 1 (the lock is owned by another thread) then we get into the while(lock) which is just a fetch and jnz operation (not atomic). The lock gets copied into cache and we spin on the cache hit until bus snooping detects that another processor/thread has changed the lock value back to zero. At that point, the while(lock) becomes false, and we go back and try the safe atomic xchg() instruction again...
I do not know of _any_ multiprocessor architecture today that does not have coherent cache. Otherwise you could not write SMP-safe code on them...
As far as the lock goes: if you are looking at the one at the bottom of SearchParallel(), you might have overlooked the key point that it is only taken in a fail-high situation, where I need to stop other threads that are helping at this split point, or at any split point below it that is part of this search. I need to set that lock so that I can safely set stop indicators for any split block that is helping me, without the ugly case of another thread trying to set my stop flag while I am setting his, because more than one thread can fail high at the same node. Is that what you are looking at???
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: nps scaling
OK, then I take it that the optimization works except for that chip (the DEC Alpha).
No, that is not it. I quoted the relevant code in my previous post. It is after a thread runs out of work (no more moves to search at the split point): you return from SearchParallel() and call CopyFromChild(). I think that section needs to be protected with only the parent lock.
-
- Posts: 4675
- Joined: Mon Mar 13, 2006 7:43 pm
Re: TTTT: Two Tier Transposition Table
It is hard enough to predict two-tier movepath enumeration times, let alone those of a deep search. Only testing a real implementation of a TTTT over many runs will help.
----
For what it's worth, I have the latest enumeration results on a four core 2.66 GHz Xeon, and the program managed to come up with perft(10) in under three hours. The upper five ply (approximately one million unique positions and their counts) were kept in a shared locking tree; the rest were kept in per-thread local transposition tables.
Edited to add: the shared tree is accessed via pthread locking while the threads are synchronized using sleeplocks.
----
Code: Select all
Depth: 8 Count: 84998978956 Elapsed: 42.0357
2.02207e+09 Hz 4.94543e-10 seconds
Depth: 9 Count: 2439530234167 Elapsed: 617.726
3.94921e+09 Hz 2.53215e-10 seconds
Depth: 10 Count: 69352859712417 Elapsed: 9922.21
6.98966e+09 Hz 1.43069e-10 seconds
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: nps scaling
Daniel Shawul wrote: I think that section needs to be protected with only the parent lock.
Aha. No, there are other race conditions. One thread can be backing up a result at the same instant another is saying "we need to stop everyone, I've found a beta cutoff." I can't let those two things overlap, as it can wreck the copying process. Normal exits don't need the SMP lock, but I use it to protect against a race condition I found a long time back. I think I actually fixed the problem, so that lock might not be needed, but it will take a _ton_ of testing to prove it works without it, since the timing window is so small. However, since I'm not seeing any performance issues with this at the moment, I've not taken the time to continue the testing...