How common is Common Lisp?

Do you have a Common Lisp environment on your machine?

Yes, and I use it or have used it. (3 votes, 9%)
Yes, but I've never tried it. (5 votes, 15%)
No, and I'm not interested in Lisp. (23 votes, 70%)
No, but I might try Lisp if there was an upgraded CIL package. (2 votes, 6%)
No, because there is no free/cheap Lisp for my machine. (0 votes)
No, because Lisp is only for those who can't code in a real language. (0 votes)

Total votes: 33

Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: Some progress

Post by Michael Sherwin »

sje wrote:I do not deny that Scheme has its place in the academic environment. Also, it's obvious that there is a significant amount of interest in Scheme. But overall, Common Lisp is a more mature, more unified, more supported, and more general platform for symbolic programming.

Scheme can handle bitboards. But can it handle them as well as Common Lisp? CL has both integer bit-wise operations (e.g., logior, logand, logbitp) and bit-vector primitive types and operators (e.g., bit-ior, bit-xor, sbit). And there are some Common Lisp wrappers that support fancy interactive graphics, too. But I can get by with a plain xterm and a text debugger.
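
For instance, a 64-bit bitboard works directly with those integer operators (a minimal sketch; the square numbering and piece sets here are made up for illustration, not taken from CIL):

Code: Select all

;; Minimal sketch: a bitboard as a plain (unsigned-byte 64) integer.
;; Square numbering (0 = a1 ... 63 = h8) and the piece sets are illustrative only.
(defun square-bit (sq) (ash 1 sq))                ; bit for square 0..63

(defparameter *white-pawns* #x000000000000FF00)   ; rank 2
(defparameter *white-rooks* #x0000000000000081)   ; a1 and h1

(logior *white-pawns* *white-rooks*)              ; => #xFF81, union of the two sets
(logbitp 9 *white-pawns*)                         ; => T, square b2 is occupied
(logcount *white-pawns*)                          ; => 8, population count
(logand *white-pawns* (- *white-pawns*))          ; => #x100, lowest set bit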

Note: there are some commercial CL developer kits that sell for US$2,500 per single user, with a hefty annual maintenance fee. I assume there must be enough extra goodies in these kits to make the expenditure worthwhile.
Not for me!!!

Anyway, if someone, like Tord maybe, were to offer to port your code to Haskell, would you be for it? Would it be a bad thing to have it ported to a good learning environment such as PLT Scheme?

BTW, PLT Scheme and the GHC Haskell compiler are both free and downloadable. There is a version of GHC Haskell that plugs into MSVS!
smcracraft
Posts: 737
Joined: Wed Mar 08, 2006 8:08 pm
Location: Orange County California
Full name: Stuart Cracraft

Re: How common is Common Lisp?

Post by smcracraft »

CRoberson wrote:I coded in Lisp over 20 years ago and haven't touched it since.
You can do some really cool stuff with it. I rather liked the
stuff you're "not supposed to do": self-modifying code.
In class at Stanford, JMC (McCarthy) actively encouraged and gave out
assignments to write code that generated code but thankfully left out the self-modifying code. :-)

Lisp is a great language but very verbose because of all the
full atom names. Hence, programming in it takes forever compared
to other, more terse languages. The extreme examples are APL
and TECO.

At Toshiba, my boss and I wrote a complex production system in
Emacs Lisp. We could add anything we wanted.

Also, I think Lisp encourages abstract thinking.

Steve - I'm very interested in Chess in Lisp's resurrection. Go for it.

Stuart
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: Some progress

Post by sje »

In the spirit of honesty, I must mention that some Lisps may not handle bitboards as well as might be expected. This is due to the common implementation technique of reducing the number of bits available for integer representation as a side effect of tagged pointer storage.

For example, the clisp bytecode compiler Lisp I'm using restricts FIXNUM integers to only 24 bits out of 32 used for storage. The transparently substituted FIXNUM alternative, BIGNUM, allows integers with many millions of bits, but is of course much slower.
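
One quick way to check where a given implementation draws that line (a minimal sketch; the commented results are what I'd expect from 32-bit clisp, not a captured transcript):

Code: Select all

;; How many bits fit in a FIXNUM before arithmetic silently promotes
;; to (much slower) BIGNUMs?
(integer-length most-positive-fixnum)  ; e.g. 24 under 32-bit clisp

;; A 64-bit bitboard still works as an integer either way; it just
;; gets boxed as a BIGNUM once it no longer fits a FIXNUM.
(typep #xFFFF00000000FFFF 'fixnum)     ; NIL under 32-bit clisp
(logcount #xFFFF00000000FFFF)          ; => 32 regardless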

I am not sure exactly how bit vectors are stored; there are several alternatives.
smcracraft
Posts: 737
Joined: Wed Mar 08, 2006 8:08 pm
Location: Orange County California
Full name: Stuart Cracraft

Re: Some progress

Post by smcracraft »

sje wrote:In the spirit of honesty, I must mention that some Lisps may not handle bitboards as well as might be expected. This is due to the common implementation technique of reducing the number of bits available for integer representation as a side effect of tagged pointer storage.

For example, the clisp bytecode compiler Lisp I'm using restricts FIXNUM integers to only 24 bits out of 32 used for storage. The transparently substituted FIXNUM alternative, BIGNUM, allows integers with many millions of bits, but is of course much slower.

I am not sure exactly how bit vectors are stored; there are several alternatives.
Not having full 64-bit representation, either in 2x32 or 1x64, would
be a definite issue.

Possibly even a showstopper...

Who would want to write a chess program without bitmapping these days?
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Some timing data

Post by sje »

The CIL structure "pos" contains all of the information for a position, including the full attack-from and attack-to bitboard arrays. Some of the data is regenerated with each move execution, while nearly all of the bitboard data is incrementally updated, as it is much, much slower to regenerate the attack tables from scratch.

That being said, the following code builds, from scratch, the entire position structure for the initial chess array and represents a worst case with respect to timing:

Code: Select all

(create-initial-array-pos)
How long does this take?

Code: Select all

> (time (dotimes (i 1000) (create-initial-array-pos)))
Real time: 6.897717 sec.
Run time: 6.891719 sec.
Space: 12480768 Bytes
GC: 10, GC time: 0.139937 sec.
The above is with a 2.66 GHz Intel Woodcrest Xeon running GNU clisp.

That's about 7 milliseconds for each complete regeneration, and I'd guess that most of that is for storage allocation. I'd also guess that the time and storage for an incremental update is a tenth or less of a full rebuild. Also, positions with fewer men will go faster.

Furthermore, there are as yet no optimization directives; these can vastly improve speed. Also, these numbers are from a bytecode compiler; a native code compiler might produce run times ten times better.

Finally, recall that an advanced, still-to-be-written search might require only a few hundred nodes for an acceptable analysis. For such a search, the total overhead for bitboard database manipulation would be very small indeed.
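
For reference, the sort of optimization directives mentioned above might look like the following (a sketch only; the function name and types are hypothetical, not actual CIL code):

Code: Select all

;; Sketch only: a global optimization policy plus per-function type
;; declarations. The function is hypothetical, not part of CIL.
(declaim (optimize (speed 3) (safety 0) (debug 0)))

(defun merge-attacks (bb0 bb1)
  "Union of two bitboards; declaring the types lets a good native-code
compiler keep the arithmetic unboxed."
  (declare (type (unsigned-byte 64) bb0 bb1))
  (the (unsigned-byte 64) (logior bb0 bb1)))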
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: Some timing data

Post by sje »

Running a compiled version on the same box with the same bytecode interpreter:

Code: Select all

> (time (dotimes (i 1000) (create-initial-array-pos)))
Real time: 1.310729 sec.
Run time: 1.276475 sec.
Space: 7040788 Bytes
GC: 6, GC time: 0.071502 sec.
That's a factor of five speed gain. Again, there are absolutely no optimization directives in place. So there's still plenty of room for speed improvement.
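
For anyone following along, the compiled run above is the kind of thing clisp's standard file compiler produces (the file name here is only a placeholder):

Code: Select all

;; Compile the source to clisp bytecode (.fas) and load the result;
;; "cil.lisp" is a placeholder name for illustration.
> (compile-file "cil.lisp")
> (load "cil.fas")
> (time (dotimes (i 1000) (create-initial-array-pos)))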
smcracraft
Posts: 737
Joined: Wed Mar 08, 2006 8:08 pm
Location: Orange County California
Full name: Stuart Cracraft

Re: Some timing data

Post by smcracraft »

sje wrote:The CIL structure "pos" contains all of the information for a position, including the full attack-from and attack-to bitboard arrays. Some of the data is regenerated with each move execution, while nearly all of the bitboard data is incrementally updated, as it is much, much slower to regenerate the attack tables from scratch.

That being said, the following code builds, from scratch, the entire position structure for the initial chess array and represents a worst case with respect to timing:

Code: Select all

(create-initial-array-pos)
How long does this take?

Code: Select all

> (time (dotimes (i 1000) (create-initial-array-pos)))
Real time: 6.897717 sec.
Run time: 6.891719 sec.
Space: 12480768 Bytes
GC: 10, GC time: 0.139937 sec.
The above is with a 2.66 GHz Intel Woodcrest Xeon running GNU clisp.

That's about 7 milliseconds for each complete regeneration, and I'd guess that most of that is for storage allocation. I'd also guess that the time and storage for an incremental update is a tenth or less of a full rebuild. Also, positions with fewer men will go faster.

Furthermore, there are as yet no optimization directives; these can vastly improve speed. Also, these numbers are from a bytecode compiler; a native code compiler might produce run times ten times better.

Finally, recall that an advanced, still-to-be-written search might require only a few hundred nodes for an acceptable analysis. For such a search, the total overhead for bitboard database manipulation would be very small indeed.
I am sure it would produce an SM-level program on modern single
processor architecture. Whether IM or GM is another matter entirely.
My colleague and I were working on this until he got hired away by
two companies.

JMC was actively interested too.

Drat!

Stuart
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: Some timing data

Post by sje »

I've also tried some benchmarks with gcl (GNU Common Lisp, different from clisp). One problem here is that gcl seems to have a hard time compiling certain closures like:

Code: Select all

;;; Enumeration assistance using a closure

(let ((enum-index nil))
  (defun enum-init ()
    (setf enum-index 0)
    (enum-next))
  (defun enum-next ()
    (let ((index enum-index))
      (incf enum-index)
      index))
  (defun enum-limit ()
    (let ((index enum-index))
      (setf enum-index nil)
      index)))
The above produces a closure in which a persistent variable is shared by, and visible only to, the three enclosed functions. It's something like a private static variable in C++.
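
Used at the REPL, it would behave roughly like this (expected values, not a captured transcript):

Code: Select all

;; Expected behaviour (not a captured transcript):
> (enum-init)    ; reset the counter and return the first value
0
> (enum-next)
1
> (enum-next)
2
> (enum-limit)   ; return the count reached and clear the counter
3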

Anyway, with the above re-coded to be more palatable, gcl's compiler produced C source output, which was then compiled into a regular object file. And the result ran much faster than even the compiled bytecode version.

And still there are no explicit optimization declarations, only a few macros.
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Elegant weapons

Post by sje »