Daniel Shawul wrote:
Can you guide a newbie on how to run multiple games on a cluster? Are you using cutechess-cli for that purpose, or do you use your own script? For other software that I use that is capable of running on a cluster, I just invoke "mpirun -np 12 'which xxxx'" to run the command xxxx on 12 processors. I guess I need an MPI script to replace xxxx, which assigns the games to different IP addresses. Right now, when I do "mpirun -np 4 scorpio", it starts 4 instances of scorpio on 4 nodes.

I am using something I wrote, but it isn't particularly complicated. I have a simple referee that keeps up with game state, passes moves back and forth, knows what subset of positions it is supposed to use to play a set of games, and so on. Another program submits these to the cluster in batches, and continues doing so until all tests complete. (A rough sketch of such a submission loop is included below.)
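A rough sketch of what such a batch-submission loop could look like (this is not Bob's actual code; the "match" command arguments, the batch size, and the use of qsub here are illustrative assumptions):

Code:
/* Hypothetical driver: walk a 4000-position set in fixed-size batches and
   submit one "match" job per batch until every test has been queued.      */
#include <stdio.h>
#include <stdlib.h>

#define TOTAL_POSITIONS   4000   /* size of the opening-position file (assumed) */
#define POSITIONS_PER_JOB   40   /* batch size; tune this for load balancing    */

int main(void) {
    char cmd[512];
    for (int start = 0; start < TOTAL_POSITIONS; start += POSITIONS_PER_JOB) {
        /* Each job tells the referee which position to start at, how many
           positions to use, and how many games per position (2 here).      */
        snprintf(cmd, sizeof(cmd),
                 "echo 'match engineA engineB %d %d 2' | qsub",
                 start, POSITIONS_PER_JOB);
        if (system(cmd) != 0) {
            fprintf(stderr, "submit failed at position %d\n", start);
            return 1;
        }
    }
    return 0;
}

The batch size and the exact qsub invocation would of course depend on the queueing system in use.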
interesting test data.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: interesting test data.
-
- Posts: 1922
- Joined: Thu Mar 09, 2006 12:51 am
- Location: Earth
Re: interesting test data.
Daniel Shawul wrote:
Can you guide a newbie on how to run multiple games on a cluster? Are you using cutechess-cli for that purpose, or do you use your own script? For other software that I use that is capable of running on a cluster, I just invoke "mpirun -np 12 'which xxxx'" to run the command xxxx on 12 processors. I guess I need an MPI script to replace xxxx, which assigns the games to different IP addresses. Right now, when I do "mpirun -np 4 scorpio", it starts 4 instances of scorpio on 4 nodes.

Yes, that is what I do. The MPI code is pretty simple: I have the master rank sit around and wait for game results, summing them with MPI_Reduce. I start the main machine with one extra core (mpd --ncpus=5) for the master rank. (A minimal sketch of that structure is included below.)
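A minimal sketch of that master-rank idea (this is not Zach's code; the game-playing part is a stub, and the three-element tally layout is an assumption):

Code:
/* Sketch: rank 0 idles while the other ranks play games; a single
   MPI_Reduce then sums every rank's {wins, losses, draws} tally at rank 0. */
#include <mpi.h>
#include <stdio.h>

/* Stand-in worker: a real one would drive two engines through a referee
   and count the outcomes.  Here we just record one fake draw.             */
static void play_assigned_games(int rank, int size, long tally[3]) {
    (void)rank; (void)size;
    tally[2] += 1;
}

int main(int argc, char **argv) {
    int rank, size;
    long my_tally[3] = {0, 0, 0};   /* wins, losses, draws for this rank */
    long total[3]    = {0, 0, 0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0)                  /* rank 0 only collects results */
        play_assigned_games(rank, size, my_tally);

    MPI_Reduce(my_tally, total, 3, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("wins=%ld losses=%ld draws=%ld\n", total[0], total[1], total[2]);

    MPI_Finalize();
    return 0;
}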
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: interesting test data.
Thanks. Can you share your scripts or show me an example of doing such tasks? I may study MPI later for something else I need, but right now I want to see something that works.
Daniel
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: interesting test data.
Daniel Shawul wrote:
Thanks. Can you share your scripts or show me an example of doing such tasks? I may study MPI later for something else I need, but right now I want to see something that works.
Daniel

The code I have is in three parts.
(1) is a referee program that starts the two engines via fork()/exec(), uses pipes to talk to both, and monitors the game state to declare a game over after 3 repetitions, 50 moves, or if one side wins. It is similar in function to xboard, but has no GUI of any sort. (A rough sketch of this kind of plumbing appears right after this list.)
(2) is a C shell script that produces a file with a large number of "match" commands ("match" is the name of the referee program). The script is told which opponents to use and how many positions to use (default = all 4000), and it then produces match commands that pair two opponents. It has a parameter for how many games per position (default is 2) and how many positions per match command (I adjust this to load-balance the cluster; faster games make it more efficient to play more games per single match command). The result is a large file of commands, any one of which will crank up an instance of "match" and play a series of games using starting positions from my position set, where match is told which position in the file to start with and how many to use.
(3) is a simple C program that monitors the cluster state: how many nodes are busy and how many pending jobs there are for other users. I try to occupy the cluster fully (assuming no other users), but I don't fill up the input queue, so when others want in they only have to wait for one of my jobs to finish before they start running. As the queue builds up with jobs from other users, this program backs off and drops down to using about half of the cluster for as long as there are other users wanting to run things; as they go away, it ramps back up again. That makes this a user-friendly application, since many tests run for several days at a time. (A hedged sketch of this kind of throttling loop appears at the end of this post.)
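Here is a rough sketch of the fork()/exec()/pipe plumbing that item (1) refers to, starting a single engine. This is not Bob's code; the engine path and the handshake are placeholders, and the move relay and adjudication are only described in comments:

Code:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

/* Start one engine as a child process with a pipe in each direction, so
   the referee can write commands to it and read its replies.             */
static pid_t start_engine(const char *path, int *read_fd, int *write_fd) {
    int to_engine[2], from_engine[2];
    if (pipe(to_engine) < 0 || pipe(from_engine) < 0) return -1;

    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                          /* child becomes the engine      */
        dup2(to_engine[0], STDIN_FILENO);    /* engine stdin  <- referee pipe */
        dup2(from_engine[1], STDOUT_FILENO); /* engine stdout -> referee pipe */
        close(to_engine[1]);
        close(from_engine[0]);
        execl(path, path, (char *)NULL);
        _exit(127);                          /* only reached if exec fails    */
    }
    close(to_engine[0]);                     /* parent keeps the other ends   */
    close(from_engine[1]);
    *read_fd  = from_engine[0];              /* read engine output here       */
    *write_fd = to_engine[1];                /* write engine commands here    */
    return pid;
}

int main(void) {
    int read_fd, write_fd;
    /* "./engineA" is a placeholder.  A real referee starts two engines,
       relays moves between them, and adjudicates repetitions, the 50-move
       rule, and wins, much as xboard does but without any GUI.            */
    if (start_engine("./engineA", &read_fd, &write_fd) < 0) {
        fprintf(stderr, "failed to start engine\n");
        return 1;
    }
    write(write_fd, "xboard\n", 7);          /* example protocol handshake */
    return 0;
}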
No MPI is used here at all. On the dual-CPU nodes, I simply submit two match commands per node to use both CPUs. On the 8-CPU cluster, I run 8 per node.
I have not distributed this code because it is all highly specific to our cluster software, not to a general Linux box.
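And a hedged sketch of the back-off logic from item (3). The qstat query, the node count, and the halving policy are stand-ins for illustration; the real monitor is tied to the local cluster software:

Code:
#include <stdio.h>

#define CLUSTER_NODES 70                     /* placeholder node count */

/* Roughly count pending jobs.  On SGE, "qstat -u '*' -s p" is assumed to
   list pending jobs for all users; separating out our own jobs and
   parsing properly are omitted in this sketch.                          */
static int pending_jobs(void) {
    FILE *p = popen("qstat -u '*' -s p | wc -l", "r");
    int lines = 0;
    if (p) { fscanf(p, "%d", &lines); pclose(p); }
    return lines > 2 ? lines - 2 : 0;        /* drop qstat's header lines */
}

int main(void) {
    int target = (pending_jobs() > 0)
                   ? CLUSTER_NODES / 2       /* back off to half the cluster */
                   : CLUSTER_NODES;          /* otherwise use it all          */
    printf("target nodes: %d\n", target);
    /* The real monitor would compare this target with its own running job
       count and submit or withhold new match jobs accordingly.            */
    return 0;
}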
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: interesting test data.
Zach Wegner wrote:
Yes, that is what I do. The MPI code is pretty simple: I have the master rank sit around and wait for game results, summing them with MPI_Reduce. I start the main machine with one extra core (mpd --ncpus=5) for the master rank.

I don't use MPI for this because I wanted to be cluster/user friendly; once I start something, it would be more painful to terminate the stuff on one node. I use the Sun Grid Engine software here, and just submit shell scripts to a queue and let SGE schedule things on the nodes. I only keep up with what I have queued up and what others have queued up, and use that to let others run as much as possible while having my testing burn up every "idle" cycle that is available.
-
- Posts: 292
- Joined: Tue Jul 07, 2009 4:56 am
Re: interesting test data.
If you want some free cluster job management software to do the job that Bob's using SGE for, Condor might fit the bill. But like almost all cluster software, it's a pain to get it working correctly.
-
- Posts: 1922
- Joined: Thu Mar 09, 2006 12:51 am
- Location: Earth
Re: interesting test data.
Daniel Shawul wrote:
Thanks. Can you share your scripts or show me an example of doing such tasks? I may study MPI later for something else I need, but right now I want to see something that works.
Daniel

Sorry to say that it's all closed source, and since the MPI is integrated into the tester, which is integrated into a bunch of engine code, it's not easy to post just parts of it, but I'll try to explain how it's structured. As I said, the main cluster node has one extra CPU allocated to it, where the first MPI process (rank 0, as returned by MPI_Comm_rank) sits and distributes the work. This consumes about zero CPU, hence the extra CPU allocated. It sits in a loop where it broadcasts messages to all other processes, for things like sending evaluation terms to use (for autotuning) or running tests. The results are collected in a 6-element array (white wins, black wins, etc.), which is summed together for a version and passed directly to BayesElo (no PGNs). (A hedged sketch of one such cycle follows below.)
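A hedged sketch of what one iteration of that broadcast-and-reduce cycle could look like from the worker side (the evaluation-term vector size, the per-rank game stub, and the exact meaning of the six counters are assumptions based on the description above, not Zach's code):

Code:
/* Sketch: each iteration, rank 0 broadcasts a parameter set, every other
   rank plays its games, and a 6-element tally is summed back at rank 0.  */
#include <mpi.h>

#define NUM_TERMS 32          /* size of the evaluation-term vector (assumed) */

/* Stand-in for the real worker: play this rank's games with the given
   eval terms and add the outcomes into results[6].                       */
static void play_games(const double *terms, long results[6]) {
    (void)terms;
    results[5] += 1;          /* fake outcome so the reduce has data */
}

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int version = 0; version < 4; version++) {  /* a few candidate versions */
        double terms[NUM_TERMS] = {0};
        long mine[6] = {0}, total[6] = {0};

        /* rank 0 would first fill terms[] with the candidate values to try */
        MPI_Bcast(terms, NUM_TERMS, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (rank != 0)
            play_games(terms, mine);

        /* e.g. white wins/draws/losses and black wins/draws/losses */
        MPI_Reduce(mine, total, 6, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        /* rank 0 would now feed total[] for this version to BayesElo */
    }

    MPI_Finalize();
    return 0;
}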

-
- Posts: 1922
- Joined: Thu Mar 09, 2006 12:51 am
- Location: Earth
Re: interesting test data.
bob wrote:
I don't use MPI for this because I wanted to be cluster/user friendly. Once I start something, it would be more painful to terminate the stuff on one node. I use the Sun Grid Engine software here, and just submit shell scripts to a queue and let SGE schedule things on the nodes. I only keep up with what I have queued up and what others have queued up, and use that to let others run as much as possible while having my testing burn up every "idle" cycle that is available.

Ah, well, I have a dedicated cluster, so this is unnecessary. MPI seemed like the easiest thing to get working. Some other cluster software probably would have worked, but I already had MPI installed, and I wanted it all to be in C (actually C++). I do admit that for your purposes, dynamically changing the work allocation is a good idea, but it is not necessary for my setup.
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: interesting test data.
bob wrote:
(1) is a referee program that starts the two engines via fork()/exec(), uses pipes to talk to both, and monitors the game state to declare a game over after 3 repetitions, 50 moves, or if one side wins. It is similar in function to xboard, but has no GUI of any sort.
(2) is a C shell script that produces a file with a large number of "match" commands ("match" is the name of the referee program). The script is told which opponents to use and how many positions to use (default = all 4000), and it then produces match commands that pair two opponents, with parameters for games per position and positions per match command.

I am going to use cutechess-cli for all communication between engines, so I suppose I don't need to write a script for that. But I may need to write code for part #2, that is, generating many matches between engines (round robin or knockout).

bob wrote:
As the queue builds up with jobs from other users, this program will back off and drop down to using about half of the cluster for as long as there are other users wanting to run things; as they go away, it ramps back up again. That makes this a user-friendly application, since many tests run for several days at a time.

We are using DQS (Distributed Queueing System) for job management, so I do a "qsub jobscript" to submit jobs. From then on, it is DQS's job. I am not going to queue a lot of jobs, so I don't think I need to monitor the cluster state.

bob wrote:
No MPI is used here at all. On the dual-CPU nodes, I simply submit two match commands per node to use both CPUs. On the 8-CPU cluster, I run 8 per node. I have not distributed this code because it is all highly specific to our cluster software, not to a general Linux box.

I heard there is a machines file that has info on the number of CPUs in a cluster, which I could use. But is it necessary for me to specify how many matches are to be done on a node? I expect DQS to do that for me.
Thanks
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: interesting test data.
So you use it to tune zct by playing against different versions of it, and not against other winboard/uci engines, right?
OK, since you guys are so secretive, I have started working on an MPI script myself. It looks easy, with only 6 or so often-used commands (a minimal skeleton of those calls is sketched below).
Thanks
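For what it's worth, the half-dozen often-used calls probably amount to something like the minimal master/worker skeleton below. This is a generic illustration, not Daniel's script; the game index and the result are placeholders:

Code:
/* The six MPI calls that cover most simple job-distribution scripts:
   MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, MPI_Finalize. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?  */

    if (rank == 0) {
        /* master: hand each worker a (hypothetical) game index to play */
        for (int w = 1; w < size; w++)
            MPI_Send(&w, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        for (int w = 1; w < size; w++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("worker %d reported %d\n", w, result);
        }
    } else {
        int game, result;
        MPI_Recv(&game, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* ...a real worker would run the game here, e.g. by launching the
           engines or a referee for position 'game'...                     */
        result = game;                      /* placeholder outcome */
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}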