zamar wrote:
diep wrote:
or to quote Bob: "Who are all these guys?"
I don't know anything about this 50 million dollars stuff you are talking about, but I bought my Quad Core i7 from a local computer store.

It's the machine (not even overclocked) we used to tune most of Stockfish's parameters. We are still far from optimal though...
And if you want to know who I really am, feel free to pay me a visit

I live in South Finland, not too far away from Helsinki-Vantaa Airport.
When I spoke with people around you some years ago, it was obvious you were fishing in the dark about how Rybka got tuned.
Yet it is also obvious that too many chess programmers had, and used, information they could not have obtained in a legal manner.
So simply show us your tuner, and then we can run it here and apply its principle to Rybka and to Stockfish itself.
Costalba posted here that you guys play 1000 games per run.
With that you can't even measure to 7.5-10 Elo points of accuracy, let alone tune well, so that amazed me.
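To put some numbers on that (a rough back-of-the-envelope calculation of my own, assuming a roughly 50% score, independent games, and a typical draw rate):

```
// Rough 1-sigma error of an Elo estimate from N games. My own sketch,
// assuming a ~50% score and a fixed draw rate; draws add no variance here.
#include <cmath>
#include <cstdio>

int main() {
    const double N = 1000.0;       // games per run, as quoted
    const double draw_rate = 0.3;  // assumption, typical for engine matches
    double var = (1.0 - draw_rate) * 0.25;            // per-game score variance near 50%
    double sigma_score = std::sqrt(var / N);          // std. error of the match score
    // Convert score error to Elo near 50%: dElo/dscore = 400 / (ln(10) * s * (1 - s))
    double sigma_elo = sigma_score * 400.0 / (std::log(10.0) * 0.25);
    std::printf("1-sigma error after %.0f games: about %.1f Elo\n", N, sigma_elo);
    return 0;
}
```

That comes out at roughly a 9 Elo error bar at one sigma, so differences of a few Elo simply drown in the noise at 1000 games, and you need about four times the games just to halve the error.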
The 100 million CPU node-hours obviously refers to Rybka and the clones,
not to all the projects launched at the end of 2007, when I formulated a new-generation tuning approach. Yet there are two phases: the oracle and the tuner.
My initial plans were for the tuner, not for the oracle.
The way Crafty and some others tune is simple: just play games.
That can work for a few parameters, but not for hundreds, let alone thousands.
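To illustrate what I mean by a tuner working against an oracle instead of against games, a minimal sketch (purely illustrative, all struct and function names are hypothetical, this is nobody's actual tuner): the oracle gives each position a target score, and the tuner fits the evaluation weights to those targets without playing a single game.

```
// Minimal sketch of tuning eval weights against an oracle: the oracle
// supplies a target score per position, the tuner does plain gradient
// descent on a squared error. All names are hypothetical.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Position {
    std::vector<double> features; // e.g. material counts, mobility terms, ...
    double oracle_score;          // target from the oracle (deep search, game result, ...)
};

// Linear evaluation: dot product of weights and features, squashed to 0..1.
static double eval(const std::vector<double>& w, const Position& p) {
    double s = 0.0;
    for (size_t i = 0; i < w.size(); ++i) s += w[i] * p.features[i];
    return 1.0 / (1.0 + std::exp(-s));   // sigmoid, so it is comparable to a score
}

void tune(std::vector<double>& w, const std::vector<Position>& set,
          int epochs = 100, double lr = 0.1) {
    for (int e = 0; e < epochs; ++e) {
        for (const Position& p : set) {
            double out = eval(w, p);
            double err = out - p.oracle_score;
            // Gradient of the squared error through the sigmoid, per weight.
            double g = err * out * (1.0 - out);
            for (size_t i = 0; i < w.size(); ++i)
                w[i] -= lr * g * p.features[i];
        }
    }
}

int main() {
    // Toy data: two features, targets favouring the first one.
    std::vector<Position> set = {
        {{1.0, 0.0}, 0.9}, {{0.0, 1.0}, 0.3}, {{1.0, 1.0}, 0.7},
    };
    std::vector<double> w(2, 0.0);
    tune(w, set);
    std::printf("tuned weights: %.3f %.3f\n", w[0], w[1]);
    return 0;
}
```

The point is that one pass over a large set of labelled positions is far cheaper per parameter than the tens of thousands of games you would need to resolve the same parameters by playing.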
Also funny, of course, is the crap posted by Remi Coulom. He obviously only started thinking some months ago about how to tune, and over the past few months has produced some source code for tuning in the most inefficient manner planet earth has seen; in fact it was already beaten in the math world around 1983 by other tuning approaches.
Yet if I may remind you all, the kick-butt Go engines that suddenly beat the Asian engines did so years ago already.
So who tuned THAT, if he only shows up in 2010 with some code?
Note this program also easily got thousands of cores just to play a few test games of Go, and thousands of cores, real easily, just to play in the ICGA world championship. All this from the SAME organisation in the Netherlands.
Diep ran in 2003 on a supercomputer a few meters away from where that Go program ran in the ICGA world championship. Why did they get so many cores so easily, on so many occasions? Thousands.
Well, you know, I had to wait for a year before I had permission, had to write loads of paperwork, and got no system time to test at all.
We know Remi; he publishes this kind of thing directly after he has programmed it...
Now if you don't play games but tune in a real manner, which obviously is what happened with Rybka & co, then the biggest problem is creating a good oracle.
If you have that and a well-working tuner, then after having tuned the first engine you can of course produce engines like Thinker, Naum, Pandix and DeepSjeng relatively easily (speaking of another top programmer who has no clue how his engine got tuned, and since he just tests rapid games at home, he for sure could not have tuned it that way).
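To make the oracle phase concrete, one obvious way to build one (again just a sketch of my own with hypothetical names; the expensive reference could be a very deep search, a stronger engine, or the statistical outcome of the position in games):

```
// One possible way to build an oracle: label a big set of positions with an
// expensive reference score and store the pairs for the tuner to fit against.
// Entirely a sketch; every name here is hypothetical.
#include <functional>
#include <string>
#include <vector>

struct Labelled {
    std::string fen;   // the position
    double target;     // oracle score in 0..1, e.g. expected game result
};

// 'reference' is the expensive part: this is where the CPU time goes.
std::vector<Labelled> build_oracle(const std::vector<std::string>& positions,
                                   const std::function<double(const std::string&)>& reference) {
    std::vector<Labelled> out;
    out.reserve(positions.size());
    for (const std::string& fen : positions)
        out.push_back({fen, reference(fen)});
    return out;
}

int main() {
    // Toy stand-in for the reference; a real oracle would call a deep search here.
    auto reference = [](const std::string&) { return 0.5; };
    auto oracle = build_oracle({"startpos"}, reference);
    return oracle.empty() ? 1 : 0;
}
```

All the expensive work sits in that reference call; the tuner itself is cheap once the labels exist.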
Naum: "exactly like Rybka except it's 32 bits"
Thinker: "exactly similar to Rybka in data structure and also in eval,
just tuned slightly differently everywhere, and a 32MB hash table"
Why lobotomize something to 32 bits or to a 32MB hash table, if I may ask, except when that is required by CONTRACTUAL agreement?
So the real question is not 'who wrote the code'. The problem is the tuner.
Who owns the tuner? It modifies very crucial parameters of your engine, and with the totally unreadable bitboard code, it is also crucial for debugging your engine. You simply cannot even read that gibberish bitboard code.
Did you try to READ the material evaluation of the Rybka clones?
It's totally unreadable bit-shifting, seemingly a neural network.
Yet that requires an oracle.
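To give an idea of what that style looks like, here is a made-up fragment of my own (not taken from any engine): even a plain material count becomes shifts, popcounts and opaque tuned constants, and a machine-tuned eval stacks many more such terms on top.

```
// Made-up illustration of bitboard-style material evaluation: per-piece-type
// bitboards, popcounts, and tuned constants. Not from any engine.
#include <cstdint>
#include <cstdio>

// popcount: number of set bits (C++20 has std::popcount in <bit>).
static int popcnt(uint64_t b) {
    int n = 0;
    while (b) { b &= b - 1; ++n; }
    return n;
}

struct Board {
    uint64_t piece[2][6];   // [side][piece type] occupancy bitboards
};

int material(const Board& bd) {
    // Tuned centipawn weights; in a machine-tuned eval these are just opaque
    // numbers, with interaction terms stacked on top of them.
    static const int w[6] = { 100, 325, 335, 500, 975, 0 };
    int score = 0;
    for (int pt = 0; pt < 6; ++pt)
        score += w[pt] * (popcnt(bd.piece[0][pt]) - popcnt(bd.piece[1][pt]));
    return score;
}

int main() {
    Board bd = {};                  // empty board
    bd.piece[0][0] = 0xFFULL;       // give white eight pawns for the example
    std::printf("material: %d\n", material(bd));   // prints 800
    return 0;
}
```

Now imagine hundreds of such machine-produced constants with no names attached and you see why nobody can read it back.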
And to produce all that together, you are soon looking at bunches of programmers and attempts and a very large budget; my estimate is 100 million CPU node-hours.
Yet after having invested that, you can generate all those similarly tuned engines with a fraction of that effort.
It's a lot easier to create an engine that's some Elo weaker than to create something that's really strong. Modelling something with "DNA weaknesses", so to speak, is a lot easier than building the real thing.
That's what you need those 100 million CPU node-hours for.
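To put that number in perspective (rough arithmetic of my own; the 100 million itself is of course just my estimate):

```
// Back-of-the-envelope: how long 100 million CPU hours takes at a given
// core count. The 100 million figure is the estimate from the text above.
#include <cstdio>

int main() {
    const double total_hours = 100e6;
    const double hours_per_year = 24.0 * 365.25;
    const double core_counts[] = { 1000.0, 4096.0, 10000.0 };
    for (double cores : core_counts)
        std::printf("%6.0f cores: %.1f years of continuous running\n",
                    cores, total_hours / cores / hours_per_year);
    return 0;
}
```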
I am supported big-time by the experimental outcomes of a dozen very clever and intelligent programmers who have tried all sorts of approaches, and it is obvious some attempts might be successful if you throw BIG hardware at them.
Until then, all those attempts FAIL.
We are talking about thousands of cores.
Thanks,
Vincent