Scaling from FGRL results with top 3 engines

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Scaling from FGRL results with top 3 engines

Post by Dann Corbit »

Uri Blass wrote:
Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:If an engine scales better, it is most likely search that is better (lower branching factor).

The second most likely thing would be the SMP implementation.

The evaluation will not affect scaling much, except for improvement in the move ordering.
I think that better search does not mean lower branching factor

It is easy to get lower branching factor by dubious pruning.

I think that evaluation is important and I expect top engines not to scale well if you change their evaluation to simple piece square table evaluation.
Every single great advancement in chess engines has been due to a reduction in branching factor. While it is obviously a mistake to prune away good stuff, let's take a quick look at the list:

1) Alpha-Beta : Enormous improvement over mini-max
2) Null move reduction: Enormous improvement over plain alpha-beta
3) PVS search: Modest improvement over null move reduction due to zero window searches
4) History Reductions: (As pioneered by Fruit) - huge improvement over plain PVS search
5) Smooth scaling reductions in null move pruning (As, for instance, Stockfish) - significant improvement over ordinary null move
6) Razoring (like Rybka and Strelka): Enormous improvement over plain PVS search
7) Late Move Reductions: (with Tord taking the lead in both effectiveness and publication) -- a huge improvement over not having LMR.

There are, of course, many others that I did not mention here.

It is not a coincidence that the top ten engines all have branching factors of about 2, and it is not a coincidence that most weak engines have a large branching factor.
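The reductions in the list above can be seen in miniature in a toy negamax sketch. This is not any engine's actual code: `evaluate`, `legal_moves`, `make`, and `make_null` are stand-in stubs, and the reduction thresholds are purely illustrative.

```python
# Toy negamax sketch showing three of the listed reductions:
# alpha-beta cutoffs, null-move reduction, and late move reductions (LMR).
# evaluate(), legal_moves(), make(), make_null() are stubs, not a real API.

INF = 10**9

def evaluate(pos):
    return 0  # stub: a real engine scores material, mobility, king safety, ...

def legal_moves(pos):
    return []  # stub: move generation, best moves first

def make(pos, move):
    return pos  # stub: returns the child position

def make_null(pos):
    return pos  # stub: same position with the side to move flipped

def search(pos, depth, alpha, beta, can_null=True):
    if depth <= 0:
        return evaluate(pos)

    # Null-move reduction: give the opponent a free move; if a reduced
    # search still beats beta, assume this node fails high and stop early.
    R = 2
    if can_null and depth > R:
        score = -search(make_null(pos), depth - 1 - R, -beta, -beta + 1, False)
        if score >= beta:
            return beta

    best = -INF
    for i, move in enumerate(legal_moves(pos)):
        child = make(pos, move)
        # LMR: moves late in the ordering are searched one ply shallower,
        # and re-searched at full depth only if they unexpectedly beat alpha.
        reduction = 1 if (i >= 4 and depth >= 3) else 0
        score = -search(child, depth - 1 - reduction, -beta, -alpha)
        if reduction and score > alpha:
            score = -search(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # alpha-beta cutoff: opponent won't allow this line
            break
    return best if best > -INF else evaluate(pos)
```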

Now, your point is well taken with individual cases. For instance, ExChess had the best branching factor of all engines at one point. But it was not the strongest engine by far. So poorly tuned reductions are not nearly so beneficial as properly tuned reductions.

But almost every big advancement comes from a reduction in branching factor and the next revolution will come from a reduction in branching factor.

There are, of course, some exceptions. The material imbalance table in Rybka was another revolution, and almost entirely due to evaluation improvement in that case (as a 'for instance'). We can thank Larry Kaufman for that, I think.
I agree about the history.
I do not think it means that the future is always going to be about reducing the branching factor.

The target is to play better, not to reduce the branching factor, and I see no reason to assume that the next improvement is going to be more reductions; it could also be more extensions of the right lines.
Branching factor improvement is exponential improvement.
Other improvements will not be as astounding.
Until branching factor becomes one, it will always be possible to improve it.

I also agree that a perfect evaluation would lead to a branching factor of 1.
It is just that a perfect evaluation is probably many times more difficult to do and exponential improvements via search happen all the time.
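A rough illustration of why branching-factor improvement is an exponential improvement: at a fixed node budget, the reachable depth is about log(nodes)/log(b), so even a small drop in b buys extra plies. The billion-node budget below is an arbitrary assumption for the sake of the example.

```python
import math

# Reachable depth for a fixed node budget under effective branching factor b:
# nodes ~ b**depth, so depth ~ log(nodes) / log(b).
def depth_reached(nodes, b):
    return math.log(nodes) / math.log(b)

budget = 10**9  # assumed budget: one billion nodes
print(round(depth_reached(budget, 2.0), 1))  # b = 2.0 -> ~29.9 plies
print(round(depth_reached(budget, 1.9), 1))  # b = 1.9 -> ~32.3 plies
print(round(depth_reached(budget, 3.0), 1))  # b = 3.0 -> ~18.9 plies
```

Dropping b from 2.0 to 1.9 buys more than two extra plies at the same budget, which is why small reduction gains compound so dramatically.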

In fact, I think that the key to beating SF is simple. Stop focusing on eval and focus on search. The SF team spends way too much time looking at eval and not enough time looking at search.

In this sense, you can call me a disciple of Christophe Theron, who said:
"Search is also knowledge."
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Uri Blass
Posts: 10267
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Scaling from FGRL results with top 3 engines

Post by Uri Blass »

Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:If an engine scales better, it is most likely search that is better (lower branching factor).

The second most likely thing would be the SMP implementation.

The evaluation will not affect scaling much, except for improvement in the move ordering.
I think that better search does not mean lower branching factor

It is easy to get lower branching factor by dubious pruning.

I think that evaluation is important and I expect top engines not to scale well if you change their evaluation to simple piece square table evaluation.
Every single great advancement in chess engines has been due to a reduction in branching factor. While it is obviously a mistake to prune away good stuff, let's take a quick look at the list:

1) Alpha-Beta : Enormous improvement over mini-max
2) Null move reduction: Enormous improvement over plain alpha-beta
3) PVS search: Modest improvement over null move reduction due to zero window searches
4) History Reductions: (As pioneered by Fruit) - huge improvement over plain PVS search
5) Smooth scaling reductions in null move pruning (As, for instance, Stockfish) - significant improvement over ordinary null move
6) Razoring (like Rybka and Strelka): Enormous improvement over plain PVS search
7) Late Move Reductions: (with Tord taking the lead in both effectiveness and publication) -- a huge improvement over not having LMR.

There are, of course, many others that I did not mention here.

It is not a coincidence that the top ten engines all have branching factors of about 2, and it is not a coincidence that most weak engines have a large branching factor.

Now, your point is well taken with individual cases. For instance, ExChess had the best branching factor of all engines at one point. But it was not the strongest engine by far. So poorly tuned reductions are not nearly so beneficial as properly tuned reductions.

But almost every big advancement comes from a reduction in branching factor and the next revolution will come from a reduction in branching factor.

There are, of course, some exceptions. The material imbalance table in Rybka was another revolution, and almost entirely due to evaluation improvement in that case (as a 'for instance'). We can thank Larry Kaufman for that, I think.
I agree about the history.
I do not think it means that the future is always going to be about reducing the branching factor.

The target is to play better, not to reduce the branching factor, and I see no reason to assume that the next improvement is going to be more reductions; it could also be more extensions of the right lines.
Branching factor improvement is exponential improvement.
Other improvements will not be as astounding.
Until branching factor becomes one, it will always be possible to improve it.

I also agree that a perfect evaluation would lead to a branching factor of 1.
It is just that a perfect evaluation is probably many times more difficult to do and exponential improvements via search happen all the time.

In fact, I think that the key to beating SF is simple. Stop focusing on eval and focus on search. The SF team spends way too much time looking at eval and not enough time looking at search.

In this sense, you can call me a disciple of Christophe Theron, who said:
"Search is also knowledge."
I agree that search is also knowledge but search is not only about reductions but also about extensions or conditions that you do not reduce.

Stockfish is blind in some positions because of some reductions, and maybe you can improve Stockfish at long time control with better conditions for when not to reduce.
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Scaling from FGRL results with top 3 engines

Post by Dann Corbit »

Uri Blass wrote:
Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:If an engine scales better, it is most likely search that is better (lower branching factor).

The second most likely thing would be the SMP implementation.

The evaluation will not affect scaling much, except for improvement in the move ordering.
I think that better search does not mean lower branching factor

It is easy to get lower branching factor by dubious pruning.

I think that evaluation is important and I expect top engines not to scale well if you change their evaluation to simple piece square table evaluation.
Every single great advancement in chess engines has been due to a reduction in branching factor. While it is obviously a mistake to prune away good stuff, let's take a quick look at the list:

1) Alpha-Beta : Enormous improvement over mini-max
2) Null move reduction: Enormous improvement over plain alpha-beta
3) PVS search: Modest improvement over null move reduction due to zero window searches
4) History Reductions: (As pioneered by Fruit) - huge improvement over plain PVS search
5) Smooth scaling reductions in null move pruning (As, for instance, Stockfish) - significant improvement over ordinary null move
6) Razoring (like Rybka and Strelka): Enormous improvement over plain PVS search
7) Late Move Reductions: (with Tord taking the lead in both effectiveness and publication) -- a huge improvement over not having LMR.

There are, of course, many others that I did not mention here.

It is not a coincidence that the top ten engines all have branching factors of about 2, and it is not a coincidence that most weak engines have a large branching factor.

Now, your point is well taken with individual cases. For instance, ExChess had the best branching factor of all engines at one point. But it was not the strongest engine by far. So poorly tuned reductions are not nearly so beneficial as properly tuned reductions.

But almost every big advancement comes from a reduction in branching factor and the next revolution will come from a reduction in branching factor.

There are, of course, some exceptions. The material imbalance table in Rybka was another revolution, and almost entirely due to evaluation improvement in that case (as a 'for instance'). We can thank Larry Kaufman for that, I think.
I agree about the history.
I do not think it means that the future is always going to be about reducing the branching factor.

The target is to play better, not to reduce the branching factor, and I see no reason to assume that the next improvement is going to be more reductions; it could also be more extensions of the right lines.
Branching factor improvement is exponential improvement.
Other improvements will not be as astounding.
Until branching factor becomes one, it will always be possible to improve it.

I also agree that a perfect evaluation would lead to a branching factor of 1.
It is just that a perfect evaluation is probably many times more difficult to do and exponential improvements via search happen all the time.

In fact, I think that the key to beating SF is simple. Stop focusing on eval and focus on search. The SF team spends way too much time looking at eval and not enough time looking at search.

In this sense, you can call me a disciple of Christophe Theron, who said:
"Search is also knowledge."
I agree that search is also knowledge but search is not only about reductions but also about extensions or conditions that you do not reduce.

Stockfish is blind in some positions because of some reductions, and maybe you can improve Stockfish at long time control with better conditions for when not to reduce.
Indeed. That is the crux of both extensions and reductions.

I believe it was Kenny Rogers who said, "Knowing what to throw away, and knowing what to keep."
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Scaling from FGRL results with top 3 engines

Post by Dann Corbit »

Guenther wrote:
Dann Corbit wrote:
Lyudmil Tsvetkov wrote:
Dann Corbit wrote:It is also true that better evaluation will reduce branching factor, principally by improvement in move ordering (which is very important to the fundamental alpha-beta step).

There are other things that tangentially improve branching factor like hash tables and IID.

It is also true that pure wood counting is not good enough. But examine the effectiveness of OliThink, which has an incredibly simple eval. It has more than just wood, but an engine can be made very strong almost exclusively through search. I guess that grafting the Stockfish evaluation into a minimax engine would get you less than 2000 Elo.

I guess that grafting the OliThink eval into Stockfish would still get you more than 3000 Elo.

Note that I did not test this, it is only a gedankenexperiment.
so, no search without eval.

I guess you are grossly wrong about both the 2000 and 3000 elo mark.

wanna try one of the 2?

Olithink eval into SF will play something like 1500 elo, wanna bet? :)

I guess it is time to change gedankenexperiment for realitaetsueberpruefung... :)
From CCRL 40/40:
216 OliThink 5.3.2 64-bit 2372 +19 −19 48.3% +12.5 25.6% 1011

With a super simple eval and a fairly simple search, it is already 2372.
Adding the incredible, sophisticated search of Stockfish will lower the rating by more than 872 points?
Discussing with him is not very fruitful, don't forget that.
Well, he can be a little bombastic, but so can I.
He is wrong sometimes, but so am I.

In any case, he clearly has interesting ideas from time to time, so I won't simply ignore him.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Lyudmil Tsvetkov
Posts: 6052
Joined: Tue Jun 12, 2012 12:41 pm

Re: Scaling from FGRL results with top 3 engines

Post by Lyudmil Tsvetkov »

Dann Corbit wrote:
Lyudmil Tsvetkov wrote:
Dann Corbit wrote:It is also true that better evaluation will reduce branching factor, principally by improvement in move ordering (which is very important to the fundamental alpha-beta step).

There are other things that tangentially improve branching factor like hash tables and IID.

It is also true that pure wood counting is not good enough. But examine the effectiveness of OliThink, which has an incredibly simple eval. It has more than just wood, but an engine can be made very strong almost exclusively through search. I guess that grafting the Stockfish evaluation into a minimax engine would get you less than 2000 Elo.

I guess that grafting the OliThink eval into Stockfish would still get you more than 3000 Elo.

Note that I did not test this, it is only a gedankenexperiment.
so, no search without eval.

I guess you are grossly wrong about both the 2000 and 3000 elo mark.

wanna try one of the 2?

Olithink eval into SF will play something like 1500 elo, wanna bet? :)

I guess it is time to change gedankenexperiment for realitaetsueberpruefung... :)
From CCRL 40/40:
216 OliThink 5.3.2 64-bit 2372 +19 −19 48.3% +12.5 25.6% 1011

With a super simple eval and a fairly simple search, it is already 2372.
Adding the incredible, sophisticated search of Stockfish will lower the rating by more than 872 points?
of course, it is all about tuning.

we are not speaking here of downgrading SF, leaving all its search and using just a dozen basic eval terms, in which case SF will still be somewhat strong, but of patching an entirely alien eval onto SF search.

as the eval and search will not be tuned to each other, you will mostly get completely random results.
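For scale, here is what the disputed 872-point gap (2372 down to roughly 1500) would mean under the usual Elo logistic model. This is standard rating arithmetic only, not a claim about what the actual experiment would produce.

```python
# Expected score of the weaker side under the standard Elo logistic model:
# E = 1 / (1 + 10**(diff / 400)), where diff is the rating deficit.
def expected_score(elo_diff):
    return 1.0 / (1.0 + 10.0 ** (elo_diff / 400.0))

# An 872-point deficit means scoring well under 1% against the stronger side.
print(round(expected_score(872), 4))  # ~0.0066
```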
Lyudmil Tsvetkov
Posts: 6052
Joined: Tue Jun 12, 2012 12:41 pm

Re: Scaling from FGRL results with top 3 engines

Post by Lyudmil Tsvetkov »

Guenther wrote:
Dann Corbit wrote:
Lyudmil Tsvetkov wrote:
Dann Corbit wrote:It is also true that better evaluation will reduce branching factor, principally by improvement in move ordering (which is very important to the fundamental alpha-beta step).

There are other things that tangentially improve branching factor like hash tables and IID.

It is also true that pure wood counting is not good enough. But examine the effectiveness of OliThink, which has an incredibly simple eval. It has more than just wood, but an engine can be made very strong almost exclusively through search. I guess that grafting the Stockfish evaluation into a minimax engine would get you less than 2000 Elo.

I guess that grafting the OliThink eval into Stockfish would still get you more than 3000 Elo.

Note that I did not test this, it is only a gedankenexperiment.
so, no search without eval.

I guess you are grossly wrong about both the 2000 and 3000 elo mark.

wanna try one of the 2?

Olithink eval into SF will play something like 1500 elo, wanna bet? :)

I guess it is time to change gedankenexperiment for realitaetsueberpruefung... :)
From CCRL 40/40:
216 OliThink 5.3.2 64-bit 2372 +19 −19 48.3% +12.5 25.6% 1011

With a super simple eval and a fairly simple search, it is already 2372.
Adding the incredible, sophisticated search of Stockfish will lower the rating by more than 872 points?
Discussing with him is not very fruitful, don't forget that.
gosh, you made my name part of your signature. :)

thank you very much, Guenther!
Lyudmil Tsvetkov
Posts: 6052
Joined: Tue Jun 12, 2012 12:41 pm

Re: Scaling from FGRL results with top 3 engines

Post by Lyudmil Tsvetkov »

Dann Corbit wrote:
Lyudmil Tsvetkov wrote:
Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:If an engine scales better, it is most likely search that is better (lower branching factor).

The second most likely thing would be the SMP implementation.

The evaluation will not affect scaling much, except for improvement in the move ordering.
I think that better search does not mean lower branching factor

It is easy to get lower branching factor by dubious pruning.

I think that evaluation is important and I expect top engines not to scale well if you change their evaluation to simple piece square table evaluation.
Every single great advancement in chess engines has been due to a reduction in branching factor. While it is obviously a mistake to prune away good stuff, let's take a quick look at the list:

1) Alpha-Beta : Enormous improvement over mini-max
2) Null move reduction: Enormous improvement over plain alpha-beta
3) PVS search: Modest improvement over null move reduction due to zero window searches
4) History Reductions: (As pioneered by Fruit) - huge improvement over plain PVS search
5) Smooth scaling reductions in null move pruning (As, for instance, Stockfish) - significant improvement over ordinary null move
6) Razoring (like Rybka and Strelka): Enormous improvement over plain PVS search
7) Late Move Reductions: (with Tord taking the lead in both effectiveness and publication) -- a huge improvement over not having LMR.

There are, of course, many others that I did not mention here.

It is not a coincidence that the top ten engines all have branching factors of about 2, and it is not a coincidence that most weak engines have a large branching factor.

Now, your point is well taken with individual cases. For instance, ExChess had the best branching factor of all engines at one point. But it was not the strongest engine by far. So poorly tuned reductions are not nearly so beneficial as properly tuned reductions.

But almost every big advancement comes from a reduction in branching factor and the next revolution will come from a reduction in branching factor.

There are, of course, some exceptions. The material imbalance table in Rybka was another revolution, and almost entirely due to evaluation improvement in that case (as a 'for instance'). We can thank Larry Kaufman for that, I think.
so, what makes you think Komodo has better BF than SF?
I did not say that. I do not think it is clear which is better, but both have very good branching factors.
what is the connection to LTC scaling?
Suppose that engine A evaluates twice as many nodes to advance one ply. BF=2

Suppose that engine B evaluates three times as many nodes to advance one ply. BF=3

To get to 30 ply how many more nodes will B examine than A in orders of magnitude?
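For concreteness, the arithmetic behind the question above: nodes to reach depth d grow roughly like BF**d, so at 30 plies engine B (BF=3) examines about 1.5**30 times as many nodes as engine A (BF=2).

```python
import math

# Node-count ratio of engine B (BF=3) to engine A (BF=2) at 30 plies.
ratio = 3.0**30 / 2.0**30            # = 1.5**30
print(round(ratio))                  # ~191751 times as many nodes
print(round(math.log10(ratio), 1))   # ~5.3 orders of magnitude
```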
on the contrary, you said that Komodo might scale well, because BF is conducive to good scaling. that would presume Komodo has lower BF.

I asked you a question, and you reply with a riddle.

LTC or STC, BF always applies, so how does lower BF perform better at LTC?
Lyudmil Tsvetkov
Posts: 6052
Joined: Tue Jun 12, 2012 12:41 pm

Re: Scaling from FGRL results with top 3 engines

Post by Lyudmil Tsvetkov »

Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:If an engine scales better, it is most likely search that is better (lower branching factor).

The second most likely thing would be the SMP implementation.

The evaluation will not affect scaling much, except for improvement in the move ordering.
I think that better search does not mean lower branching factor

It is easy to get lower branching factor by dubious pruning.

I think that evaluation is important and I expect top engines not to scale well if you change their evaluation to simple piece square table evaluation.
Every single great advancement in chess engines has been due to a reduction in branching factor. While it is obviously a mistake to prune away good stuff, let's take a quick look at the list:

1) Alpha-Beta : Enormous improvement over mini-max
2) Null move reduction: Enormous improvement over plain alpha-beta
3) PVS search: Modest improvement over null move reduction due to zero window searches
4) History Reductions: (As pioneered by Fruit) - huge improvement over plain PVS search
5) Smooth scaling reductions in null move pruning (As, for instance, Stockfish) - significant improvement over ordinary null move
6) Razoring (like Rybka and Strelka): Enormous improvement over plain PVS search
7) Late Move Reductions: (with Tord taking the lead in both effectiveness and publication) -- a huge improvement over not having LMR.

There are, of course, many others that I did not mention here.

It is not a coincidence that the top ten engines all have branching factors of about 2, and it is not a coincidence that most weak engines have a large branching factor.

Now, your point is well taken with individual cases. For instance, ExChess had the best branching factor of all engines at one point. But it was not the strongest engine by far. So poorly tuned reductions are not nearly so beneficial as properly tuned reductions.

But almost every big advancement comes from a reduction in branching factor and the next revolution will come from a reduction in branching factor.

There are, of course, some exceptions. The material imbalance table in Rybka was another revolution, and almost entirely due to evaluation improvement in that case (as a 'for instance'). We can thank Larry Kaufman for that, I think.
I agree about the history.
I do not think it means that the future is always going to be about reducing the branching factor.

The target is to play better, not to reduce the branching factor, and I see no reason to assume that the next improvement is going to be more reductions; it could also be more extensions of the right lines.
Branching factor improvement is exponential improvement.
Other improvements will not be as astounding.
Until branching factor becomes one, it will always be possible to improve it.

I also agree that a perfect evaluation would lead to a branching factor of 1.
It is just that a perfect evaluation is probably many times more difficult to do and exponential improvements via search happen all the time.

In fact, I think that the key to beating SF is simple. Stop focusing on eval and focus on search. The SF team spends way too much time looking at eval and not enough time looking at search.

In this sense, you can call me a disciple of Christophe Theron, who said:
"Search is also knowledge."
if it were that simple, someone would already have done it. :)

from SF framework stats, search and eval patches are split about equal.

so what to do more with search, without improving eval on a par?

someone says, well, that the most important thing in chess programming is move ordering. but how do you achieve better move ordering without necessarily resorting to a more advanced move-ordering function, which one way or another has to deal with a more refined eval?
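The kind of cheap move ordering being debated here can in fact be sketched without any refined eval: captures scored by MVV-LVA (most valuable victim, least valuable attacker) and quiet moves by a history table. All names and values below are illustrative, not taken from any particular engine.

```python
# Cheap move ordering without a refined eval: captures by MVV-LVA,
# quiet moves by a history table keyed on (piece, destination square).
PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

history = {}  # (piece, to_square) -> score, bumped when the move fails high

def order_moves(moves):
    def key(m):
        # m is a dict like {'piece': 'N', 'to': 'd5', 'captured': 'Q' or None}
        if m['captured']:
            # Captures rank above quiets; prefer big victims, cheap attackers.
            return (1, 10 * PIECE_VALUE[m['captured']] - PIECE_VALUE[m['piece']])
        return (0, history.get((m['piece'], m['to']), 0))
    return sorted(moves, key=key, reverse=True)

moves = [
    {'piece': 'P', 'to': 'e4', 'captured': None},
    {'piece': 'N', 'to': 'd5', 'captured': 'Q'},  # NxQ: best capture
    {'piece': 'Q', 'to': 'h2', 'captured': 'P'},  # QxP: worst capture
]
print([m['to'] for m in order_moves(moves)])  # captures first, NxQ before QxP
```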
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Scaling from FGRL results with top 3 engines

Post by Dann Corbit »

Lyudmil Tsvetkov wrote:
Dann Corbit wrote:
Lyudmil Tsvetkov wrote:
Dann Corbit wrote:It is also true that better evaluation will reduce branching factor, principally by improvement in move ordering (which is very important to the fundamental alpha-beta step).

There are other things that tangentially improve branching factor like hash tables and IID.

It is also true that pure wood counting is not good enough. But examine the effectiveness of OliThink, which has an incredibly simple eval. It has more than just wood, but an engine can be made very strong almost exclusively through search. I guess that grafting the Stockfish evaluation into a minimax engine would get you less than 2000 Elo.

I guess that grafting the OliThink eval into Stockfish would still get you more than 3000 Elo.

Note that I did not test this, it is only a gedankenexperiment.
so, no search without eval.

I guess you are grossly wrong about both the 2000 and 3000 elo mark.

wanna try one of the 2?

Olithink eval into SF will play something like 1500 elo, wanna bet? :)

I guess it is time to change gedankenexperiment for realitaetsueberpruefung... :)
From CCRL 40/40:
216 OliThink 5.3.2 64-bit 2372 +19 −19 48.3% +12.5 25.6% 1011

With a super simple eval and a fairly simple search, it is already 2372.
Adding the incredible, sophisticated search of Stockfish will lower the rating by more than 872 points?
of course, it is all about tuning.

we are not speaking here of downgrading SF, leaving all its search and using just a dozen basic eval terms, in which case SF will still be somewhat strong, but of patching an entirely alien eval onto SF search.

as the eval and search will not be tuned to each other, you will mostly get completely random results.
You are mostly right about that.
While good programming technique demands encapsulation, it is so tempting to pierce that veil, get chummy with other parts of the program, and show them your innards, that virtually all programs do it.

I must mention Bas Hamstra's program, which was so beautifully crafted. But that is neither here nor there.

I guess the point I wanted to make is that branching factor (DONE PROPERLY) is the golden nail to better program success.

You point to eval. And eval has its place. But once (for instance) the fail-high rate goes over 95% on the PV node, the rest is fluff, as far as BF goes. Now, there can be things to aim the engine better, I think everyone agrees on that. But if you are going to shock the world (and look at every world shocker) it is BF gains that drop the jaws and make the eyes bug out.
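The fail-high statistic alluded to here can be instrumented with a couple of counters: of all nodes that fail high, what fraction fail high on the first move searched? This is a hypothetical sketch, not any engine's real bookkeeping.

```python
# Hypothetical instrumentation for move-ordering quality: the fraction of
# beta cutoffs that happen on the first move searched at a node.
class CutoffStats:
    def __init__(self):
        self.fail_highs = 0
        self.first_move_fail_highs = 0

    def record(self, move_index):
        """Call whenever a beta cutoff occurs; move_index is 0-based."""
        self.fail_highs += 1
        if move_index == 0:
            self.first_move_fail_highs += 1

    def rate(self):
        return self.first_move_fail_highs / max(1, self.fail_highs)

stats = CutoffStats()
for idx in [0, 0, 0, 1, 0, 0, 2, 0, 0, 0]:  # simulated cutoff move indices
    stats.record(idx)
print(stats.rate())  # 0.8; well-ordered searches push this above 0.9
```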

As I have said elsewhere, you are an interesting person and you know a lot about chess. But until you understand the complete implication of the branching factor, you cannot properly advice chess programmers.

The branching factor is the golden nail upon which all the kings will drape their mantles.

Mark my words.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Scaling from FGRL results with top 3 engines

Post by Dann Corbit »

Lyudmil Tsvetkov wrote:
Dann Corbit wrote:
Lyudmil Tsvetkov wrote:
Dann Corbit wrote:
Uri Blass wrote:
Dann Corbit wrote:If an engine scales better, it is most likely search that is better (lower branching factor).

The second most likely thing would be the SMP implementation.

The evaluation will not affect scaling much, except for improvement in the move ordering.
I think that better search does not mean lower branching factor

It is easy to get lower branching factor by dubious pruning.

I think that evaluation is important and I expect top engines not to scale well if you change their evaluation to simple piece square table evaluation.
Every single great advancement in chess engines has been due to a reduction in branching factor. While it is obviously a mistake to prune away good stuff, let's take a quick look at the list:

1) Alpha-Beta : Enormous improvement over mini-max
2) Null move reduction: Enormous improvement over plain alpha-beta
3) PVS search: Modest improvement over null move reduction due to zero window searches
4) History Reductions: (As pioneered by Fruit) - huge improvement over plain PVS search
5) Smooth scaling reductions in null move pruning (As, for instance, Stockfish) - significant improvement over ordinary null move
6) Razoring (like Rybka and Strelka): Enormous improvement over plain PVS search
7) Late Move Reductions: (with Tord taking the lead in both effectiveness and publication) -- a huge improvement over not having LMR.

There are, of course, many others that I did not mention here.

It is not a coincidence that the top ten engines all have branching factors of about 2, and it is not a coincidence that most weak engines have a large branching factor.

Now, your point is well taken with individual cases. For instance, ExChess had the best branching factor of all engines at one point. But it was not the strongest engine by far. So poorly tuned reductions are not nearly so beneficial as properly tuned reductions.

But almost every big advancement comes from a reduction in branching factor and the next revolution will come from a reduction in branching factor.

There are, of course, some exceptions. The material imbalance table in Rybka was another revolution, and almost entirely due to evaluation improvement in that case (as a 'for instance'). We can thank Larry Kaufman for that, I think.
so, what makes you think Komodo has better BF than SF?
I did not say that. I do not think it is clear which is better, but both have very good branching factors.
what is the connection to LTC scaling?
Suppose that engine A evaluates twice as many nodes to advance one ply. BF=2

Suppose that engine B evaluates three times as many nodes to advance one ply. BF=3

To get to 30 ply how many more nodes will B examine than A in orders of magnitude?
on the contrary, you said that Komodo might scale well, because BF is conducive to good scaling. that would presume Komodo has lower BF.

I asked you a question, and you reply with a riddle.

LTC or STC, BF always applies, so how does lower BF perform better at LTC?
Ah, I see.
You are mathematically ignorant.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.