Some fun with Komodo 8

Uri Blass
Posts: 10282
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Some fun with Komodo 8

Post by Uri Blass »

Laskos wrote:
Uri Blass wrote:
Laskos wrote:
Uri Blass wrote:
Thanks for the information.

It may be interesting to know the effective branching factor of the default version also for Stockfish, and also for depths higher than 11-15.

It may be interesting to know if the EBF tends to go down as the depth goes up, in order to get some formula for the average number of nodes that chess programs need to reach depth n, both for Stockfish and Komodo.
I am on my weak notebook, so the depths achieved are not very high.

Code: Select all

Komodo 8
  TotTime: 121:01m    SolTime: 121:01m
  Ply: 0   Positions:150   Avg Nodes:       0   Branching = 0.00
  Ply: 1   Positions:150   Avg Nodes:     145   Branching = 0.00
  Ply: 2   Positions:150   Avg Nodes:     361   Branching = 2.49
  Ply: 3   Positions:150   Avg Nodes:     735   Branching = 2.04
  Ply: 4   Positions:150   Avg Nodes:    1604   Branching = 2.18
  Ply: 5   Positions:150   Avg Nodes:    2925   Branching = 1.82
  Ply: 6   Positions:150   Avg Nodes:    5034   Branching = 1.72
  Ply: 7   Positions:150   Avg Nodes:    9015   Branching = 1.79
  Ply: 8   Positions:150   Avg Nodes:   16481   Branching = 1.83
  Ply: 9   Positions:150   Avg Nodes:   32833   Branching = 1.99
  Ply:10   Positions:150   Avg Nodes:   64039   Branching = 1.95
  Ply:11   Positions:150   Avg Nodes:  130712   Branching = 2.04
  Ply:12   Positions:150   Avg Nodes:  258195   Branching = 1.98
  Ply:13   Positions:150   Avg Nodes:  493481   Branching = 1.91
  Ply:14   Positions:150   Avg Nodes:  942114   Branching = 1.91
  Ply:15   Positions:150   Avg Nodes: 1706669   Branching = 1.81
  Ply:16   Positions:150   Avg Nodes: 3093132   Branching = 1.81
  Ply:17   Positions:150   Avg Nodes: 5904301   Branching = 1.91


SF 21092014
  TotTime: 99:42m    SolTime: 99:42m
  Ply: 0   Positions:150   Avg Nodes:       0   Branching = 0.00
  Ply: 1   Positions:150   Avg Nodes:     143   Branching = 0.00
  Ply: 2   Positions:150   Avg Nodes:     454   Branching = 3.17
  Ply: 3   Positions:150   Avg Nodes:     920   Branching = 2.03
  Ply: 4   Positions:150   Avg Nodes:    1716   Branching = 1.87
  Ply: 5   Positions:150   Avg Nodes:    2994   Branching = 1.74
  Ply: 6   Positions:150   Avg Nodes:    5161   Branching = 1.72
  Ply: 7   Positions:150   Avg Nodes:    8765   Branching = 1.70
  Ply: 8   Positions:150   Avg Nodes:   15862   Branching = 1.81
  Ply: 9   Positions:150   Avg Nodes:   32596   Branching = 2.05
  Ply:10   Positions:150   Avg Nodes:   64130   Branching = 1.97
  Ply:11   Positions:150   Avg Nodes:  114509   Branching = 1.79
  Ply:12   Positions:150   Avg Nodes:  214187   Branching = 1.87
  Ply:13   Positions:150   Avg Nodes:  387621   Branching = 1.81
  Ply:14   Positions:150   Avg Nodes:  642514   Branching = 1.66
  Ply:15   Positions:150   Avg Nodes: 1131855   Branching = 1.76
  Ply:16   Positions:150   Avg Nodes: 1895303   Branching = 1.67
  Ply:17   Positions:150   Avg Nodes: 3085415   Branching = 1.63
  Ply:18   Positions:150   Avg Nodes: 4856014   Branching = 1.57
  Ply:19   Positions:150   Avg Nodes: 7714003   Branching = 1.59
1/ If we take EBF as Nodes^(1/Depth), then we get misleading values: EBF 2.50 for Komodo and 2.30 for SF. That's because of ply 1, where the node count already jumps to a large value.
2/ It is better to take the EBF of the last 5 plies, which is a better predictor for higher depths. Keep in mind that I used a Hash of 1 GB, which was never fully filled during the test.

So, for EBF in the last 5 plies:

EBF Komodo 8: 1.87
EBF SF: 1.64

And their respective predictions for higher depths (with enough Hash) are:

Komodo 8: Nodes=5904301*1.87^(depth-17)
SF 21092014: Nodes=7714003*1.64^(depth-19)
You assume a constant branching factor, but I suspect that the branching factor tends to go down with more nodes, so it is going to be less than that, and you may need a different formula.

Somebody claimed that the number N of nodes needed to reach depth d in the opening position fits a formula of this type:

https://groups.google.com/forum/?fromgr ... y7WosULKWk

Sergey Morozov suggested the following formula as an estimate, based on analysis of the opening position:
Nodes = 1.5*15^(depth^0.6)

Of course a single position may be misleading, but it may be interesting to find the best A, B, C for a formula of the type
Nodes = C*A^(depth^B).
I fitted with least squares the results for SF (ply 1 to 19) and Komodo 8 (ply 1 to 17).


SF: Nodes = 51.1*3.908^(depth^0.7367)
The branching factor here indeed goes down with depth (with unlimited Hash size).

But for Komodo 8: Nodes = 268.2*1.621^(depth^1.069)
The branching factor here goes very mildly up with depth (with unlimited Hash size).
Thanks
I think that least squares may be misleading here and give too much weight to high depths, because the biggest branching factor numbers for Komodo are at depths 2-4.

Maybe it is better to try a least-squares fit for the formula
log(nodes)=log(C*A^(depth^B))
or
log(nodes)=log(C)+depth^B*log(A)
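
For illustration, here is a minimal sketch of such a log-space fit in Python (assuming numpy and scipy are available; the node counts are the SF averages from the table above):

Code: Select all

# Sketch: least-squares fit of log(nodes) = log(C) + (depth^B)*log(A),
# done in log space so every depth gets equal weight.
import numpy as np
from scipy.optimize import curve_fit

depths = np.arange(1, 20)
nodes = np.array([143, 454, 920, 1716, 2994, 5161, 8765, 15862, 32596,
                  64130, 114509, 214187, 387621, 642514, 1131855,
                  1895303, 3085415, 4856014, 7714003])  # SF, plies 1-19

def log_model(d, logC, logA, B):
    return logC + d**B * logA

(logC, logA, B), _ = curve_fit(log_model, depths, np.log(nodes),
                               p0=(4.0, 1.0, 0.8))
print(f"C = {np.exp(logC):.2f}, A = {np.exp(logA):.3f}, B = {B:.3f}")
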
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Some fun with Komodo 8

Post by Laskos »

Uri Blass wrote:
Thanks
I think that least squares may be misleading here and give too much weight to high depths, because the biggest branching factor numbers for Komodo are at depths 2-4.

Maybe it is better to try a least-squares fit for the formula
log(nodes)=log(C*A^(depth^B))
or
log(nodes)=log(C)+depth^B*log(A)
Correct. I fitted with:

log(nodes) = c+a*depth^b.

The least squares fit for SF is:
log(nodes) = 4.14+1.00*depth^0.839
Pretty significant decrease in EBF with depth.

The least squares fit for Komodo 8 is:
log(nodes) = 4.38+0.755*depth^0.952
Mild decrease in EBF with depth.
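
For reference, the per-ply branching factor implied by a fit of this form is BF(d) = exp(a*(d^b - (d-1)^b)). A minimal Python sketch evaluating both fits at a few depths (coefficients as above):

Code: Select all

# Sketch: EBF implied by log(nodes) = c + a*d^b is
# BF(d) = nodes(d)/nodes(d-1) = exp(a*(d**b - (d-1)**b)).
import math

fits = {"SF":       (1.00, 0.839),   # a, b
        "Komodo 8": (0.755, 0.952)}  # a, b

for name, (a, b) in fits.items():
    row = ", ".join(f"d={d}: {math.exp(a*(d**b - (d-1)**b)):.2f}"
                    for d in (10, 20, 30, 40))
    print(f"{name}: {row}")

The SF numbers fall off noticeably with depth, while the Komodo 8 ones barely move, matching the "pretty significant" vs. "mild" decrease noted above.
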
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Some fun with Komodo 8

Post by bob »

Laskos wrote:1/ Effective branching factors with and without LMR and Null Move

Komodo 8

Ply:11 Positions:150 Avg Nodes: 130966 Branching = 2.03
Ply:12 Positions:150 Avg Nodes: 256428 Branching = 1.96
Ply:13 Positions:150 Avg Nodes: 496702 Branching = 1.94
Ply:14 Positions:150 Avg Nodes: 952252 Branching = 1.92
Ply:15 Positions:150 Avg Nodes: 1722201 Branching = 1.81
EBF 1.93



Komodo 8 no LMR

Ply: 7 Positions:150 Avg Nodes: 15755 Branching = 2.13
Ply: 8 Positions:150 Avg Nodes: 36239 Branching = 2.30
Ply: 9 Positions:150 Avg Nodes: 88718 Branching = 2.45
Ply:10 Positions:150 Avg Nodes: 229883 Branching = 2.59
Ply:11 Positions:150 Avg Nodes: 596821 Branching = 2.60
EBF 2.41



Komodo 8 no LMR no Null Move


Ply: 5 Positions:150 Avg Nodes: 5305 Branching = 2.44
Ply: 6 Positions:150 Avg Nodes: 11355 Branching = 2.14
Ply: 7 Positions:150 Avg Nodes: 29124 Branching = 2.56
Ply: 8 Positions:150 Avg Nodes: 77321 Branching = 2.65
Ply: 9 Positions:150 Avg Nodes: 265114 Branching = 3.43
EBF 2.61



2/ Fixed depth Elo loss due to LMR and Null Move

Fixed depth 12:
Score of K8 vs K8 no LMR: 8 - 59 - 33 [0.24] 100
ELO difference: -196

Fixed depth 12:
Score of K8 vs K8 no Null Move: 17 - 47 - 36 [0.35] 100
ELO difference: -108



3/ Fixed time Elo gain due to LMR and Null Move

Fixed time 10''+0.1''
Score of K8 vs K8 no LMR: 48 - 11 - 41 [0.69] 100
ELO difference: 135


Fixed time 10''+0.1''
Score of K8 vs K8 no Null Move: 51 - 17 - 32 [0.67] 100
ELO difference: 123


Fixed time 10''+0.1''
Score of K8 no Null Move vs K8 no LMR: 39 - 26 - 35 [0.56] 100
ELO difference: 45



4/ Legendary Komodo widening on parallel search

Fixed depth 11:
Score of K8 8 threads vs K8 1 thread: 39 - 15 - 46 [0.62] 100
ELO difference: 85
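
(Aside: the "ELO difference" lines quoted above are the standard logistic conversion from the score fraction. A minimal Python sketch, using the W-L-D counts as printed:)

Code: Select all

# Sketch: Elo difference from a match score, elo = 400*log10(s/(1-s)),
# where s = (wins + draws/2) / games.
import math

def elo_diff(wins, losses, draws):
    s = (wins + 0.5 * draws) / (wins + losses + draws)
    return 400 * math.log10(s / (1 - s))

print(round(elo_diff(8, 59, 33)))    # K8 vs K8 no LMR, depth 12: about -196
print(round(elo_diff(39, 15, 46)))   # 8 threads vs 1 thread, depth 11: about +85
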
The branching factor stuff is interesting; the fixed-depth Elo loss is worthless. The whole idea of using null move or LMR is to go deeper. If you don't allow that by using a fixed-depth search, why would you expect anything other than for LMR, null move, or both to be worse, since you have eliminated the advantage of more depth and kept only the disadvantage of more pruning errors?

I've run a bunch of these tests previously, but using timed tests rather than fixed depth. Typical results are that removing either LMR or null move reduces Elo by about 80, and removing the other takes away another 40 Elo. Those are somewhat like your last numbers, but I've not tried to test LMR vs. null move directly.

As far as this "widening" stuff goes: ALL programs search wider in the parallel search. This is usually called "parallel search overhead", and it is NOT considered a "good thing."
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Some fun with Komodo 8

Post by Laskos »

bob wrote:
Laskos wrote:1/ Effective branching factors with and without LMR and Null Move

Komodo 8

Ply:11 Positions:150 Avg Nodes: 130966 Branching = 2.03
Ply:12 Positions:150 Avg Nodes: 256428 Branching = 1.96
Ply:13 Positions:150 Avg Nodes: 496702 Branching = 1.94
Ply:14 Positions:150 Avg Nodes: 952252 Branching = 1.92
Ply:15 Positions:150 Avg Nodes: 1722201 Branching = 1.81
EBF 1.93



Komodo 8 no LMR

Ply: 7 Positions:150 Avg Nodes: 15755 Branching = 2.13
Ply: 8 Positions:150 Avg Nodes: 36239 Branching = 2.30
Ply: 9 Positions:150 Avg Nodes: 88718 Branching = 2.45
Ply:10 Positions:150 Avg Nodes: 229883 Branching = 2.59
Ply:11 Positions:150 Avg Nodes: 596821 Branching = 2.60
EBF 2.41



Komodo 8 no LMR no Null Move


Ply: 5 Positions:150 Avg Nodes: 5305 Branching = 2.44
Ply: 6 Positions:150 Avg Nodes: 11355 Branching = 2.14
Ply: 7 Positions:150 Avg Nodes: 29124 Branching = 2.56
Ply: 8 Positions:150 Avg Nodes: 77321 Branching = 2.65
Ply: 9 Positions:150 Avg Nodes: 265114 Branching = 3.43
EBF 2.61



2/ Fixed depth Elo loss due to LMR and Null Move

Fixed depth 12:
Score of K8 vs K8 no LMR: 8 - 59 - 33 [0.24] 100
ELO difference: -196

Fixed depth 12:
Score of K8 vs K8 no Null Move: 17 - 47 - 36 [0.35] 100
ELO difference: -108



3/ Fixed time Elo gain due to LMR and Null Move

Fixed time 10''+0.1''
Score of K8 vs K8 no LMR: 48 - 11 - 41 [0.69] 100
ELO difference: 135


Fixed time 10''+0.1''
Score of K8 vs K8 no Null Move: 51 - 17 - 32 [0.67] 100
ELO difference: 123


Fixed time 10''+0.1''
Score of K8 no Null Move vs K8 no LMR: 39 - 26 - 35 [0.56] 100
ELO difference: 45



4/ Legendary Komodo widening on parallel search

Fixed depth 11:
Score of K8 8 threads vs K8 1 thread: 39 - 15 - 46 [0.62] 100
ELO difference: 85
The branching factor stuff is interesting; the fixed-depth Elo loss is worthless. The whole idea of using null move or LMR is to go deeper. If you don't allow that by using a fixed-depth search, why would you expect anything other than for LMR, null move, or both to be worse, since you have eliminated the advantage of more depth and kept only the disadvantage of more pruning errors?

I've run a bunch of these tests previously, but using timed tests rather than fixed depth. Typical results are that removing either LMR or null move reduces Elo by about 80, and removing the other takes away another 40 Elo. Those are somewhat like your last numbers, but I've not tried to test LMR vs. null move directly.

As far as this "widening" stuff goes: ALL programs search wider in the parallel search. This is usually called "parallel search overhead", and it is NOT considered a "good thing."
The fixed-depth tests I did just for fun, to see how much more it prunes with LMR and null move. Your last statement I don't understand: you systematically called the widening at fixed depth either non-existent or a bug. Now ALL are doing that? Not all: Crafty does not, Houdini does not, etc. If ALL are doing that, then TTD (time to depth) is meaningless for ALL of them in measuring the effective speed-up from one to several threads.
Ajedrecista
Posts: 1968
Joined: Wed Jul 13, 2011 9:04 pm
Location: Madrid, Spain.

Re: Some fun with Komodo 8.

Post by Ajedrecista »

Hello Kai:
Laskos wrote:
Uri Blass wrote:
Thanks
I think that least squares may be misleading here and give too much weight to high depths, because the biggest branching factor numbers for Komodo are at depths 2-4.

Maybe it is better to try a least-squares fit for the formula
log(nodes)=log(C*A^(depth^B))
or
log(nodes)=log(C)+depth^B*log(A)
Correct. I fitted with:

log(nodes) = c+a*depth^b.

The least squares fit for SF is:
log(nodes) = 4.14+1.00*depth^0.839
Pretty significant decrease in EBF with depth.

The least squares fit for Komodo 8 is:
log(nodes) = 4.38+0.755*depth^0.952
Mild decrease in EBF with depth.
Interesting thread. Just to work with BFs instead of nodes: if BF(d) = n(d)/n(d-1) and ln[n(d)] = c + a*d^b, then:

ln[BF(d)] = ln[n(d)] - ln[n(d-1)] = a*[d^b - (d-1)^b] = H; BF(d) = exp(H)

And the decrease of BF with d can be approximated with the first derivative: BF'(d) = H'*exp(H) = a*b*[d^(b-1) - (d-1)^(b-1)]*BF(d) < 0, since b < 1 in both fits. As a predictor, keeping only the first-order term of the Taylor series: BF(d+1) ≈ BF(d) + BF'(d) < BF(d).
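
A quick numerical check of that predictor, using the SF fit quoted above (a = 1.00, b = 0.839; a minimal Python sketch):

Code: Select all

# Sketch: first-order Taylor predictor BF(d+1) ~ BF(d) + BF'(d),
# with H = a*(d**b - (d-1)**b), BF(d) = exp(H), BF'(d) = H'*BF(d).
import math

a, b = 1.00, 0.839  # SF fit from above

def bf(d):
    return math.exp(a * (d**b - (d - 1)**b))

def bf_prime(d):
    return a * b * (d**(b - 1) - (d - 1)**(b - 1)) * bf(d)

d = 15
print(f"exact BF({d+1}) = {bf(d+1):.3f}")            # about 1.716
print(f"predicted     = {bf(d) + bf_prime(d):.3f}")  # about 1.714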

Regards from Spain.

Ajedrecista.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Some fun with Komodo 8

Post by bob »

Laskos wrote:
bob wrote:
Laskos wrote:1/ Effective branching factors with and without LMR and Null Move

Komodo 8

Ply:11 Positions:150 Avg Nodes: 130966 Branching = 2.03
Ply:12 Positions:150 Avg Nodes: 256428 Branching = 1.96
Ply:13 Positions:150 Avg Nodes: 496702 Branching = 1.94
Ply:14 Positions:150 Avg Nodes: 952252 Branching = 1.92
Ply:15 Positions:150 Avg Nodes: 1722201 Branching = 1.81
EBF 1.93



Komodo 8 no LMR

Ply: 7 Positions:150 Avg Nodes: 15755 Branching = 2.13
Ply: 8 Positions:150 Avg Nodes: 36239 Branching = 2.30
Ply: 9 Positions:150 Avg Nodes: 88718 Branching = 2.45
Ply:10 Positions:150 Avg Nodes: 229883 Branching = 2.59
Ply:11 Positions:150 Avg Nodes: 596821 Branching = 2.60
EBF 2.41



Komodo 8 no LMR no Null Move


Ply: 5 Positions:150 Avg Nodes: 5305 Branching = 2.44
Ply: 6 Positions:150 Avg Nodes: 11355 Branching = 2.14
Ply: 7 Positions:150 Avg Nodes: 29124 Branching = 2.56
Ply: 8 Positions:150 Avg Nodes: 77321 Branching = 2.65
Ply: 9 Positions:150 Avg Nodes: 265114 Branching = 3.43
EBF 2.61



2/ Fixed depth Elo loss due to LMR and Null Move

Fixed depth 12:
Score of K8 vs K8 no LMR: 8 - 59 - 33 [0.24] 100
ELO difference: -196

Fixed depth 12:
Score of K8 vs K8 no Null Move: 17 - 47 - 36 [0.35] 100
ELO difference: -108



3/ Fixed time Elo gain due to LMR and Null Move

Fixed time 10''+0.1''
Score of K8 vs K8 no LMR: 48 - 11 - 41 [0.69] 100
ELO difference: 135


Fixed time 10''+0.1''
Score of K8 vs K8 no Null Move: 51 - 17 - 32 [0.67] 100
ELO difference: 123


Fixed time 10''+0.1''
Score of K8 no Null Move vs K8 no LMR: 39 - 26 - 35 [0.56] 100
ELO difference: 45



4/ Legendary Komodo widening on parallel search

Fixed depth 11:
Score of K8 8 threads vs K8 1 thread: 39 - 15 - 46 [0.62] 100
ELO difference: 85
The branching factor stuff is interesting; the fixed-depth Elo loss is worthless. The whole idea of using null move or LMR is to go deeper. If you don't allow that by using a fixed-depth search, why would you expect anything other than for LMR, null move, or both to be worse, since you have eliminated the advantage of more depth and kept only the disadvantage of more pruning errors?

I've run a bunch of these tests previously, but using timed tests rather than fixed depth. Typical results are that removing either LMR or null move reduces Elo by about 80, and removing the other takes away another 40 Elo. Those are somewhat like your last numbers, but I've not tried to test LMR vs. null move directly.

As far as this "widening" stuff goes: ALL programs search wider in the parallel search. This is usually called "parallel search overhead", and it is NOT considered a "good thing."
The fixed-depth tests I did just for fun, to see how much more it prunes with LMR and null move. Your last statement I don't understand: you systematically called the widening at fixed depth either non-existent or a bug. Now ALL are doing that? Not all: Crafty does not, Houdini does not, etc. If ALL are doing that, then TTD (time to depth) is meaningless for ALL of them in measuring the effective speed-up from one to several threads.
It is STILL a bug. But all programs have the problem of searching wider when they do a parallel search, because for the same depth they examine significantly more nodes, which detracts from going deeper. So you gain a little from the extra nodes in this so-called "widening", but you would gain MORE from the extra depth you earn with a faster search, and you LOSE some of that depth due to the overhead (which you are calling widening). You won't find a parallel programmer around who will voluntarily trade some of the depth to go "wider". If that were a good idea, it would be a good idea for the non-parallel search as well, which is the key point here. So yes, you might gain from the extra width, but you would gain more if that were trimmed away and traded for more depth, which is what we ALL are trying to do. It is the very reason YBW is used, in fact.
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Some fun with Komodo 8

Post by mjlef »

Just imagine how much stronger we could make Komodo if we could remove that "bug"! :-)

And yes, that is a joke. We do things in Komodo to make it stronger, even if they are not what everyone else is doing.

Mark
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Some fun with Komodo 8

Post by mjlef »

Thanks for running all of these. They are very interesting and quite useful.

This kind of experimentation can tell a lot about a program, and we encourage it.

Mark
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Some fun with Komodo 8

Post by bob »

mjlef wrote:Just imagine how much stronger we could make Komodo if we could remove that "bug"! :-)

And yes, that is a joke. We do things in Komodo to make it stronger, even if they are not what everyone else is doing.

Mark
OK, just to get us on the same page, here's the past discussion about the concept of "widening" the SMP search rather than "deepening" it.

1. Everybody I know of agrees that depth is the reason for a parallel search. Ideally with 4 cores you would run 4x faster, and if your EBF is 2.0, you would gain 2 plies of search. This has been discussed ad infinitum, including by Don, BTW. (The sketch after this list puts numbers on this point and the next.)

2. Everybody ALSO knows that there is no perfect SMP scaling, because move ordering is not perfect. That means you will split and then get a fail-high at that node, wasting a lot of effort; the tree grows. Most report a number around 30%. This is not desirable; it is just an effect of imperfect move ordering, and we have to live with it.

3. Kai introduced this concept of "widening" the search, and observed the effect by playing fixed-depth matches of Komodo 1 CPU vs. Komodo 4 CPU and such, and he noticed that the 4-CPU version was stronger. I don't consider that unusual, since the 4-CPU version, searching to a fixed depth, will certainly search far more nodes than the 1-CPU version for the same depth, and those extra nodes can be a benefit, as his numbers have shown.

4. My claim is that this is not something intentional (more about an offline discussion with Don in a minute), but a direct result of producing excessive search overhead, something that almost everyone would be happy to reduce in order to go a bit deeper.

5. I also claim that if the widening is _intentional_ and it is shown to improve the results, then that is a flaw in the serial search: it is pruning things it should not prune. And that flaw could be fixed by doing the same "widening" that is done in the parallel search in the sequential search too. No rational person would do that, however, unless they know their serial search is too speculative and they choose to invest the extra hardware (cores) in an effort to offset the speculative search, rather than going even deeper with that speculation.
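
To put numbers on points 1 and 2, a minimal Python sketch (the 4x speedup, EBF of 2.0, and ~30% overhead figures are the ones above):

Code: Select all

# Sketch: extra depth gained from a parallel speedup is
# log(speedup)/log(EBF); search overhead (extra nodes at the
# same depth) shrinks the effective speedup.
import math

def extra_depth(speedup, ebf):
    return math.log(speedup) / math.log(ebf)

print(extra_depth(4.0, 2.0))        # ideal 4 cores, EBF 2.0 -> 2.0 plies
print(extra_depth(4.0 / 1.3, 2.0))  # with ~30% overhead -> about 1.6 plies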

I talked to Don offline and the first comment he ever made was something like this, which I would assume you can verify by looking at Komodo's search, unless it has been improved on...

He explained that he had done a quick and dirty parallel implementation, and that one particular issue he had was that at nodes where splits are done, his LMR and such is not done very well, because of how the moves being searched are counted. I have a counter at each split point for this, so that the moves in the move list get reduced or pruned the same whether it is a split point or not, since the move's position in the move list still directs how much it is reduced or when it is pruned. He pointed out that the way he was doing things (at that time, which I would guess was in the 3-4 months before he passed away) simply reduced/pruned/etc. the split-point moves less than what would be expected. He did indicate that he had plans to fix that, but, as he said, "after all the other things are fixed from the rewrite."

So, "widening happens". No doubt. But is it intentional? You can certainly answer or not as you choose. But my intuition is, and you know how long I have been doing parallel search, depth is the game of parallel search, NOT a bushier/fatter tree to the same depth. I could see a program just going for bushiness if the programmer really can't figure out how to do a reasonable parallel alpha/beta search. But there's plenty of literature on that topic, and plenty of programs to look at (it appears that the majority are "crafty-ish" because it is a straightforward way of doing the YBW search as opposed to the much more complex Cray Blitz stuff. So with plenty to look at, not doing a decent parallel search seems like something Don would NOT do, because he'd been doing parallel search for quite a while as well, I think starting with *Socrates in the middle 90's or so.

So, is your widening really intentional, or accidental? That's what this discussion has been about. Don certainly said "accidental" several months back. Online he tried to be a bit cagier about his answer, but that seemed to hold up, particularly during our offline emails (we discussed several different things from time usage to parallel search to reductions/pruning/null-move/etc...)

That brings you up to date, and you have my personal opinion, which is based on nothing more than a few statements from Don and a ton of experience with parallel search. Mine dates back to the 1978 ACM computer chess tournament in Washington, DC, on a Univac 1100/20 dual-CPU machine.

As Paul Harvey used to say, "Now you know the rest of the story."
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Some fun with Komodo 8

Post by mjlef »

Unfortunately, I will have to keep what Komodo does as a kind of "trade secret" for now, since it seems to give us an advantage over other programs. As others have discovered, the methods used in Komodo lead to a strength gain at a given depth, on top of the shorter time to completion of that depth. For Komodo 8 we changed part of the scheme Don came up with, and these changes seem to have improved efficiency and scaling.

I think you would agree that, with traditional schemes, scaling gets poorer and poorer as the number of processors increases. At some point, doubling the processors will only give a few more Elo. We want something better than that.

I make no claim that what we do is optimal, and we hope to make further improvements in the future. Better MP use is definitely something we need to work on.