This is not the point...
The idea is to gather information, not necessarily the same information you get with time-based testing.
I will ask again: what useful information do you get from fixed-node searching? This kind of testing has a built-in bias, since now all nodes are created equal and are assumed to take the same amount of time to process. Unfortunately, that is not an accurate assumption. If a program slows down at some point, normally where its NPS simply drops because it is doing some sort of extra work per node, it doesn't slow down in a fixed-node search, and effectively runs faster than it should. Since many programs have a 2x-3x NPS variance over the course of a game, giving a program a 2x-3x time handicap or advantage is a big Elo change, and it isn't accounted for. That's more than enough to make the results suspect. So now you have to test again to see whether you got burned by the NPS variance issue. With timed testing, you discover this immediately, because the handicap/advantage goes away when everyone has the same amount of _time_ to use.
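To make the arithmetic concrete, here is a minimal sketch of the hidden time handicap. All the numbers (node budget, NPS figures) are assumptions for illustration only, not measurements from any particular program:

```python
# Illustration: a fixed-node search hides NPS variance that a timed
# search would expose. Numbers below are hypothetical.

NODE_BUDGET = 10_000_000  # fixed node count per move

# Suppose a program's NPS drops 3x in some phase of the game,
# e.g. because it does extra work per node there.
nps_fast = 3_000_000  # nodes/sec in the fast phase
nps_slow = 1_000_000  # nodes/sec in the slow phase

# Wall-clock time actually consumed to finish the fixed node budget:
time_fast = NODE_BUDGET / nps_fast  # about 3.3 seconds
time_slow = NODE_BUDGET / nps_slow  # 10.0 seconds

# Both searches look identical in a fixed-node match (10M nodes each),
# but the slow phase silently consumed 3x the wall-clock time: an
# unaccounted 3x time advantage relative to timed play.
ratio = time_slow / time_fast
print(f"fast phase: {time_fast:.1f} s, slow phase: {time_slow:.1f} s, "
      f"hidden time ratio: {ratio:.1f}x")
```

The point of the sketch is just that the ratio of consumed wall-clock time equals the NPS ratio, so a 3x NPS swing is a 3x time swing that fixed-node results never see.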
If you can learn something from fixed-node searching, you can learn the same thing from fixed-depth searching. Yet we all know that a ply is not constant between programs, and the time to complete a ply is not even constant within a single program over the course of a game. Again, bias works its way in.
If you can figure out a way to factor that bias out of the equation, then I'd agree it will work. But if your solution is to also run a timed match, why not start there in the first place?