I cannot give you a straight answer, so I'll answer in parts and try to keep those parts understandable:
* some chess positions (and in particular, my intuition tells me, the relationships between pieces) and their evaluations will be mapped into a multidimensional space
* I will use an algorithm I am building to fit a polynomial that is good in the machine-learning sense: accuracy is de-emphasised (hence ranges for evaluation scores rather than exact numbers) and simplicity is emphasised (the lowest degree and smallest number of terms possible)
* hopefully, some of the positions can then be discarded without lowering the quality of the resulting EF (evaluation function)
* hopefully, the resulting EF will evaluate some positions well
* where it evaluates a position badly, new position/evaluation data will be added to strengthen it there
* repeat (a rough sketch of this loop follows below)
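
To make the loop concrete, here is a minimal sketch (assuming scikit-learn) of one way it could look. This is not the algorithm I am building: PolynomialFeatures plus Lasso is only a stand-in for "lowest degree and smallest number of terms", each evaluation range is crudely reduced to its midpoint, and every name and parameter here (degree, alpha, the feature arrays) is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures


def fit_simple_polynomial(X, lo, hi, degree=2, alpha=0.1):
    """Fit a polynomial that favours simplicity: the L1 penalty drives most
    coefficients to zero, leaving only a small number of terms."""
    y = (lo + hi) / 2.0  # midpoint of each evaluation range as the training target
    model = make_pipeline(PolynomialFeatures(degree=degree),
                          Lasso(alpha=alpha, max_iter=10_000))
    model.fit(X, y)
    return model


def refine(model, X, lo, hi, cand_X, cand_lo, cand_hi):
    """Where the current polynomial falls outside a candidate position's
    evaluation range, add that position to the data and refit
    (the "strengthen it there" step)."""
    pred = model.predict(cand_X)
    bad = (pred < cand_lo) | (pred > cand_hi)
    X2 = np.vstack([X, cand_X[bad]])
    lo2 = np.concatenate([lo, cand_lo[bad]])
    hi2 = np.concatenate([hi, cand_hi[bad]])
    return fit_simple_polynomial(X2, lo2, hi2), X2, lo2, hi2
```

The discard step would work the same way in reverse: positions whose predictions already sit comfortably inside their ranges contribute little and could be dropped before refitting.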
IMO, there are two reasons to be hopeful:
1. my intuition tells me that the size of the polynomial will grow less quickly than the number of data rows (if the polynomial grows exponentially with extra data rows, that will be very disappointing)
2. my experience with CBR (case-based reasoning) tells me that the number of data rows needed is likely to be lower than most people expect. In CBR, the number of cases needed to make a good system is almost always far lower than people expect, and my intuition tells me there are similarities with what I am doing here
The ultimate achievement would be, to put it simply, to look at the polynomial, spot what it is converging to, and hence write an EF that evaluates all chess positions correctly. There is quite a way to go to get to that point!
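
As a toy illustration of what "looking at the polynomial" could mean in practice (again assuming scikit-learn, with made-up feature names and data, and nothing to do with real chess evaluation), the surviving terms of a sparse fit can simply be listed:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-in position features
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] * X[:, 2] + rng.normal(scale=0.05, size=200)

model = make_pipeline(PolynomialFeatures(degree=2),
                      Lasso(alpha=0.05, max_iter=10_000))
model.fit(X, y)

poly = model.named_steps["polynomialfeatures"]
lasso = model.named_steps["lasso"]
names = poly.get_feature_names_out(["material", "mobility", "king_safety"])
for name, coef in zip(names, lasso.coef_):
    if abs(coef) > 1e-2:  # print only the terms that survived the sparsity penalty
        print(f"{coef:+.3f} * {name}")
```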