The inference code is quite simple once you understand what has to be done: it took me only a few days (back in August) to get my own C implementation working. The training part, on the other hand, is much more difficult to write from scratch... I finally managed to do it a few days ago, with great results (I'm very happy about that: I learned something new and very challenging!).
I'm not familiar at all with licenses (that's one of the reasons why Orion's source code is not available, the others being that the engine is too weak to be helpful, and that the code is maybe not so "elegant"), but if people are interested, and if it is possible to find the most "public domain" license possible, I would be happy to share my work (both the inference code in C, with incremental updates, and the training code in Python, using PyTorch).
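To illustrate the "incremental updates" mentioned above: after a move, instead of recomputing the whole first-layer accumulator, you only add/subtract the weight columns of the features that changed. This is a minimal sketch of that idea (in Python for brevity, even though Orion's real inference is in C); the sizes and weight values are toy assumptions, not Orion's actual ones:

```python
HIDDEN, FEATURES = 8, 16  # toy sizes; real NNUE uses e.g. 256 hidden units

# toy weight matrix: one column of HIDDEN weights per input feature
weights = [[f + j for j in range(HIDDEN)] for f in range(FEATURES)]
bias = list(range(HIDDEN))

def refresh(active):
    """Full recomputation: bias plus the columns of all active features."""
    acc = bias[:]
    for f in active:
        for j in range(HIDDEN):
            acc[j] += weights[f][j]
    return acc

def update(acc, added, removed):
    """Incremental update: only touch the features that changed."""
    for f in added:
        for j in range(HIDDEN):
            acc[j] += weights[f][j]
    for f in removed:
        for j in range(HIDDEN):
            acc[j] -= weights[f][j]
    return acc
```

Because the first layer is linear in its inputs, the incremental path gives exactly the same accumulator as a full refresh, at a fraction of the cost.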
Two points of attention:
* the training code produces networks that are very similar to SF's, but not compatible with them (strength seems very good, but I haven't tested it intensively so far);
* the networks use floats, which means 40 MB for the "standard" NNUE architecture. But this leaves a lot of interesting possibilities for enhancement: quantization, smaller architectures, etc.
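As a back-of-the-envelope check of the 40 MB figure, here is the parameter count assuming the published SF HalfKP layer sizes (41024 inputs → 256 per perspective, then 512 → 32 → 32 → 1) and 32-bit floats; those layer sizes are my assumption of what "standard" means here:

```python
# (inputs, outputs) per layer of the assumed "standard" HalfKP architecture
layers = [(41024, 256), (512, 32), (32, 32), (32, 1)]
params = sum(i * o + o for i, o in layers)  # weights + biases
mib = params * 4 / 2**20                    # float32 = 4 bytes -> MiB
print(round(mib, 1))                        # roughly 40 MiB
```

Quantizing those weights to int16 or int8, as SF does, would cut the file size by 2x or 4x respectively.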
Finally, I wonder what the community would think about releasing an "official" version of Orion, using my own implementation of the inference code and a net trained with my own (home-made) trainer, but... using SF's eval to train the whole thing.

I think some people would strongly "disapprove", but on the other hand, a lot of work has been done and, as humans, we need a teacher to learn. I think it's the same for engines: they need to be trained by the best experts.
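The "teacher" idea amounts to distillation: train the student net to regress the teacher's evaluation of each position. A minimal PyTorch sketch, where the network sizes and the stand-in teacher function are purely illustrative (not Orion's trainer, and not SF's actual eval):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# tiny student network; a real NNUE trainer would use the full architecture
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

def teacher_eval(x):
    # stand-in for the teacher engine's evaluation of encoded positions
    return x.sum(dim=1, keepdim=True)

positions = torch.randn(256, 16)        # fake position encodings
targets = teacher_eval(positions)       # teacher scores as regression targets

opt = torch.optim.Adam(student.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

losses = []
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(student(positions), targets)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

In a real setup the positions would come from games and the targets from the teacher engine's search or static eval, but the loop itself looks the same.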
