Computing the gradient of the MSE cost function

Discussion of chess software programming and technical issues.


Henk

Re: Computing the gradient of the MSE cost function

Post by Henk »

clayt wrote: Mon Jun 13, 2022 7:33 pm
Henk wrote: Mon Jun 13, 2022 12:50 pm I remember backpropagation was invented to compute the gradient efficiently. So why don't you use a network?
This isn't exactly correct: backpropagation is a technique for computing the gradient of multi-layer neural nets. What we're doing here is actually just a single-layer neural net (also known as an adaptive linear element, or Adaline), which does not require backpropagation at all.
Complex is bad. Simple is good. So it would be best to use simple formulas. Or complex ones for training our brains and going insane (if we don't watch out).
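
For the single-layer (Adaline-style) case described above, the MSE gradient follows from one application of the chain rule, so no backpropagation through hidden layers is needed. A minimal sketch, with made-up data and parameter names purely for illustration:

```python
import numpy as np

# Hypothetical data: each row of X is a feature vector (e.g. evaluation terms),
# y holds the target values. All names and values here are invented for the example.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # 100 samples, 5 features
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def mse_and_gradient(w, X, y):
    """MSE cost and its gradient for a single-layer linear model y_hat = X @ w."""
    residual = X @ w - y                      # prediction error per sample
    cost = np.mean(residual ** 2)             # MSE = (1/N) * sum(residual^2)
    grad = 2.0 * X.T @ residual / len(y)      # d(MSE)/dw via the chain rule
    return cost, grad

# Plain gradient descent on the weights; learning rate chosen arbitrarily.
w = np.zeros(5)
for step in range(500):
    cost, grad = mse_and_gradient(w, X, y)
    w -= 0.05 * grad

print(cost, w)
```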