In the evaluation function, is it better to have a few well-tuned weights, or more weights that are not well tuned?
What does your experience say?
Evaluation doubt
Moderators: hgm, Rebel, chrisw
-
- Posts: 217
- Joined: Fri Apr 11, 2014 10:45 am
- Full name: Fabio Gobbato
-
- Posts: 1334
- Joined: Sun Jul 17, 2011 11:14 am
Re: Evaluation doubt
Fewer, but better. Fewer weights help with orthogonality, and the tuning may well make up for the missing terms. Plus, the terms will harmonise with your search, increasing search speed.
Some believe in the almighty dollar.
I believe in the almighty printf statement.
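To make the "few weights, well tuned" idea concrete, here is a minimal sketch of Texel-style tuning for a small linear evaluation. The feature set (five material terms), the sigmoid constant, and the synthetic training data are all illustrative assumptions, not taken from any particular engine.

```python
# Hypothetical sketch: tune a handful of evaluation weights by minimizing
# the squared error between predicted and actual game results.
import math
import random

def evaluate(weights, features):
    # A tiny linear evaluation: score = dot(weights, features), in centipawns.
    return sum(w * f for w, f in zip(weights, features))

def sigmoid(score, k=1.0 / 400.0):
    # Map a centipawn score to an expected game result in [0, 1].
    return 1.0 / (1.0 + 10.0 ** (-k * score))

def mse(weights, positions):
    # positions: list of (features, result) with result in {0.0, 0.5, 1.0}.
    return sum((r - sigmoid(evaluate(weights, f))) ** 2
               for f, r in positions) / len(positions)

def tune(weights, positions, step=10.0, rounds=30):
    # Simple coordinate descent: nudge one weight at a time,
    # keep only changes that lower the error, then shrink the step.
    best = mse(weights, positions)
    for _ in range(rounds):
        improved = False
        for i in range(len(weights)):
            for delta in (step, -step):
                weights[i] += delta
                e = mse(weights, positions)
                if e < best:
                    best = e
                    improved = True
                else:
                    weights[i] -= delta  # revert: no improvement
        if not improved:
            step /= 2.0
    return weights

# Synthetic training set: results sampled from "true" piece values
# (pawn, knight, bishop, rook, queen), features = white minus black counts.
random.seed(1)
true_w = [100, 320, 330, 500, 900]
positions = []
for _ in range(500):
    f = [random.randint(-2, 2) for _ in true_w]
    p_win = sigmoid(evaluate(true_w, f))
    positions.append((f, 1.0 if random.random() < p_win else 0.0))

w = tune([100, 300, 300, 500, 900], positions)
```

With only five weights, even this crude coordinate descent converges quickly; the same loop over hundreds of weights would need far more data and iterations to tune each one as well.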
-
- Posts: 396
- Joined: Sat May 05, 2012 2:48 pm
- Full name: Oliver Roese
Re: Evaluation doubt
Read chapter 5 of this book
http://www.deeplearningbook.org/
and you can discuss the question yourself. Or read any introduction to "Machine Learning".
-
- Posts: 4833
- Joined: Sun Aug 10, 2008 3:15 pm
- Location: Philippines
Re: Evaluation doubt
BeyondCritics wrote:Read chapter 5 of this book
http://www.deeplearningbook.org/
and you can discuss the question yourself. Or read any introduction to "Machine Learning".
Good info, thanks.
-
- Posts: 217
- Joined: Fri Apr 11, 2014 10:45 am
- Full name: Fabio Gobbato
Re: Evaluation doubt
Of course fewer weights are easier to tune, but even well tuned they can't express all the knowledge of a more complex evaluation.
So the answer is not so easy.