In an evaluation function, is it better to have a few well-tuned weights, or more weights that are not well tuned?
What does your experience say?
Evaluation doubt
Moderators: bob, hgm, Harvey Williamson
- Fabio Gobbato
- Posts: 132
- Joined: Fri Apr 11, 2014 8:45 am
- Posts: 1327
- Joined: Sun Jul 17, 2011 9:14 am
Re: Evaluation doubt
Fewer, but better. Fewer weights help with orthogonality, and the tuning may well make up for the missing terms. Plus, the terms will harmonise with your search, increasing search speed.
Some believe in the almighty dollar.
I believe in the almighty printf statement.
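The idea that tuning can make up for missing terms is essentially Texel-style tuning: fit the weights so that a sigmoid of the evaluation predicts game results. Below is a minimal sketch on fabricated data; the two-term evaluation (material, mobility) and every number in it are assumptions for illustration, not anyone's actual engine.

```python
# Minimal sketch of Texel-style tuning on fabricated data: fit a small
# set of evaluation weights so that sigmoid(eval) predicts game results.
# The two features (material, mobility) and all numbers are hypothetical.
import math
import random

random.seed(1)

TRUE_W = [1.0, 0.25]  # hidden weights used only to generate fake data


def evaluate(features, weights):
    """Linear evaluation: dot product of features and weights."""
    return sum(f * w for f, w in zip(features, weights))


def sigmoid(score):
    """Map a score to an expected game result in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-score))


# Fabricated training set: (features, result) with result in {0, 1}.
data = []
for _ in range(500):
    f = [random.uniform(-3, 3), random.uniform(-4, 4)]
    r = 1.0 if random.random() < sigmoid(evaluate(f, TRUE_W)) else 0.0
    data.append((f, r))


def mse(weights):
    """Mean squared error between predicted and actual results."""
    return sum((sigmoid(evaluate(f, weights)) - r) ** 2
               for f, r in data) / len(data)


# Plain batch gradient descent on the mean squared error.
w = [0.0, 0.0]
for _ in range(2000):
    grad = [0.0, 0.0]
    for f, r in data:
        p = sigmoid(evaluate(f, w))
        d = 2.0 * (p - r) * p * (1.0 - p)
        for i in range(len(w)):
            grad[i] += d * f[i]
    for i in range(len(w)):
        w[i] -= 0.2 * grad[i] / len(data)

print("recovered weights:", [round(x, 2) for x in w])
```

With only two weights, plain gradient descent recovers values close to the ones used to generate the data; the more weights you add, the more games this procedure needs before the estimates settle down.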
- Posts: 353
- Joined: Sat May 05, 2012 12:48 pm
- Location: Bergheim
Re: Evaluation doubt
Read chapter 5 of this book
http://www.deeplearningbook.org/
and you can discuss the question yourself. Or read any introduction to "Machine Learning".
Re: Evaluation doubt
BeyondCritics wrote:Read chapter 5 of this book
http://www.deeplearningbook.org/
and you can discuss the question yourself. Or read any introduction to "Machine Learning".
Good info, thanks.
- Fabio Gobbato
- Posts: 132
- Joined: Fri Apr 11, 2014 8:45 am
Re: Evaluation doubt
Of course fewer weights are easier to tune, but even well tuned they cannot express all the knowledge of a more complex evaluation.
So the answer is not so easy.
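This trade-off is the classic capacity-versus-overfitting one from machine learning: more weights can express more knowledge, but with limited tuning data they fit the noise. A toy illustration on fabricated 1-D data (standing in for positions; none of the numbers mean anything chess-specific):

```python
# Toy illustration of the capacity trade-off: with limited "tuning" data,
# a model with many weights fits the training set better but generalises
# worse than a small one. Data are fabricated, not chess positions.
import numpy as np

rng = np.random.default_rng(0)


def truth(x):
    return 2.0 * x - 1.0  # the "real" signal is simple


x_train = rng.uniform(0.0, 1.0, 12)
y_train = truth(x_train) + rng.normal(0.0, 0.3, x_train.size)
x_test = rng.uniform(0.0, 1.0, 200)
y_test = truth(x_test) + rng.normal(0.0, 0.3, x_test.size)


def rms(coeffs, x, y):
    """Root-mean-square error of a polynomial model on (x, y)."""
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))


small = np.polyfit(x_train, y_train, 1)  # 2 weights
big = np.polyfit(x_train, y_train, 9)    # 10 weights

print("train error, 2 weights: ", rms(small, x_train, y_train))
print("train error, 10 weights:", rms(big, x_train, y_train))
print("test error, 2 weights:  ", rms(small, x_test, y_test))
print("test error, 10 weights: ", rms(big, x_test, y_test))
```

The 10-weight model drives its training error toward zero but does noticeably worse on unseen data, which is the point of the chapter 5 recommendation above: the right number of weights depends on how much tuning data you have.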