@harry wrote:
I am currently studying the bias–variance tradeoff of a model.
In statistics and machine learning, the **bias–variance tradeoff** (or dilemma) is the problem of simultaneously minimizing two sources of error that prevent supervised learning algorithms from generalizing beyond their training set.
As the flexibility of a function increases, its variance increases and its bias decreases.
But I am not able to understand how the flexibility of the model is affected by the number of predictors and the number of observations.
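For concreteness, here is a minimal sketch (assuming NumPy and scikit-learn) of the tradeoff as I understand it: polynomial degree stands in for flexibility, and varying the number of observations `n` shows how the test error of a flexible fit responds to sample size.

```python
# A minimal sketch (assuming NumPy and scikit-learn) of how model
# flexibility (polynomial degree) interacts with the sample size n.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def simulate(n):
    """Draw n noisy observations of a smooth nonlinear function."""
    x = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(x).ravel() + rng.normal(scale=0.3, size=n)
    return x, y

x_test, y_test = simulate(2000)   # large held-out set to estimate test error

for n in (20, 200):               # number of observations
    x_train, y_train = simulate(n)
    for degree in (1, 5, 15):     # flexibility of the fitted function
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(x_train, y_train)
        train_mse = mean_squared_error(y_train, model.predict(x_train))
        test_mse = mean_squared_error(y_test, model.predict(x_test))
        print(f"n={n:4d}  degree={degree:2d}  "
              f"train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```

With a small `n`, the high-degree fit drives the training error down (low bias) while the test error blows up (high variance); with a larger `n`, the same flexible model generalizes noticeably better.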