
Understanding the Bias and Variance Tradeoff

Updated on July 3, 2015

A useful way of understanding underfitting and overfitting is as a balance between bias and variance. The term inductive bias refers to the set of assumptions that let us extrapolate from examples we have seen to similar examples that we have not. Bias is what makes us prefer one model over another, usually framed as the types of functions we think are likely (such as polynomial functions, or assumptions about functions like smoothness). Some notion of bias is intrinsically necessary for learning, since there are many perfectly valid explanations of the data.

For example, a set of data points might be explained equally well by a simple function that uses noise to account for the errors in its predictions, or by a complex function that needs no noise to explain them. Inductive bias is what lets us select among these equally valid explanations, by choosing the one we think is most likely.
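As a minimal sketch of this point (the data and model choices here are illustrative, not from the article), we can fit the same eight noisy points two ways with NumPy: a line that attributes its residual errors to noise, and a degree-seven polynomial that passes through every point exactly and so needs no noise at all. Both "explain" the data; inductive bias is what makes us prefer one over the other.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 2 * x + rng.normal(scale=0.3, size=x.size)  # a simple trend plus noise

# Explanation 1: a line, which attributes its residual errors to noise
line = np.polyfit(x, y, deg=1)

# Explanation 2: a degree-7 polynomial, which interpolates all 8 points
# exactly and so needs no noise term at all
poly = np.polyfit(x, y, deg=7)

print(np.abs(y - np.polyval(line, x)).max())  # clearly nonzero
print(np.abs(y - np.polyval(poly, x)).max())  # numerically ~zero
```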

The variance of an estimator is another important effect. Imagine we had several draws of a dataset, perhaps made in different universes, keeping everything else the same. In each universe we go out to collect our data, and we happen to poll a different set of people or measure a different set of examples.

So the system that gives us these data is exactly the same, but the measured set is a little different: in universe one we get red points, in universe two green points, and in universe three blue points, all of them from the same data-generating system.

In each universe we go out and learn a model from these data. The question is: how different are the models?

For a simple model, we find that they are almost the same. If we learn a constant model, the blue, green, and red lines are all quite close to one another. In this case the prediction is the average of the data, and the averages are all quite similar.

A more complex model, however, will fit the training data more closely, and as a result will tend to differ more and more across the datasets. Linear fits to the red, green, and blue points, for example, are slightly more different from one another, and for cubic or higher-order functions the shapes of the functions we learn are quite different across the datasets. Since we could just as easily have ended up with any of these sets, predictions that vary too wildly suggest that our performance on new data will also be poor: the green curve is not close to the red curve, and hence it is also not close to the red points, which we could easily have gotten as test data in the future.
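To make this concrete, here is a small simulation of the "many universes" picture, assuming a sinusoidal data-generating system with Gaussian noise (an illustrative choice, not from the article). We repeatedly draw datasets, fit polynomials of increasing degree, and measure how much the prediction at one fixed point varies across draws; the spread grows with model complexity.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_dataset(n=20):
    """One 'universe': the same generating system, a fresh noise draw."""
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)
    return x, y

x_query = 0.5  # a fixed point at which we compare predictions
for deg in (0, 1, 3, 9):
    preds = [np.polyval(np.polyfit(*draw_dataset(), deg), x_query)
             for _ in range(100)]
    print(f"degree {deg}: std of predictions = {np.std(preds):.3f}")
```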

High variance is exactly the overfitting effect: we do better on the data we see than we will on future data. To balance these effects, we need to choose the right model complexity.

As we've seen, one approach is to hold out data: we split the data into a validation or test set that is not seen by the model, and then use it to estimate the model's future performance. We can then compare several models of different complexities and choose the one that performs best.
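As a sketch of this procedure with scikit-learn (the generating function and the candidate degrees are illustrative assumptions), we hold out a validation set, fit models of several complexities on the training portion only, and pick the one with the lowest validation error:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.3, size=200)

# Hold out 30% of the data; the models never see it during fitting
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

for deg in (1, 3, 9, 15):
    model = make_pipeline(PolynomialFeatures(deg), LinearRegression())
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_val, model.predict(X_val))
    print(f"degree {deg}: validation MSE = {mse:.3f}")
```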

All good prediction competitions use this formulation, often with multiple splits. For example, one test set may be used to give feedback, such as a leaderboard; but since competitors can see its value and select models that do well on it, this set can itself be optimized against, so another test set needs to be held out for the final scoring. Furthermore, even within the training set, you may want to split the data one or more times to do your own model selection and evaluation.

So what can we do about under- and overfitting? If we believe our model is underfitting, we can reduce the underfitting by increasing the complexity of the model, for example by adding extra features and hence increasing the number of parameters.
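One common way to add features, sketched here with scikit-learn's PolynomialFeatures (an illustrative choice; the article does not name a specific method), is to expand the raw inputs into polynomial and interaction terms, giving the model more parameters to fit:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# Expand two raw features into quadratic terms: 1, x1, x2, x1^2, x1*x2, x2^2
expanded = PolynomialFeatures(degree=2).fit_transform(X)
print(expanded.shape)  # (2, 6): six features where there were two
```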

To reduce overfitting, we need to decrease the complexity of the model, often by increasing our bias: reducing the number of features (feature selection), or even just forcing our model to underperform so that it fails to memorize the data. One trivial historical way of doing this is a technique called early stopping: during optimization, we simply don't fully optimize the function, but stop after a fixed number of iterations. A more principled and common way is to add a regularization penalty.
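Both options can be sketched in a few lines of scikit-learn (the models and the penalty strength are illustrative assumptions, not the article's): Ridge adds an L2 penalty that shrinks the weights of an otherwise flexible model, while capping max_iter in SGDRegressor stops optimization after a fixed number of passes over the data.

```python
import numpy as np
from sklearn.linear_model import Ridge, SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (30, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.3, size=30)

# Regularization: an L2 penalty (alpha) shrinks the weights of an
# otherwise very flexible degree-9 polynomial model
ridge = make_pipeline(PolynomialFeatures(9), Ridge(alpha=1.0))
ridge.fit(X, y)

# Early stopping: cap the number of optimization passes rather than
# running the optimizer to convergence
sgd = SGDRegressor(max_iter=5, tol=None, random_state=0)
sgd.fit(X, y)
```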

Comments


      FitnezzJim 2 years ago from Fredericksburg, Virginia

A priori information and assumptions about both the data collection system and the system being observed are also helpful in making a guess as to whether the data is explained by noise, bias, or a combination of both.