
Understanding the Bias and Variance Tradeoff

Updated on July 3, 2015

A useful way of understanding underfitting and overfitting is as a balance between bias and variance. The term inductive bias refers to the set of assumptions that let us extrapolate from examples we have seen to similar examples we have not. Bias is what makes us prefer one model over another, usually framed as the types of functions we think are likely (such as polynomial functions, or assumptions about functions such as smoothness). Some notion of bias is intrinsically necessary for learning, since there are many perfectly valid explanations for the same data.

For example, the same data points might be explained equally well by a simple function that uses noise to account for the errors in its predictions, or by a complex function that needs no noise to explain them. Inductive bias is what lets us select, among these equally valid explanations, the one we think is most likely.
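As a rough illustration of these two competing explanations (my own sketch, not from the original article; the sine-plus-noise data and the polynomial degrees are arbitrary choices), the snippet below fits both a "simple plus noise" model and a near-interpolating "complex, no noise" model to the same sample:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(12)

# "Simple + noise" explanation: a low-degree fit whose residuals are
# attributed to measurement noise.
simple = np.polyfit(x, y, deg=1)

# "Complex, no noise" explanation: a high-degree fit that passes almost
# exactly through every point, leaving nothing for noise to explain.
complex_fit = np.polyfit(x, y, deg=9)

for name, coefs in [("degree 1", simple), ("degree 9", complex_fit)]:
    resid = y - np.polyval(coefs, x)
    print(f"{name}: RMS training residual = {np.sqrt(np.mean(resid ** 2)):.4f}")
```

Both models are consistent with the observations; the data alone cannot tell us which to prefer.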

The variance of an estimator is another important effect. Imagine we had several draws of our dataset, perhaps made in different universes, keeping everything else the same. In each universe we go out to collect our data, but we happen to poll a different set of people, or measure a different set of examples.

The system that generates these data is exactly the same, but the measured sample is a little different: in universe one we get the red points, in universe two the green points, and in universe three the blue points, all from the same data-generating system.

In each universe we go out and learn a model from these data. The question is: how different are the resulting models?

For a simple model, we will find that they are almost the same. If we learn a constant model, the blue, green, and red lines are all quite close to one another. In this case the prediction is just the average of the data, and the averages are all quite similar.

A more complex model, however, will fit the training data more closely, and as a result will tend to differ more and more across the datasets. Linear fits to the red, green, and blue samples are already slightly more different, and for cubic or higher-order functions the shapes of the learned curves vary substantially across the datasets. Since we could just as easily have ended up with any of these samples, predictions that vary this wildly suggest that our performance on new data will also be poor: the green curve is not close to the red curve, and hence it is also not close to the red points, which we could easily have gotten as test data in the future.
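A minimal simulation of this thought experiment (again my own sketch, with an arbitrary sine data-generating system) draws several independent datasets from the same system, fits models of increasing complexity to each, and measures how much the fitted curves disagree:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = lambda x: np.sin(2 * np.pi * x)      # the fixed data-generating system
grid = np.linspace(0, 1, 200)                # points at which we compare fits

def draw_dataset(n=15):
    """One 'universe': same system, different random sample."""
    x = rng.uniform(0, 1, n)
    return x, truth(x) + 0.3 * rng.standard_normal(n)

for degree in [0, 1, 3, 7]:                  # constant, linear, cubic, higher order
    curves = []
    for _ in range(3):                       # three universes (red, green, blue)
        x, y = draw_dataset()
        curves.append(np.polyval(np.polyfit(x, y, deg=degree), grid))
    spread = np.std(np.stack(curves), axis=0).mean()
    print(f"degree {degree}: average disagreement across universes = {spread:.3f}")
```

The constant model barely changes between universes, while the high-degree fits swing dramatically depending on which sample we happened to draw.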

High variance is exactly the overfitting effect: we do better on the data that we see than we will on future data. To balance these effects, we need to choose the right model complexity.

As we've seen, one approach is to hold out data: we split off a validation or test set that is not seen by the model during training, and then use it to estimate the model's future performance. We can then compare several models of different complexities and choose the one that performs best.
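In code, the holdout procedure might look like the following sketch (the 30% split, the degrees tried, and the synthetic data are all illustrative choices, not from the article):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 60)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(60)

# Hold out 30% of the data; the models never see it during fitting.
x_tr, x_val, y_tr, y_val = train_test_split(x, y, test_size=0.3, random_state=0)

for degree in [0, 1, 3, 9]:
    coefs = np.polyfit(x_tr, y_tr, deg=degree)
    val_mse = np.mean((y_val - np.polyval(coefs, x_val)) ** 2)
    print(f"degree {degree}: validation MSE = {val_mse:.3f}")
# Choose the complexity with the lowest validation error.
```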

All good competitions use this formulation, often with multiple splits. For example, one test set may be used to give feedback, such as a leaderboard; but since competitors can see its value and select models that do well on it, this set can itself be optimized against. Another test set therefore needs to be held out for the final scoring. Furthermore, even within the training set, you may want to split the data one or more times to do your own model selection and evaluation, as in the sketch below.
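A hypothetical three-way split along these lines (the fractions here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
idx = rng.permutation(n)

# 60% training, 20% "leaderboard" feedback set, 20% final held-out test.
train_idx = idx[: int(0.6 * n)]
leaderboard_idx = idx[int(0.6 * n): int(0.8 * n)]
final_test_idx = idx[int(0.8 * n):]

# Within the training portion, carve out a private validation split
# for your own model selection before ever touching the leaderboard.
val_idx = train_idx[: len(train_idx) // 5]
fit_idx = train_idx[len(train_idx) // 5:]
print(len(fit_idx), len(val_idx), len(leaderboard_idx), len(final_test_idx))
```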

So what can we do about under- and overfitting? If we believe our model is underfitting, we can reduce the underfitting by increasing the complexity of the model, for example by adding extra features and hence increasing the number of parameters.
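One common way to add features is a polynomial expansion of the inputs. A sketch using scikit-learn (the degree-5 expansion and the synthetic data are my own illustrative choices):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (60, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.standard_normal(60)

# A plain linear model underfits this curved relationship...
linear = LinearRegression().fit(X, y)
# ...so expand the inputs with polynomial features to add parameters.
expanded = make_pipeline(PolynomialFeatures(degree=5), LinearRegression()).fit(X, y)

print("linear R^2:  ", linear.score(X, y))
print("expanded R^2:", expanded.score(X, y))
```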

To reduce overfitting, we need to decrease the complexity of the model, often by increasing our bias: by reducing the number of features (feature selection), or even by just forcing our model to underperform so that it fails to memorize the data. One trivial, historical way of doing this is a technique called early stopping: during optimization, we simply do not fully optimize the objective, but stop after a fixed number of iterations. A more principled and common way is to add a regularization penalty.
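As a sketch of the regularization approach (illustrative settings, not from the article), an L2 (ridge) penalty shrinks the coefficients of an over-complex polynomial model; increasing the penalty strength trades variance for bias:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (30, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.standard_normal(30)
X_test = rng.uniform(0, 1, (200, 1))
y_test = np.sin(2 * np.pi * X_test[:, 0]) + 0.3 * rng.standard_normal(200)

# A high-degree model with an L2 penalty on its coefficients:
# near-zero alpha is effectively unregularized; larger alpha
# increases bias and decreases variance.
for alpha in [1e-8, 1e-4, 1e-2, 1.0]:
    model = make_pipeline(PolynomialFeatures(degree=12), Ridge(alpha=alpha))
    model.fit(X, y)
    print(f"alpha={alpha:g}: test R^2 = {model.score(X_test, y_test):.3f}")
```

In practice the penalty strength is itself chosen on a validation split, using exactly the holdout procedure described above.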

Comments


FitnezzJim, 3 years ago from Fredericksburg, Virginia:

A priori information and assumptions about both the data collection system and the system being observed are also helpful in making a guess as to whether the data is explained by noise, bias, or a combination of both.

