Overfitting and underfitting are often presented as stages of training (not enough training → underfit, too much training → overfit) or as a function of model complexity (not enough complexity → underfit, too much complexity → overfit). Diagrams like this one seem to suggest that underfitting and overfitting can’t happen at the same time, but in theory, shouldn’t it be possible for a model to be both at once?

  • planish · 1 year ago

    I think you sort of can, if the model just can’t express the function you’re trying to learn. You can draw a squiggle, under some constraint, that has weird carve-outs for individual points while still failing to capture the underlying relationships in the data, and which won’t generalize to the test set as well as a simpler, still-very-wrong model without those carve-outs (rough sketch below).
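
    Here’s a toy Python sketch of that idea (everything in it is made up for illustration: the quadratic target, the junk features, the sample sizes). A model that is linear in x can’t express y = x², so it underfits the structure; giving it a pile of irrelevant random features lets it memorize training noise at the same time, so it also overfits.

    ```python
    # Toy demo (all numbers invented): target is quadratic, model is
    # linear in x plus 25 irrelevant random "junk" features.
    # - It cannot express x**2        -> underfits the real structure.
    # - Junk features soak up training noise -> overfits at the same time.
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, n_junk = 30, 200, 25

    x_train = rng.uniform(-3, 3, n_train)
    y_train = x_train**2 + rng.normal(0, 0.5, n_train)
    x_test = rng.uniform(-3, 3, n_test)
    y_test = x_test**2 + rng.normal(0, 0.5, n_test)

    def design(x, seed):
        """Intercept, x, and junk features that carry no signal."""
        junk = np.random.default_rng(seed).normal(size=(len(x), n_junk))
        return np.column_stack([np.ones(len(x)), x, junk])

    X_train, X_test = design(x_train, 1), design(x_test, 2)

    # "Squiggle with carve-outs": least squares over all 27 features.
    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    # Simpler, still-very-wrong model: intercept + x only.
    w_lin, *_ = np.linalg.lstsq(X_train[:, :2], y_train, rcond=None)

    mse = lambda yhat, y: float(np.mean((yhat - y) ** 2))
    print("squiggle train MSE:", mse(X_train @ w, y_train))  # tiny: noise memorized
    print("squiggle test  MSE:", mse(X_test @ w, y_test))    # big: structure missed
    print("linear   test  MSE:", mse(X_test[:, :2] @ w_lin, y_test))
    ```

    With seeds like these you should typically see the squiggle’s training error near zero while its test error lands above the plain linear fit’s, i.e. underfitting and overfitting in one model.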

    • ShadowAetherOPM · 1 year ago

      That’s a good point, and it seems likely to happen only in certain types of datasets (like ones prone to model bias).