Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

The more I read, the more confused I get about how to interpret validation and training loss graphs, so I would like some guidance on interpreting the values in the picture. I am training a basic UNet architecture, and I am now wondering whether I need a more complex network model or just more data to improve the accuracy.

Historical note: I had an issue where the validation loss exploded after a few epochs, but adding dropout layers seems to have fixed it.

My current interpretation is that the validation loss is slowly increasing, so does that mean it’s useless to train further? Or should I let it train longer, since the validation accuracy sometimes jumps up a little?

  • ShadowAetherOPM · 2 years ago

    Original answer:

    It’s hard to answer your first question from those graphs alone, because they show a single run on a single dataset split (see the cross-validation sketch at the end of this answer). To address this part specifically:

    My current interpretation is that the validation loss is slowly increasing, so does that mean it’s useless to train further? Or should I let it train longer, since the validation accuracy sometimes jumps up a little?

    The overall trend is what matters, not the small variations. Try to imagine the validation loss curve smoothed out: you don’t want to train beyond the minimum of that smoothed curve. Technically, overfitting is indicated by a significant gap between the training loss and the validation loss.
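
    To make “stop near the minimum of the smoothed curve” concrete, here is a minimal, framework-agnostic early-stopping sketch. The helper names (`smoothed`, `should_stop`) and the window/patience values are illustrative assumptions, not anything from the original run:

    ```python
    import statistics

    def smoothed(values, window=5):
        # Trailing moving average to damp per-epoch noise in a loss curve.
        return [
            statistics.mean(values[max(0, i - window + 1): i + 1])
            for i in range(len(values))
        ]

    def should_stop(val_losses, window=5, patience=10):
        # Stop once the smoothed validation loss has gone `patience`
        # epochs without setting a new minimum.
        smooth = smoothed(val_losses, window)
        best_epoch = min(range(len(smooth)), key=smooth.__getitem__)
        return len(smooth) - 1 - best_epoch >= patience

    # Toy U-shaped validation curve with its raw minimum near epoch 22.
    val_losses = [1.0 - 0.04 * e + 0.002 * (e - 12) ** 2 for e in range(40)]
    for epoch in range(1, len(val_losses) + 1):
        if should_stop(val_losses[:epoch]):
            print(f"stop after epoch {epoch}")  # past the smoothed minimum
            break
    ```

    If you also checkpoint the weights whenever the smoothed loss hits a new minimum, the occasional upward jump in validation accuracy costs you nothing: you train a bit longer but always keep the best model.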
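
    On the single-split point: averaging the validation loss over several folds tells you whether a curve like yours is representative or just a lucky/unlucky split. A rough sketch, assuming scikit-learn is available; `train_model` and `evaluate` are hypothetical stand-ins for your own UNet training and validation-loss code (a linear least-squares fit here, just so the snippet runs):

    ```python
    import numpy as np
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))                    # stand-in features
    y = X.sum(axis=1) + rng.normal(scale=0.1, size=100)

    def train_model(train_x, train_y):
        # Hypothetical stand-in for your UNet training loop.
        coef, *_ = np.linalg.lstsq(train_x, train_y, rcond=None)
        return coef

    def evaluate(coef, val_x, val_y):
        # Validation loss (MSE) on the held-out fold.
        return float(np.mean((val_x @ coef - val_y) ** 2))

    fold_losses = []
    for train_idx, val_idx in KFold(n_splits=5, shuffle=True,
                                    random_state=0).split(X):
        model = train_model(X[train_idx], y[train_idx])
        fold_losses.append(evaluate(model, X[val_idx], y[val_idx]))

    # The spread across folds shows how much a single split can mislead.
    print(f"val loss: {np.mean(fold_losses):.4f} +/- {np.std(fold_losses):.4f}")
    ```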