keras: what do we do when val_loss and loss differ markedly?

I am new to deep learning. I want to understand when it is a good time to stop training and what tweaks should be made. Thank you.

1) I understand that if val_loss increases while loss stays flat (or keeps decreasing), we are over-fitting, e.g. in Image 1:

So do we tweak the network to be less complex (e.g. reduce the number of layers or neurons), or reduce the number of epochs?

2) Both val_loss and loss have flattened out, but val_loss remains above loss.

Does this mean we are over-fitting? Do we increase the number of epochs until val_loss and loss are almost the same?

Topic keras deep-learning

Category Data Science


In the second scenario, it seems your model can still perform better on the validation set. Increasing the number of training epochs (and, if needed, making the model a bit more complex) until the validation loss and training loss are close to each other is a good idea.
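A minimal sketch of how you can check this in Keras (the data, architecture, and epoch count below are made-up placeholders, not taken from your setup): train with `validation_data` and inspect the returned `History` object to see whether the gap between `loss` and `val_loss` is closing.

```python
# Sketch with synthetic data; the real dataset and architecture are assumptions.
import numpy as np
from tensorflow import keras

# toy regression data standing in for the real problem
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 20)).astype("float32")
y = (x.sum(axis=1, keepdims=True) + rng.normal(scale=0.1, size=(1000, 1))).astype("float32")
x_train, x_val = x[:800], x[800:]
y_train, y_val = y[:800], y[800:]

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=100,   # raise this if both curves are flat but still far apart
    verbose=0,
)

# history.history["loss"] and history.history["val_loss"] hold one value per epoch,
# so you can plot them and see whether the train/validation gap is shrinking.
print(history.history["loss"][-1], history.history["val_loss"][-1])
```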

For scenario 1 you can use early stopping to ensure the model stops training once it reaches the optimal point, i.e. before the validation loss starts drifting away from the training loss. Reducing the model complexity would also be a good idea.
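For example, early stopping in Keras is available through the `EarlyStopping` callback (continuing the sketch above; the `monitor` and `patience` values here are illustrative choices, not prescriptions):

```python
# Stop training when val_loss has not improved for `patience` epochs,
# and roll back to the weights from the best epoch.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,
    restore_best_weights=True,
)

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=200,                # upper bound; early stopping usually ends training sooner
    callbacks=[early_stop],
    verbose=0,
)
```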
