Keras: what do we do when val_loss and loss differ markedly?
I am new to deep learning. I want to understand when it is a good time to stop training, and which tweaks should be made. Thank you.
1) I understand that if val_loss increases while loss stays flat, we are over-fitting, as in Image 1:
[Image 1]
So do we tweak the network to be less complex (e.g. reduce the number of layers or neurons), or reduce the number of epochs?
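For example, is early stopping the right way to "reduce the number of epochs"? Here is a minimal sketch of what I have in mind, using a toy model and made-up data (the layer sizes, patience value, and dataset are all arbitrary placeholders):

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping

# Toy data, just so the example runs end to end (stand-in for a real dataset).
x_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 2, size=(1000,))

model = models.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stop once val_loss has not improved for 5 epochs, and keep the best weights.
early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)

history = model.fit(
    x_train, y_train,
    validation_split=0.2,   # hold out 20% of the data to compute val_loss
    epochs=100,             # upper bound; the callback usually stops sooner
    callbacks=[early_stop],
)
```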
2) If both val_loss and loss have flattened out, but val_loss remains above loss, does this mean we are over-fitting? Should we increase the number of epochs until val_loss and loss are almost the same?
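Or, for the gap in question 2, would adding regularization such as Dropout be the usual fix? Another hypothetical sketch, reusing the toy architecture above (the 0.5 rate is an arbitrary starting point, not a recommendation):

```python
from tensorflow.keras import layers, models

# Same toy architecture as above, with a Dropout layer added as regularization.
model = models.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),    # randomly zero 50% of activations during training
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```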
Topic keras deep-learning
Category Data Science