Should validation data be broken down into batches or not?
I am using fit_generator to train the model. The training dataset is read from a generator function that yields data in a constant batch size. Now I want to know what approach I should adopt for the validation data: should I make a generator for the validation set as well, or load it completely into memory and pass it to fit_generator?
Note: The validation dataset fits into memory quite easily, so loading it all at once (rather than batching it) is an option for me.
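For reference, Keras's fit_generator accepts validation_data either as a generator (in which case validation_steps must be given) or as a tuple of in-memory arrays, which Keras then evaluates in batches internally. Below is a minimal sketch illustrating both call styles; the model architecture, data shapes, and the train_generator helper are hypothetical placeholders, not taken from the question.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical toy model for illustration only.
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(10,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

def train_generator(batch_size=32):
    # Yields batches indefinitely, as fit_generator expects.
    while True:
        x = np.random.rand(batch_size, 10)
        y = np.random.randint(0, 2, size=(batch_size, 1))
        yield x, y

# Option 1: validation set passed as in-memory arrays.
x_val = np.random.rand(200, 10)
y_val = np.random.randint(0, 2, size=(200, 1))
model.fit_generator(
    train_generator(),
    steps_per_epoch=100,
    epochs=2,
    validation_data=(x_val, y_val),  # Keras batches this internally
)

# Option 2: validation set passed as a generator.
model.fit_generator(
    train_generator(),
    steps_per_epoch=100,
    epochs=2,
    validation_data=train_generator(),  # stand-in for a real val generator
    validation_steps=10,                # required when using a generator
)

Since the validation set fits in memory here, the tuple form is the simpler choice; a validation generator is mainly useful when the data is too large to hold at once.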