My validation loss is much higher than my training loss. Is that overfitting?
I am new to deep learning and teaching myself, but I don't understand this situation: my validation loss is much higher than my training loss. Can someone please interpret this? Here is my model:
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input((width, height, depth, 1))
x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.GlobalAveragePooling3D()(x)
x = layers.Dense(units=512, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(units=1, activation="sigmoid")(x)
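One way to judge whether this is overfitting is to look at where the two loss curves diverge: in Keras, `model.fit(...)` returns a `History` object whose `history` dict holds per-epoch `loss` and `val_loss` lists. Below is a minimal sketch of that check; the helper function and the loss values are made up for illustration, not real training output:

```python
# Sketch: given the 'loss' and 'val_loss' lists from a Keras History
# object (history.history), find the epoch where validation loss starts
# rising while training loss keeps falling -- a typical overfitting sign.

def divergence_epoch(train_loss, val_loss, tol=0.05):
    """Return the first 0-based epoch at which val_loss rises by more than
    `tol` while train_loss still falls, or None if that never happens.
    (Hypothetical helper, not part of Keras.)"""
    for i in range(1, len(val_loss)):
        if val_loss[i] - val_loss[i - 1] > tol and train_loss[i] < train_loss[i - 1]:
            return i
    return None

# Made-up curves: training loss keeps dropping, validation loss turns up.
train = [0.70, 0.55, 0.42, 0.33, 0.26, 0.21]
val   = [0.72, 0.60, 0.55, 0.54, 0.63, 0.75]
print(divergence_epoch(train, val))  # -> 4
```

If the validation loss is high from epoch 1 and never tracks the training loss at all, the cause is more likely a data issue (e.g. a train/validation distribution mismatch) than overfitting.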
Topic: cnn, deep-learning
Category: Data Science