I am trying to establish a relationship between hcanopy and 18 predictors (vv_name and vh_name) using a CNN, but my model isn't learning. How can I resolve this?
Before building and running the model, I rescaled and normalized the data.
Here is my model:
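For context, the preprocessing step looks roughly like this (a minimal sketch assuming scikit-learn's StandardScaler and synthetic stand-in data; the array sizes and variable names are illustrative, not the actual dataset):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the real data: 100 samples, 18 predictors.
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 18))

# Standardize each predictor column to zero mean and unit variance.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Conv1D with input_shape=(18, 1) expects a trailing channel axis,
# so the predictors are reshaped to (samples, 18, 1) before training.
x_train = X_scaled.reshape(-1, 18, 1)
print(x_train.shape)  # (100, 18, 1)
```

Note that for regression it can also matter whether the target is scaled, since the raw predictors and target may live on very different ranges.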
# Build the model
import tensorflow as tf
from tensorflow import keras
from keras_tuner import RandomSearch, BayesianOptimization

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Conv1D(
            filters=hp.Int('conv_1_filter', min_value=16, max_value=256, step=16),
            kernel_size=hp.Choice('conv_1_kernel', values=[2, 3]),
            activation=hp.Choice('conv_1_activ', values=['relu', 'sigmoid']),
            # strides=hp.Int('conv_1_strides', min_value=1, max_value=5, step=1),
            padding='same',
            input_shape=(18, 1)),
        keras.layers.MaxPooling1D(
            pool_size=hp.Choice('maxpool_1_kernel', values=[2, 3]),
            padding='same',
            # strides=hp.Int('maxpool_1_strides', min_value=1, max_value=5, step=1),
        ),
        keras.layers.Flatten(),
        keras.layers.Dense(
            units=hp.Int('dense_1_units', min_value=16, max_value=256, step=16),
            activation='relu'
        ),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam',
                  loss='mean_squared_error',
                  metrics=['mae', 'mse', 'mape'])
    return model
stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=100)
tuner = BayesianOptimization(build_model,
                             objective='loss',
                             overwrite=True,
                             max_trials=3
                             )
tuner.search(x_train, y_train, epochs=75, validation_split=0.1, callbacks=[stop_early])
The output is:
Trial 3 Complete [00h 00m 12s]
loss: 48.644569396972656
Best loss So Far: 47.83784103393555
Total elapsed time: 00h 00m 57s
INFO:tensorflow:Oracle triggered exit
Model summary:
model = tuner.get_best_models(num_models=1)[0]
model.summary()
Output:
Model: sequential
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 18, 128) 512
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 9, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 1152) 0
_________________________________________________________________
dense (Dense) (None, 144) 166032
_________________________________________________________________
dense_1 (Dense) (None, 1) 145
=================================================================
Total params: 166,689
Trainable params: 166,689
Non-trainable params: 0
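The parameter counts in the summary follow directly from the layer shapes; a quick arithmetic check (pure Python, no Keras needed):

```python
# Conv1D: filters * (kernel_size * input_channels) weights + one bias per filter.
conv_filters, kernel, in_ch = 128, 3, 1
conv_params = conv_filters * kernel * in_ch + conv_filters   # 128*3 + 128

# MaxPooling1D with pool_size=2 halves length 18 to 9, so Flatten gives 9*128.
flatten_units = 9 * conv_filters                             # 1152

# Dense layers: inputs * units weights + one bias per unit.
dense1_units = 144
dense1_params = flatten_units * dense1_units + dense1_units  # 1152*144 + 144
dense2_params = dense1_units * 1 + 1                         # 144 + 1

total = conv_params + dense1_params + dense2_params
print(conv_params, dense1_params, dense2_params, total)  # 512 166032 145 166689
```

This matches the 512 / 166,032 / 145 / 166,689 figures reported above.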
Tuner results:
tuner.results_summary(num_trials = 1)
Output:
Results summary
Results in .\untitled_project
Showing 1 best trials
&lt;keras_tuner.engine.objective.Objective object at 0x00000291C1867988&gt;
Trial summary
Hyperparameters:
conv_1_filter: 128
conv_1_kernel: 3
conv_1_activ: relu
maxpool_1_kernel: 2
dense_1_units: 144
Score: 47.83784103393555
Training the model:
hist = model.fit(x_train, y_train,
batch_size=32, epochs=50,
validation_data=(x_val, y_val))
Output:
Epoch 1/50
66/66 [==============================] - 0s 4ms/step - loss: 47.5273 - mae: 5.7137 - mse: 47.5273 - mape: 35.8597 - val_loss: 46.6480 - val_mae: 5.6488 - val_mse: 46.6480 - val_mape: 35.7228
Epoch 2/50
66/66 [==============================] - 0s 2ms/step - loss: 47.9865 - mae: 5.7127 - mse: 47.9865 - mape: 35.6715 - val_loss: 46.4261 - val_mae: 5.6358 - val_mse: 46.4261 - val_mape: 36.3317
Epoch 3/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6181 - mae: 5.7116 - mse: 47.6181 - mape: 35.6373 - val_loss: 46.6012 - val_mae: 5.6349 - val_mse: 46.6012 - val_mape: 37.0938
Epoch 4/50
66/66 [==============================] - 0s 2ms/step - loss: 47.7654 - mae: 5.7327 - mse: 47.7654 - mape: 35.8039 - val_loss: 46.5765 - val_mae: 5.6362 - val_mse: 46.5765 - val_mape: 37.0167
Epoch 5/50
66/66 [==============================] - 0s 2ms/step - loss: 47.5977 - mae: 5.7065 - mse: 47.5977 - mape: 35.6634 - val_loss: 46.7648 - val_mae: 5.6395 - val_mse: 46.7648 - val_mape: 37.4207
Epoch 6/50
66/66 [==============================] - 0s 2ms/step - loss: 47.9229 - mae: 5.7281 - mse: 47.9229 - mape: 35.6491 - val_loss: 47.9091 - val_mae: 5.6982 - val_mse: 47.9091 - val_mape: 38.8689
Epoch 7/50
66/66 [==============================] - 0s 2ms/step - loss: 48.6753 - mae: 5.7809 - mse: 48.6753 - mape: 36.0142 - val_loss: 47.0650 - val_mae: 5.6719 - val_mse: 47.0650 - val_mape: 35.2718
Epoch 8/50
66/66 [==============================] - 0s 2ms/step - loss: 47.7816 - mae: 5.7201 - mse: 47.7816 - mape: 35.6205 - val_loss: 46.5548 - val_mae: 5.6402 - val_mse: 46.5548 - val_mape: 36.2725
Epoch 9/50
66/66 [==============================] - 0s 2ms/step - loss: 47.5515 - mae: 5.7125 - mse: 47.5515 - mape: 35.8965 - val_loss: 46.8259 - val_mae: 5.6435 - val_mse: 46.8259 - val_mape: 37.4353
Epoch 10/50
66/66 [==============================] - 0s 2ms/step - loss: 48.2340 - mae: 5.7469 - mse: 48.2340 - mape: 35.7792 - val_loss: 46.5494 - val_mae: 5.6370 - val_mse: 46.5494 - val_mape: 36.6935
Epoch 11/50
66/66 [==============================] - 0s 2ms/step - loss: 48.0736 - mae: 5.7422 - mse: 48.0736 - mape: 35.7871 - val_loss: 48.8150 - val_mae: 5.7490 - val_mse: 48.8150 - val_mape: 39.7336
Epoch 12/50
66/66 [==============================] - 0s 2ms/step - loss: 47.8089 - mae: 5.7249 - mse: 47.8089 - mape: 35.8476 - val_loss: 46.6710 - val_mae: 5.6374 - val_mse: 46.6710 - val_mape: 37.1594
Epoch 13/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6514 - mae: 5.7188 - mse: 47.6514 - mape: 35.6219 - val_loss: 46.6198 - val_mae: 5.6365 - val_mse: 46.6198 - val_mape: 36.9988
Epoch 14/50
66/66 [==============================] - 0s 2ms/step - loss: 47.9598 - mae: 5.7380 - mse: 47.9598 - mape: 35.8961 - val_loss: 46.5852 - val_mae: 5.6360 - val_mse: 46.5852 - val_mape: 36.8492
Epoch 15/50
66/66 [==============================] - 0s 2ms/step - loss: 48.0100 - mae: 5.7264 - mse: 48.0100 - mape: 35.6530 - val_loss: 46.5335 - val_mae: 5.6374 - val_mse: 46.5335 - val_mape: 36.6377
Epoch 16/50
66/66 [==============================] - 0s 2ms/step - loss: 48.1748 - mae: 5.7597 - mse: 48.1748 - mape: 35.8947 - val_loss: 46.5379 - val_mae: 5.6377 - val_mse: 46.5379 - val_mape: 36.5539
Epoch 17/50
66/66 [==============================] - 0s 2ms/step - loss: 47.8319 - mae: 5.7259 - mse: 47.8319 - mape: 35.9112 - val_loss: 47.6115 - val_mae: 5.6968 - val_mse: 47.6115 - val_mape: 34.8782
Epoch 18/50
66/66 [==============================] - 0s 2ms/step - loss: 47.7261 - mae: 5.7027 - mse: 47.7261 - mape: 35.5727 - val_loss: 46.5608 - val_mae: 5.6381 - val_mse: 46.5608 - val_mape: 36.6718
Epoch 19/50
66/66 [==============================] - 0s 2ms/step - loss: 48.1502 - mae: 5.7359 - mse: 48.1502 - mape: 35.7811 - val_loss: 46.5789 - val_mae: 5.6406 - val_mse: 46.5789 - val_mape: 36.4617
Epoch 20/50
66/66 [==============================] - 0s 2ms/step - loss: 47.5542 - mae: 5.7166 - mse: 47.5542 - mape: 35.8178 - val_loss: 46.5834 - val_mae: 5.6373 - val_mse: 46.5834 - val_mape: 36.8342
Epoch 21/50
66/66 [==============================] - 0s 2ms/step - loss: 48.2993 - mae: 5.7591 - mse: 48.2993 - mape: 36.0031 - val_loss: 46.9345 - val_mae: 5.6497 - val_mse: 46.9345 - val_mape: 37.5970
Epoch 22/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6633 - mae: 5.7196 - mse: 47.6633 - mape: 35.6636 - val_loss: 46.5652 - val_mae: 5.6381 - val_mse: 46.5652 - val_mape: 36.4345
Epoch 23/50
66/66 [==============================] - 0s 2ms/step - loss: 48.1486 - mae: 5.7336 - mse: 48.1486 - mape: 35.7699 - val_loss: 46.5387 - val_mae: 5.6365 - val_mse: 46.5387 - val_mape: 36.6563
Epoch 24/50
66/66 [==============================] - 0s 2ms/step - loss: 47.7966 - mae: 5.7314 - mse: 47.7966 - mape: 35.7822 - val_loss: 46.6103 - val_mae: 5.6459 - val_mse: 46.6103 - val_mape: 36.1079
Epoch 25/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6674 - mae: 5.7143 - mse: 47.6674 - mape: 35.6924 - val_loss: 46.8109 - val_mae: 5.6423 - val_mse: 46.8109 - val_mape: 37.3310
Epoch 26/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6319 - mae: 5.7194 - mse: 47.6319 - mape: 35.7683 - val_loss: 47.2469 - val_mae: 5.6788 - val_mse: 47.2469 - val_mape: 35.1502
Epoch 27/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6906 - mae: 5.7180 - mse: 47.6906 - mape: 35.5854 - val_loss: 47.0632 - val_mae: 5.6693 - val_mse: 47.0632 - val_mape: 35.3303
Epoch 28/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6055 - mae: 5.7063 - mse: 47.6055 - mape: 35.8161 - val_loss: 46.6349 - val_mae: 5.6390 - val_mse: 46.6349 - val_mape: 36.7966
Epoch 29/50
66/66 [==============================] - 0s 2ms/step - loss: 47.5738 - mae: 5.7218 - mse: 47.5738 - mape: 35.6970 - val_loss: 47.2732 - val_mae: 5.6638 - val_mse: 47.2732 - val_mape: 38.0565
Epoch 30/50
66/66 [==============================] - 0s 2ms/step - loss: 47.9710 - mae: 5.7426 - mse: 47.9710 - mape: 35.8676 - val_loss: 46.6260 - val_mae: 5.6438 - val_mse: 46.6260 - val_mape: 36.1087
Epoch 31/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6284 - mae: 5.6966 - mse: 47.6284 - mape: 35.4182 - val_loss: 53.3327 - val_mae: 5.9974 - val_mse: 53.3327 - val_mape: 42.8343
Epoch 32/50
66/66 [==============================] - 0s 2ms/step - loss: 48.3973 - mae: 5.7358 - mse: 48.3973 - mape: 35.7515 - val_loss: 47.7311 - val_mae: 5.6877 - val_mse: 47.7311 - val_mape: 38.6067
Epoch 33/50
66/66 [==============================] - 0s 2ms/step - loss: 47.7636 - mae: 5.7173 - mse: 47.7636 - mape: 35.8557 - val_loss: 46.6124 - val_mae: 5.6411 - val_mse: 46.6124 - val_mape: 36.4876
Epoch 34/50
66/66 [==============================] - 0s 2ms/step - loss: 47.5866 - mae: 5.7174 - mse: 47.5866 - mape: 35.7719 - val_loss: 46.7480 - val_mae: 5.6529 - val_mse: 46.7480 - val_mape: 35.8001
Epoch 35/50
66/66 [==============================] - 0s 2ms/step - loss: 47.8320 - mae: 5.7262 - mse: 47.8320 - mape: 35.7437 - val_loss: 47.8147 - val_mae: 5.6924 - val_mse: 47.8147 - val_mape: 38.7123
Epoch 36/50
66/66 [==============================] - 0s 2ms/step - loss: 47.3806 - mae: 5.7014 - mse: 47.3806 - mape: 35.7290 - val_loss: 50.0591 - val_mae: 5.8108 - val_mse: 50.0591 - val_mape: 33.9856
Epoch 37/50
66/66 [==============================] - 0s 2ms/step - loss: 47.8113 - mae: 5.7266 - mse: 47.8113 - mape: 35.5384 - val_loss: 46.8719 - val_mae: 5.6453 - val_mse: 46.8719 - val_mape: 37.4223
Epoch 38/50
66/66 [==============================] - 0s 2ms/step - loss: 47.5519 - mae: 5.7107 - mse: 47.5519 - mape: 35.7515 - val_loss: 47.0551 - val_mae: 5.6550 - val_mse: 47.0551 - val_mape: 37.6985
Epoch 39/50
66/66 [==============================] - 0s 2ms/step - loss: 47.4893 - mae: 5.7142 - mse: 47.4893 - mape: 35.7414 - val_loss: 46.6206 - val_mae: 5.6432 - val_mse: 46.6206 - val_mape: 36.3426
Epoch 40/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6880 - mae: 5.7080 - mse: 47.6880 - mape: 35.5187 - val_loss: 46.6771 - val_mae: 5.6471 - val_mse: 46.6771 - val_mape: 36.2686
Epoch 41/50
66/66 [==============================] - 0s 2ms/step - loss: 48.0667 - mae: 5.7550 - mse: 48.0667 - mape: 35.9099 - val_loss: 48.3135 - val_mae: 5.7203 - val_mse: 48.3135 - val_mape: 39.1846
Epoch 42/50
66/66 [==============================] - 0s 2ms/step - loss: 47.7572 - mae: 5.7238 - mse: 47.7572 - mape: 35.5588 - val_loss: 47.4548 - val_mae: 5.6737 - val_mse: 47.4548 - val_mape: 38.2647
Epoch 43/50
66/66 [==============================] - 0s 2ms/step - loss: 47.5496 - mae: 5.7165 - mse: 47.5496 - mape: 35.7636 - val_loss: 46.7017 - val_mae: 5.6406 - val_mse: 46.7017 - val_mape: 36.9240
Epoch 44/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6393 - mae: 5.7125 - mse: 47.6393 - mape: 35.8036 - val_loss: 46.8099 - val_mae: 5.6573 - val_mse: 46.8099 - val_mape: 35.7648
Epoch 45/50
66/66 [==============================] - 0s 2ms/step - loss: 47.8176 - mae: 5.7279 - mse: 47.8176 - mape: 35.7993 - val_loss: 46.6256 - val_mae: 5.6459 - val_mse: 46.6256 - val_mape: 36.2989
Epoch 46/50
66/66 [==============================] - 0s 2ms/step - loss: 47.8337 - mae: 5.7294 - mse: 47.8337 - mape: 35.6939 - val_loss: 46.8156 - val_mae: 5.6454 - val_mse: 46.8156 - val_mape: 37.2952
Epoch 47/50
66/66 [==============================] - 0s 2ms/step - loss: 47.8467 - mae: 5.7260 - mse: 47.8467 - mape: 35.6264 - val_loss: 46.7016 - val_mae: 5.6521 - val_mse: 46.7016 - val_mape: 35.9996
Epoch 48/50
66/66 [==============================] - 0s 2ms/step - loss: 48.3067 - mae: 5.7516 - mse: 48.3067 - mape: 35.7113 - val_loss: 47.2476 - val_mae: 5.6647 - val_mse: 47.2476 - val_mape: 38.0185
Epoch 49/50
66/66 [==============================] - 0s 2ms/step - loss: 47.6687 - mae: 5.7250 - mse: 47.6687 - mape: 35.8703 - val_loss: 47.7418 - val_mae: 5.6884 - val_mse: 47.7418 - val_mape: 38.5845
Epoch 50/50
66/66 [==============================] - 0s 2ms/step - loss: 47.5635 - mae: 5.7140 - mse: 47.5635 - mape: 35.7001 - val_loss: 46.9595 - val_mae: 5.6642 - val_mse: 46.9595 - val_mape: 35.5249
As seen above, the model is not learning with each epoch. I would be grateful for any pointers on resolving this issue. Thanks in advance.
P.S.: I checked the predictors for multicollinearity, found it to be high among a few of them, and merged those into a single predictor using PCA, but the issue persists. I also experimented with changing the number of epochs and with adding/removing layers from the model architecture.
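The PCA merge mentioned in the P.S. can be sketched as follows (a minimal illustration with synthetic data; the variable names and sizes are hypothetical, not the actual predictors):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical example: three highly correlated predictors merged into one.
rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))
correlated = base + rng.normal(scale=0.05, size=(200, 3))  # three near-copies

# Keep a single component that captures almost all of the shared variance.
pca = PCA(n_components=1)
merged = pca.fit_transform(correlated)

print(merged.shape)  # (200, 1) - one merged predictor column
```

The merged column can then replace the correlated originals before the remaining predictors are reshaped for the Conv1D input.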
Topic: cnn, regression, deep-learning
Category: Data Science