Constant trend for test error

I'm using the Keras functional API to model a multi-input/multi-output relation of the form (y1, y2) = f(x1, x2, x3).
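(For context: with the functional API the three predictors can either be stacked into a single feature array, as in my code below, or fed in as explicit separate inputs. A minimal sketch of the latter form, with illustrative input names, would look something like this:

from tensorflow import keras
from tensorflow.keras import layers

# one Input per scalar predictor (names are illustrative)
in1 = keras.Input(shape=(1,), name='x1')
in2 = keras.Input(shape=(1,), name='x2')
in3 = keras.Input(shape=(1,), name='x3')

# merge the inputs, then map to the two targets y1 and y2
h = layers.concatenate([in1, in2, in3])
h = layers.Dense(64, activation='relu')(h)
out = layers.Dense(2, name='y1_y2')(h)

model = keras.Model(inputs=[in1, in2, in3], outputs=out)

In that form model.fit would take a list of three arrays, [x1, x2, x3]. In my case I simply stack the features into one array.)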

I have a really simple architecture composed of two hidden layers with 64 nodes each and one hidden layer with 16 nodes. The code is implemented as follows:

import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

# split into train and test datasets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)

print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)

n_features = x_train.shape[1]
# -----------------------------------------------------------------------
  
inputs  = keras.Input(shape=(n_features,))

dense   = layers.Dense(64, activation='relu')
x       = dense(inputs)

x       = layers.Dense(64, activation='relu')(x)
x       = layers.Dense(16, activation='relu')(x)

outputs = layers.Dense(2)(x)

model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()

# compile the model
model.compile(optimizer='adam', loss='mse')

# fit the model (note: the test set is also being used as the validation set here)
history = model.fit(x_train, y_train, batch_size=64, epochs=200, validation_data=(x_test, y_test))

plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.savefig('error.png')

I plotted the training error and the test error to check for possible overfitting or underfitting, and I found that the test error is almost constant after a certain number of epochs. I expected the test error to start rising at some point, but this is not happening, and I'm having trouble deciding whether this indicates a problem with my model.
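One way to make this check more rigorous would be to hold out a separate validation set for early stopping and keep the test set untouched until a single final evaluation. A minimal sketch reusing the model defined above (the split sizes and patience value are illustrative, not from my original code):

import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow import keras

# three-way split: the test set stays untouched until the end
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.2)

# stop training once the validation loss stops improving
early_stop = keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=20,
    restore_best_weights=True,
)

history = model.fit(
    x_train, y_train,
    batch_size=64,
    epochs=200,
    validation_data=(x_val, y_val),
    callbacks=[early_stop],
)

# evaluate exactly once on the held-out test set
test_mse = model.evaluate(x_test, y_test)

With restore_best_weights=True the weights are rolled back to the epoch with the lowest validation loss, so a validation curve that merely flattens out ends training early rather than signalling a problem.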

Topic keras python machine-learning

Category Data Science
