Overly optimistic results with EfficientNetV2 on CIFAR-10

I'm trying to train EfficientNetV2 on CIFAR-10. It should reach high accuracy, but not a perfect 100%.

Here is my code:

import tensorflow as tf
import tensorflow_datasets as tfds
import os, sys
os.chdir('./models/efficientnetv2')
sys.path.append('.')
import effnetv2_model

# Construct a tf.data.Dataset
ds = tfds.load('cifar10', split='train')

model = tf.keras.models.Sequential([
    #tf.keras.layers.InputLayer(input_shape=[32, 32, 3],name=image),
    effnetv2_model.get_model('efficientnetv2-s', include_top=False),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])
metrics = [tf.keras.metrics.Precision(), tf.keras.metrics.Recall(),
           tf.keras.metrics.AUC(), tf.keras.metrics.TopKCategoricalAccuracy(k=1)]

dst = tfds.load('cifar10', split='test')
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=metrics)
sample = None
# collect the 50000 training samples
images=[]
labels=[]
for i in range(50000):
    sample=next(iter(ds))
    images.append(sample['image'])
    labels.append(sample['label'])

#convert images to tensor
images=tf.convert_to_tensor(images,dtype=tf.float32)
labels=tf.convert_to_tensor(labels,dtype=tf.int32)

imagest=[]
labelst=[]
for i in range(10000):
    sample=next(iter(dst))
    imagest.append(sample['image'])
    labelst.append(sample['label'])
imagest=tf.convert_to_tensor(imagest,dtype=tf.float32)
labelst=tf.convert_to_tensor(labelst,dtype=tf.int32)


labels=tf.one_hot(labels,10)
labelst=tf.one_hot(labelst,10)

model.fit(images, labels, epochs=10, validation_steps=100, validation_data=(imagest, labelst))

Does anyone see the mistake? The results I get are too optimistic.
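While debugging, one pattern I became suspicious of is calling next(iter(...)) inside a loop: each call builds a fresh iterator, so it may always yield the first element again. A minimal plain-Python check with a toy list (no TensorFlow involved):

```python
data = [10, 20, 30]

# A fresh iterator is created on every call, so next() always
# returns the first element again.
repeated = [next(iter(data)) for _ in range(3)]
print(repeated)   # [10, 10, 10]

# Creating the iterator once and advancing it walks the sequence.
it = iter(data)
advanced = [next(it) for _ in range(3)]
print(advanced)   # [10, 20, 30]
```

If the same thing happens with the tfds dataset, the 50000 collected training samples would all be copies of a single image/label pair.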

The problem is that I get very high results from the very first epoch, which does not make any sense:

1563/1563 [==============================] - 162s 93ms/step - loss: 0.0103 - precision: 0.9996 - recall: 0.9955 - auc: 1.0000 - top_k_categorical_accuracy: 0.9978 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 2/10
1563/1563 [==============================] - 104s 66ms/step - loss: 2.5801e-05 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 3/10
1563/1563 [==============================] - 107s 68ms/step - loss: 8.2729e-06 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 4/10
1563/1563 [==============================] - 102s 66ms/step - loss: 3.1936e-06 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 5/10
1563/1563 [==============================] - 100s 64ms/step - loss: 1.2820e-06 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 6/10
1563/1563 [==============================] - 145s 92ms/step - loss: 5.5819e-07 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 7/10
1563/1563 [==============================] - 124s 79ms/step - loss: 2.4103e-07 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 8/10
1563/1563 [==============================] - 142s 91ms/step - loss: 9.3105e-08 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 9/10
1563/1563 [==============================] - 111s 71ms/step - loss: 3.8140e-08 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
Epoch 10/10
1563/1563 [==============================] - 103s 66ms/step - loss: 1.5209e-08 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.0000e+00 - val_precision: 1.0000 - val_recall: 1.0000 - val_auc: 1.0000 - val_top_k_categorical_accuracy: 1.0000
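For intuition on why degenerate data would drive every metric to 1.0: if the model only ever sees one (image, label) pair, memorizing that single label is enough for "perfect" scores. A toy sketch with hypothetical data (plain Python, no TensorFlow):

```python
# 100 copies of one fake sample, as would happen if the collection
# loop kept re-reading the first dataset element.
images = [[0.5, 0.5, 0.5]] * 100
labels = [7] * 100

# A "model" that simply memorizes the one label it has ever seen.
def predict(image):
    return 7

accuracy = sum(predict(im) == lb for im, lb in zip(images, labels)) / len(labels)
print(accuracy)   # 1.0
```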

Topic: keras, tensorflow, neural-network, machine-learning

Category: Data Science
