How to get a tf tensor value computed in the loss function in Keras train_on_batch, without computing it twice or writing a custom loop?
I have a model and I've implemented a custom loss function, something along these lines:
def custom_loss(labels, predictions):
    global diff  # the actual code uses a decorator, so no globals
    diff = labels - predictions
    return tf.square(diff)
model.compile(loss=custom_loss, optimizer=opt.RMSprop())
...
model.train_on_batch(input, labels)
How can I get diff after running train_on_batch, without triggering a second forward pass behind the scenes (an unnecessary slowdown) and without interfering with trainable/batch-norm state (a possible source of problems)?
I want to avoid writing a manual raw TensorFlow train_op loop, keeping track of the learning phase and so on. Is that my only choice?
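For context, one workaround I've considered (a minimal sketch, not necessarily what I'll end up using): expose a statistic of diff as a Keras metric, since metrics are evaluated in the same forward pass as the loss, and train_on_batch returns them alongside the loss value. The subtraction is recomputed, but the model is not run a second time. The mean_diff name and the toy model below are purely illustrative; this only yields an aggregate per batch, not the full diff tensor.

```python
import numpy as np
import tensorflow as tf

def custom_loss(labels, predictions):
    diff = labels - predictions
    return tf.square(diff)

def mean_diff(labels, predictions):
    # Recomputes the cheap subtraction, but reuses the predictions
    # from the same training forward pass as the loss.
    return tf.reduce_mean(labels - predictions)

# Toy model just to demonstrate the mechanism.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(loss=custom_loss,
              optimizer=tf.keras.optimizers.RMSprop(),
              metrics=[mean_diff])

x = np.random.rand(4, 3).astype("float32")
y = np.random.rand(4, 1).astype("float32")

# train_on_batch returns [loss, mean_diff] for this compile config.
loss_value, diff_value = model.train_on_batch(x, y)
```

The drawback is obvious: a metric reduces diff to a scalar, so this doesn't give me the element-wise tensor I'm after.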
I'm using TensorFlow 1.14's keras module.
Topic keras tensorflow
Category Data Science