Forward and backward process in PyTorch
When I write a network, do I have to put the entire forward pass inside nn.Module.forward()? In other words, if I perform some operations outside the network, will gradients still flow through them correctly?
For example, suppose I have two networks, where the output of one is the input of the other (net1 - midresults - net2). If I apply some operations to midresults outside of either network (net1 - midresults - operations - net2), can the combined model (net1 + net2) still be trained end to end?
Topic pytorch deep-learning
Category Data Science