How does neural style transfer work in PyTorch?
I am using this PyTorch script to learn and understand neural style transfer. I understand most of the code, but I'm having a hard time with a couple of parts.
In line 15, it's not clear to me how model_activations works. I made a sample style tensor with style.shape = torch.Size([3, 300, 374]) and first tried this sample code without the layers dict:
x = style
x = x.unsqueeze(0)
for name, layer in model._modules.items():
    x = layer(x)
    print(x.shape)
Output:
torch.Size([1, 64, 300, 374])
torch.Size([1, 64, 300, 374])
torch.Size([1, 64, 300, 374])
torch.Size([1, 64, 300, 374])
torch.Size([1, 64, 150, 187])
torch.Size([1, 128, 150, 187])
torch.Size([1, 128, 150, 187])
torch.Size([1, 128, 150, 187])
torch.Size([1, 128, 150, 187])
torch.Size([1, 128, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 9, 11])
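To see where the downsampling happens, I also printed each layer's index and type (I'm assuming model is torchvision.models.vgg19(pretrained=True).features, since that's what these tutorials usually use):

for name, layer in model._modules.items():
    print(name, layer.__class__.__name__)

Judging from the shapes above, the spatial size only shrinks at indices 4, 9, 18, 27 and 36, which I believe are the MaxPool2d layers; everything else is Conv2d or ReLU.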
Then I ran the same loop again, this time applying only the layers listed in this layers dict:
layers = {
    '0' : 'conv1_1',
    '5' : 'conv2_1',
    '10': 'conv3_1',
    '19': 'conv4_1',
    '21': 'conv4_2',
    '28': 'conv5_1'
}
x = style
x = x.unsqueeze(0)
for name, layer in model._modules.items():
    if name in layers:
        x = layer(x)
        print(x.shape)
Output:
torch.Size([1, 64, 300, 374])
torch.Size([1, 128, 300, 374])
torch.Size([1, 256, 300, 374])
torch.Size([1, 512, 300, 374])
torch.Size([1, 512, 300, 374])
torch.Size([1, 512, 300, 374])
- My question is: how did the second method maintain the height and width (300, 374) of the style tensor?
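My guess (please correct me if I'm wrong) is that VGG's Conv2d layers use kernel_size=3 with padding=1, so they never change the spatial size, and the second loop skips the MaxPool2d layers that would halve it. A quick sanity check of that idea on a dummy tensor:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 300, 374)
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)  # same conv settings VGG19 uses
pool = nn.MaxPool2d(kernel_size=2, stride=2)

print(conv(x).shape)  # torch.Size([1, 64, 300, 374]) - spatial size preserved
print(pool(x).shape)  # torch.Size([1, 3, 150, 187])  - spatial size halved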
The second confusion is in line 91, where the optimizer is set up as optimizer = torch.optim.Adam([target], lr=0.007). In most PyTorch tutorials I have seen something like optimizer = torch.optim.Adam(model.parameters(), lr=0.01) instead.
- Why is the optimizer initialization in neural style transfer different from the one in other neural network tutorials?
- What is the reason behind optimizer = torch.optim.Adam([target], lr=0.007)?
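For context, here is a minimal sketch of how I currently understand that part of the script (my own rephrasing, so the actual code may differ): the image target is the thing being optimized, not the network.

import torch
import torchvision.models as models

model = models.vgg19(pretrained=True).features  # frozen feature extractor
for param in model.parameters():
    param.requires_grad_(False)                 # VGG's weights are never updated

content = torch.randn(3, 300, 374)              # stand-in for the real content image
target = content.clone().requires_grad_(True)   # the image we actually optimize

# the "parameters" handed to Adam are just the image pixels themselves
optimizer = torch.optim.Adam([target], lr=0.007)

Is that the right way to think about it?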