ConvNets and sliding windows

(Rephrased) I am starting to play with the Deep Learning Toolbox and deepNetworkDesigner. An example of what I'd like to be able to do is to take a CNN classifier that has already been trained for 30x30 input images, but now use it to do classification on every 30x30 sub-block of a 400x400 image A.
The naive way to do this would be to loop over the sub-blocks in A and feed them to the CNN one at a time, but that is inefficient. The more efficient technique that I have seen recommended is to convert the CNN to a fully-convolutional network, which means adjusting the final layers as follows:
(1) Converting the fully-connected layer to a convolutional layer where the weights are now that of a 30x30 filter, stride=1, Padding=[0,0].
(2) Applying the softmax operation pixel-wise to the output of (1).
After doing the above, every layer in the network is now a shift-invariant operation, and should be able to process input images of any size. If I input a 400x400 image A, the output of the network should be an N-channel image of size 371x371 where each pixel contains the N class probabilities of a particular 30x30 sub-block.
I am wondering if it is possible to make the kind of adjustments described above to an already trained CNN?
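The conversion described above can be sketched in MATLAB roughly as follows. This is a sketch only, not a confirmed recipe: it assumes a trained network `net` with a single fullyConnectedLayer, assumes the FC layer sees a 30x30xC activation, and assumes the column-major reshape of the weight matrix matches the toolbox's flattening order (worth verifying on a test image). The name `fc_as_conv` is made up for illustration.

```matlab
% Sketch: turn the trained classifier's fully-connected layer into an
% equivalent 30x30 convolution (assumed sizes; verify the reshape order).
lgraph = layerGraph(net.Layers);

% Locate the fully-connected layer.
isFC = arrayfun(@(l) isa(l,'nnet.cnn.layer.FullyConnectedLayer'), net.Layers);
fc   = net.Layers(isFC);

[h, w, c]  = deal(30, 30, 3);   % assumed spatial size and channels at the FC input
numClasses = fc.OutputSize;

% fc.Weights is numClasses-by-(h*w*c); reshape its transpose into an
% h-by-w-by-c-by-numClasses filter bank.
W = reshape(fc.Weights', h, w, c, numClasses);

convFC = convolution2dLayer([h w], numClasses, ...
    'Name','fc_as_conv', 'Stride',1, 'Padding',0, ...
    'Weights',W, 'Bias',reshape(fc.Bias, 1, 1, numClasses));

% Swap the FC layer for its convolutional equivalent; the softmax that
% follows then acts per output pixel. The fixed-size imageInputLayer and
% classification output would still need to be removed or relaxed
% (e.g. by rebuilding the layers as a dlnetwork) before a 400x400 input
% is accepted.
lgraph = replaceLayer(lgraph, fc.Name, convFC);
```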

Answers (1)

Hrishikesh Borate on 23 Jul 2021


Hi,
It's my understanding that you are trying to define a convolutional neural network without declaring the input image size in advance. One possible approach is to create an uninitialized dlnetwork object without an input layer, as shown here.
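A minimal sketch of that approach, assuming a release recent enough that dlnetwork supports deferred initialization and the `initialize` function (R2021a or later); the layer sizes and names below are illustrative, not from the original answer:

```matlab
% Sketch: a dlnetwork with no input layer, so the spatial size is not
% fixed until data is supplied.
layers = [
    convolution2dLayer(3, 16, 'Name','conv1', 'Padding','same')
    reluLayer('Name','relu1')
    convolution2dLayer(30, 10, 'Name','fc_as_conv')  % 30x30 "FC" filter, stride 1, no padding
    softmaxLayer('Name','sm')];

net = dlnetwork(layerGraph(layers), 'Initialize', false);

% Initialize lazily from an example input; any spatial size works.
X   = dlarray(rand(400, 400, 3, 'single'), 'SSCB');
net = initialize(net, X);
Y   = predict(net, X);   % per-pixel class scores over the channel dimension
```

Because the softmax layer normalizes over the channel dimension, each spatial location of `Y` holds the class probabilities for one receptive field, which is the sliding-window behavior the question describes.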

1 comment

Matt J on 23 Jul 2021 (edited 23 Jul 2021)
Thank you @Hrishikesh Borate, but I'm not sure that this is what I'm after. What I'd like to be able to do, for example, is take a CNN classifier that has already been trained for 30x30 input images, but now use it to do classification on every 30x30 sub-block of a 400x400 image A.
The naive way to do this would be to loop over the sub-blocks in A and feed them to the CNN one at a time, but that is inefficient. The more efficient technique that I have seen recommended is to convert the CNN to a fully-convolutional network, which means adjusting the final layers as follows:
(1) Converting the fully-connected layer to a convolutional layer where the weights are now that of a 30x30 filter, stride=1, Padding=[0,0].
(2) Applying the softmax operation pixel-wise to the output of (1).
If you do the above, then every layer in the network is now a shift-invariant operation, and should be able to process input images of any size. If I input a 400x400 image A, the output of the network should be an N-channel image of size 371x371 where each pixel contains the N class probabilities of a particular 30x30 sub-block.
Is it possible to make these kinds of adjustments to an already trained CNN?


Categories

More about Deep Learning Toolbox in Help Center and File Exchange.

Version

R2020a

Asked: 18 Jul 2021
Edited: 23 Jul 2021
