ConvNets and sliding windows
(Rephrased) I am starting to play with the Deep Learning Toolbox and deepNetworkDesigner. For example, I would like to take a CNN classifier that has already been trained on 30x30 input images and use it to classify every 30x30 sub-block of a 400x400 image A.
The naive way to do this would be to loop over the sub-blocks in A and feed them to the CNN one at a time, but that is inefficient. The more efficient technique that I have seen recommended is to convert the CNN to a fully-convolutional network, which means adjusting the final layers as follows:
(1) Converting the fully-connected layer to a convolutional layer whose weights are those of a 30x30 filter, with stride 1 and padding [0,0].
(2) Applying the softmax operation pixel-wise to the output of (1).
After doing the above, every layer in the network is now a shift-invariant operation, and should be able to process input images of any size. If I input a 400x400 image A, the output of the network should be an N-channel image of size 371x371 where each pixel contains the N class probabilities of a particular 30x30 sub-block.
I am wondering if it is possible to make the kind of adjustments described above to an already trained CNN?
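Here is a rough sketch of the conversion I have in mind, in case it helps clarify the question. The layer name 'fc' and the 30x30 geometry are just placeholders for whatever the trained network actually uses, and the exact reshape order of the weights would still need to be checked:

% Sketch only: turn the trained fully connected layer into an equivalent
% 30x30 convolution so the network can slide over larger inputs.
% Assumes net is a trained SeriesNetwork/DAGNetwork whose fully connected
% layer is named 'fc' (name is illustrative).
fcLayer    = net.Layers(strcmp({net.Layers.Name}, 'fc'));
numClasses = fcLayer.OutputSize;

% The FC weights are numClasses-by-(30*30*C); fold them back into a
% 30x30xCxnumClasses convolution kernel. The exact reshape/permute order
% depends on how the preceding activations were flattened.
C           = fcLayer.InputSize / (30*30);
convWeights = reshape(fcLayer.Weights', 30, 30, C, numClasses);

convLayer = convolution2dLayer(30, numClasses, ...
    'Name', 'fc_as_conv', 'Stride', 1, 'Padding', 0, ...
    'Weights', convWeights, ...
    'Bias', reshape(fcLayer.Bias, 1, 1, numClasses));

% Swap the FC layer for its convolutional equivalent; the existing
% softmax layer already operates per spatial location, so it can stay.
lgraph = layerGraph(net);
lgraph = replaceLayer(lgraph, 'fc', convLayer);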
Answers (1)
Hrishikesh Borate
23 Jul 2021
Hi,
It's my understanding that you are trying to define a convolutional neural network without predeclaring the input image size. One possible approach is to create an uninitialized dlnetwork object without an input layer, as shown here.
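A minimal sketch of that approach is below. The layer names and sizes are only illustrative, and the uninitialized-network workflow requires R2021a or later:

% Define the layers without an image input layer so no input size is fixed.
numClasses = 10;  % illustrative
layers = [
    convolution2dLayer(3, 16, 'Padding', 'same', 'Name', 'conv1')
    reluLayer('Name', 'relu1')
    convolution2dLayer(30, numClasses, 'Name', 'fc_as_conv')  % former FC layer
    softmaxLayer('Name', 'softmax')];

% With no input layer, create the dlnetwork uninitialized.
net = dlnetwork(layerGraph(layers), 'Initialize', false);

% Initialize with an example input of any size, e.g. a 400x400 RGB image.
exampleInput = dlarray(zeros(400, 400, 3, 1, 'single'), 'SSCB');
net = initialize(net, exampleInput);

% Inference then produces a spatial map of per-block class scores.
scores = predict(net, exampleInput);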