faster lean bilinear imresize / improved gpuArray/imresize
Hi,
I'm currently processing lots of images in a convolutional neural network, and imresize.m is the major bottleneck (a quick search turns up a few other people complaining about imresize as well). Digging into the code, roughly 60% of the runtime is overhead from argument checking and extra function calls. I made a leaner version, but it requires access to the private imresizemex function. Would it be possible in a future release to, for example:
- create a lean imresize_bilinear function (as attached here)?
- move imresizemex outside of the private directory so it can be accessed? (I work on several different servers, often with different MATLAB versions, so copying imresizemex does not work well for me.)
Related:
- Are you working on making gpuArray/imresize support the syntax out = imresize(im, [numRows numCols])?
Example code is attached. Output:
>> testImresize
original bilinear resize: 2.364755
lean bilinear resize: 0.922745
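For reference, the core of the lean version looks roughly like the sketch below. This is only a sketch: the weight/index computation mirrors the contributions subfunction inside imresize.m, the imresizemex call signature is assumed from imresize.m's resizeAlongDim subfunction and may differ between releases, and the file must be able to reach the private imresizemex (which is exactly the access problem described above).

function out = imresize_bilinear_lean(in, out_size)
% Lean bilinear resize sketch: skips imresize's argument parsing and calls
% the private imresizemex directly. out_size = [numRows numCols].
% NOTE: imresizemex lives in toolbox/images/images/private, so this file
% must be placed where it can call it; the (in, weights', indices', dim)
% signature is assumed from imresize.m internals.
kernel = @(x) (x+1).*((-1 <= x) & (x < 0)) + (1-x).*((0 <= x) & (x <= 1)); % triangle kernel
kernel_width = 2;
scale = out_size ./ [size(in,1) size(in,2)];

out = in;
for dim = 1:2
    [weights, indices] = contributions(size(in, dim), out_size(dim), ...
        scale(dim), kernel, kernel_width);
    out = imresizemex(out, weights', indices', dim);  % private MEX call (assumed signature)
end
end

function [weights, indices] = contributions(in_length, out_length, scale, kernel, kernel_width)
% Compute interpolation weights and source indices along one dimension,
% following the approach of imresize.m's contributions subfunction.
if scale < 1
    % Antialias when shrinking by widening the kernel.
    h = @(x) scale * kernel(scale * x);
    kernel_width = kernel_width / scale;
else
    h = kernel;
end
x = (1:out_length)';
u = x/scale + 0.5*(1 - 1/scale);           % output pixel centers in input space
left = floor(u - kernel_width/2);
P = ceil(kernel_width) + 2;
indices = bsxfun(@plus, left, 0:P-1);
weights = h(bsxfun(@minus, u, indices));
weights = bsxfun(@rdivide, weights, sum(weights, 2));  % normalize rows
aux = [1:in_length, in_length:-1:1];        % symmetric boundary handling
indices = aux(mod(indices - 1, numel(aux)) + 1);
kill = find(~any(weights, 1));              % drop columns with no weight
weights(:, kill) = [];
indices(:, kill) = [];
end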
Thanks, Jasper