How to use lsqnonlin on a function which is mostly linear?
Iain McClatchie
on 2 Apr 2019
I have a (large) least squares minimization problem which is almost linear.
I want to find a set of weights for a small (7x7) kernel which best filters noise from a greyscale image. I have a reference no-noise image which has spatial correlation from both lens blur and structure in the scene. I also know how much measurement noise (approximately gaussian) will be added to each pixel.
My approach is to build a matrix A with hundreds of thousands of rows (n), one per pixel, and 49 columns, one for each pixel in the 7x7 neighborhood of the filtered pixel. A contains the reference no-noise image data. I also have the desired output Y, with the same hundreds of thousands of rows (one per pixel) and a single column (the center pixel), also taken from the no-noise image. I'm looking for a 49x1 weight vector x.
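The construction above can be sketched like this (illustrative only; `referenceImage` is a placeholder name, and `im2col` from the Image Processing Toolbox is one way to extract the sliding 7x7 neighborhoods):

```matlab
% Build A (n x 49) and Y (n x 1) from the no-noise reference image.
R = double(referenceImage);          % reference image, as doubles
cols = im2col(R, [7 7], 'sliding');  % 49 x n: each column is one 7x7 patch
A = cols';                           % n x 49: one row per pixel neighborhood
Y = cols(25, :)';                    % n x 1: entry 25 is the patch center
                                     % (row 4, col 4, column-major order)
```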
When a picture is actually taken, each pixel has measurement error with variance V added to it.
So I'm looking for the x that minimizes sum((A*x - Y).^2) + n * x'*x * V.
The trivial filter weights are zero everywhere except the center pixel. That filter gives A*x - Y = 0, and passes through n*V of summed noise variance across the entire picture; I use it as my starting point. I have the optimization working with lsqnonlin: the optimizer figures out how much weight to give the surrounding pixels, trading a reduction in variance against an increase in estimation inaccuracy.
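For concreteness, the trivial identity filter used as the starting point (assuming the 49 weights are ordered column-major over the 7x7 neighborhood, so the center is entry 25):

```matlab
% Identity filter: weight 1 on the center pixel, 0 elsewhere.
x0 = zeros(49, 1);
x0(25) = 1;
% At x0 the data term sum((A*x0 - Y).^2) is exactly zero, so the
% objective reduces to the noise term n*V*(x0'*x0) = n*V.
```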
Yes, there are linear constraints on the weights (must sum to 1, center of mass must be centered). I actually have fewer than 49 optimization variables that I transform into the weights to ensure these constraints, but (a) that part works, and (b) it's an unnecessary complication to the problem I'd like help with.
The trouble is that my objective function hands lsqnonlin a gigantic residual vector with n+1 entries (the extra entry is sqrt(n*x'*x*V)), so the Jacobian is enormous (40-odd optimization variables by hundreds of thousands of rows). The optimizer (trust-region-reflective) stops after it exceeds the function evaluation limit (97 iterations, 4400 evaluations) while the step size is still largish, so I don't think it has found a good minimum yet. It's also slow, and I'd like to run thousands of these and expand the number of rows a hundredfold.
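The residual vector I currently hand to lsqnonlin looks something like this (a sketch; A, Y, n, V, and x0 are the quantities defined above):

```matlab
% lsqnonlin minimizes the sum of squares of this residual vector,
% which equals sum((A*x - Y).^2) + n*V*(x'*x).
res = @(x) [A*x - Y; sqrt(n * V * (x'*x))];
opts = optimoptions('lsqnonlin', 'Algorithm', 'trust-region-reflective');
x = lsqnonlin(res, x0, [], [], opts);
```

Note that [A*x - Y; sqrt(n*V)*x] has the same sum of squares, replacing the single norm term with 49 terms that are each linear in x.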
I think my approach is very inefficient and there is probably a better way. What is it?
0 comments
Accepted Answer
More Answers (0)