Stochastic Gradient Descent (SGD) for Image Processing

24 views (last 30 days)
Ibraheem Al-Dhamari
Ibraheem Al-Dhamari on 24 Jan 2017
Commented: Dan on 23 Jun 2017
Dear all,
I am trying to apply SGD to solve a classical image processing problem, as in this link. I am not sure what I should change. Here is the gradient descent code:
niter = 500; % number of iterations
x = u;       % initial value for x; u is the input noisy image
for i = 1:niter
    % smoothed total variation of the image
    gdx = grad(x).^2;
    sgdx = gdx(:,:,1) + gdx(:,:,2);
    NormEps = sqrt(epsilon^2 + sgdx);
    J = sum(NormEps(:)); % this is a scalar value
    % functional to minimize; lambda is the weight of J
    % (nm is already the squared norm ||x - u||^2, so it must not be squared again)
    nm = sum((x(:) - u(:)).^2);
    f = 1/2 * nm + lambda * J;
    % normalized gradient of J
    GradJ = -div(grad(x) ./ repmat(NormEps, [1 1 2]));
    % gradient descent update; the gradient of the functional f is:
    % x - u + lambda * GradJ
    x = x - tau * (x - u + lambda * GradJ);
end
clf;
imageplot(clamp(x)); % this is the resulting denoised image
I understand that in SGD we take only a random part of the image at each iteration and then compute the minimum, but if I apply this to the input noisy image, I will denoise (badly) a small part of the image at each iteration, right? An explanation based on the code above would be excellent!
Best regards,
Ibraheem
  1 comment
Dan
Dan on 23 Jun 2017
Hi Ibraheem, I have updated an existing algorithm to apply SGD: https://www.mathworks.com/matlabcentral/fileexchange/62921-ecc-registration-100x-faster
Enjoy, Dan


Accepted Answer

Xilin Li
Xilin Li on 2 Feb 2017
Updating a random part of the image at each iteration is not SGD. In SGD, the parameter you want to optimize, say x, is the same x across all iterations; what is noisy is the gradient used to update x, because an expectation is replaced with a sample average. I checked your image denoising problem: it is a standard convex optimization problem, and there are many efficient solvers for it. You left a comment on my psgd post, and I showed there how to use psgd to converge to better solutions faster.
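A minimal toy sketch (not psgd, and not your image problem) may make the distinction concrete: fitting a single slope a with SGD. The parameter a is the same variable at every iteration; only the gradient is noisy, because each step uses one random sample instead of all N:
N   = 1000;
t   = randn(N, 1);
y   = 3*t + 0.1*randn(N, 1); % data generated with true slope 3
a   = 0;                     % single parameter, updated at every iteration
eta = 0.01;                  % step size
for k = 1:1e4
    i = randi(N);                 % draw one random sample
    g = (a*t(i) - y(i)) * t(i);   % gradient of 1/2*(a*t(i) - y(i))^2
    a = a - eta * g;              % SGD update of the SAME parameter a
end
% a is now close to 3; the full-gradient version would instead use
% g = mean((a*t - y) .* t) at every step.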
  1 comment
Ibraheem Al-Dhamari
Ibraheem Al-Dhamari on 3 Feb 2017
Edited: Ibraheem Al-Dhamari on 3 Feb 2017
Many thanks for your answer. I will check your code and try to understand the difference.
[Update]
I checked the code and still do not see where the stochastic part is. I saw these lines:
dX = sqrt(eps)*randn(size(x));
dG = Gradf(y,x+dX,epsilon) - Gradf(y,x,epsilon);
I think the first line adds random noise to x (the filtered image), but you still need to compute the gradient of the whole image, not just part of it.
From Wikipedia:
"In stochastic (or "on-line") gradient descent, the true gradient of Q(w) is approximated by a gradient at a single example".
So I understood that we should compute the gradient of a part of the image instead of the whole image, right?
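For example, would something like this count as SGD here? This is just my own sketch, reusing grad, div, and the parameters from my code above; I subsample only the fidelity term, since div and grad seem to need the whole image:
x = u;
p = 0.25; % fraction of pixels sampled per iteration
for i = 1:niter
    gdx     = grad(x).^2;
    NormEps = sqrt(epsilon^2 + gdx(:,:,1) + gdx(:,:,2));
    GradJ   = -div(grad(x) ./ repmat(NormEps, [1 1 2]));
    M    = rand(size(u)) < p;   % random pixel mask
    gFid = ((x - u) .* M) / p;  % subsampled fidelity gradient, rescaled so its expectation is x - u
    x    = x - tau * (gFid + lambda * GradJ);
end
Is that the right idea, or does the stochastic part have to come from somewhere else?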


More Answers (0)
