Preconditioned stochastic gradient descent

Version 1.2.0.0 (568 KB) by Xilin Li
Upgrading the stochastic gradient descent method to a second order optimization method
755 Downloads
Updated 23 Jul 2016

View License

This package demonstrates the method proposed in the paper http://arxiv.org/abs/1512.04202, which shows how to upgrade a stochastic gradient descent (SGD) method to a second order optimization method by preconditioning. More materials (pseudo code, further examples, and papers) are available at https://sites.google.com/site/lixilinx/home/psgd.
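As a rough, self-contained sketch of the idea on a toy problem (an illustration only, not the enclosed code, which trains a small neural network), the gradient is evaluated twice per iteration, the extra evaluation drives an online estimate of a preconditioner P = Q'*Q, and the parameter update uses the preconditioned gradient:

% Toy PSGD-style sketch on a noisy, ill-conditioned quadratic (illustration only)
H     = diag([100, 1]);      % toy Hessian with condition number 100
theta = [1; 1];
Q     = 0.1*eye(2);          % upper triangular factor, P = Q'*Q
mu    = 0.1;                 % normalized step size for theta
for iter = 1:1000
    noise = 0.1*randn(2, 1);
    g  = H*theta + noise;                 % stochastic gradient
    dt = sqrt(eps)*randn(2, 1);           % small parameter perturbation
    dg = H*(theta + dt) + noise - g;      % induced gradient change, same sample
    a  = Q*dg;  b = Q'\dt;                % fit Q'*Q to the (dt, dg) pair
    gradQ = triu(a*a' - b*b');
    Q  = Q - 0.02*gradQ*Q/(max(max(abs(gradQ))) + eps);
    theta = theta - mu*(Q'*(Q*g));        % preconditioned SGD step
end
% Q'*Q approaches inv(H) here, so the step becomes nearly Newton-like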

Descriptions of enclosed files
binary_pattern.m
This file generates the zebra-stripe-like binary pattern to be learned by our four tested algorithms.

plain_SGD.m
This demo shows how to use standard SGD to train a neural network by minimizing the logistic loss. As usual, SGD requires some tuning: convergence is too slow with small step sizes and becomes unstable with large ones.
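A minimal self-contained sketch of this tuning problem on a toy quadratic (illustration only, not the enclosed demo, which minimizes a logistic loss): with a Hessian eigenvalue of 100, step sizes above about 2/100 diverge, while much smaller ones converge very slowly.

% Plain SGD on an ill-conditioned toy quadratic (illustration only)
H     = diag([100, 1]);             % condition number 100
theta = [1; 1];
mu    = 0.01;                       % hand-tuned step size
for iter = 1:500
    g = H*theta + 0.1*randn(2, 1);  % stochastic gradient
    theta = theta - mu*g;           % plain SGD step
end
% the second coordinate still contracts only by a factor of 0.99 per step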

preconditioned_SGD_dense.m
This demo shows how to precondition SGD to improve its convergence using a dense preconditioner. We do need to calculate the gradient twice at each iteration, but convergence is much faster and less tuning effort is required. The step size is normalized, and a value in the range [0.01, 0.1] seems good.
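The pattern of the two gradient evaluations can be sketched as follows (illustration only, with a simple least squares loss and hypothetical names; the second evaluation is taken at a slightly perturbed point, typically on the same mini-batch):

% Two gradient evaluations per iteration (illustration only)
X = randn(50, 3);  y = randn(50, 1);            % one mini-batch
gradfun = @(w) X'*(X*w - y)/size(X, 1);         % gradient of 0.5*mean((X*w - y).^2)
w  = zeros(3, 1);
dw = sqrt(eps)*randn(3, 1);                     % tiny random perturbation
g  = gradfun(w);                                % gradient used to update w
dg = gradfun(w + dw) - g;                       % gradient change used to update the preconditioner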

preconditioned_SGD_sparse.m
This demo shows how to approximate a preconditioner as direct sums and/or Kronecker products of smaller matrices. In practice, the scale of the problem can be so large that the preconditioner must be represented sparsely to make its estimation affordable.

preconditioner_kron.m
This function shows how to adaptively estimate a Kronecker product approximation of a preconditioner for parameters in matrix form.
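The benefit of the Kronecker form can be seen from the standard identity kron(P1, P2)*vec(G) = vec(P2*G*P1'), so the full preconditioner never has to be formed for a gradient stored as a matrix. A small numerical check (illustration only):

% Kronecker-factored preconditioning of a matrix-shaped gradient (illustration only)
P1 = randn(3); P1 = P1*P1' + eye(3);   % small positive definite factors
P2 = randn(4); P2 = P2*P2' + eye(4);
G  = randn(4, 3);                      % gradient of a 4-by-3 weight matrix
lhs = kron(P1, P2)*G(:);               % explicit (expensive) form
rhs = P2*G*P1';                        % factored (cheap) form
disp(norm(lhs - rhs(:)))               % agrees to machine precision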

preconditioner.m
This function shows how to adaptively estimate a preconditioner via gradient perturbation analysis.
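A minimal sketch of what such a perturbation-based update may look like, written as a standalone function; the function name, the eps safeguard, and the exact normalization are illustrative assumptions, not necessarily those of preconditioner.m (the release notes below give the normalization actually used):

function Q = update_preconditioner_sketch(Q, dtheta, dg, step_size)
% One update of the upper triangular factor Q (with P = Q'*Q) from a
% parameter perturbation dtheta and the induced gradient change dg.
a = Q*dg;                 % Q applied to the gradient perturbation
b = Q'\dtheta;            % inv(Q') applied to the parameter perturbation
grad = triu(a*a' - b*b'); % relative gradient of the fitting criterion
Q = Q - step_size*grad*Q/(max(max(abs(grad))) + eps);   % normalized step
end

At its fixed point, Q'*Q maps gradient perturbations back to the scale of the parameter perturbations that caused them, which is what gives the preconditioned step its second order behavior.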

RMSProp_SGD.m
This demo implements RMSProp, a popular variation of SGD for neural network training. As with standard SGD, it can be difficult to tune.
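For comparison, a minimal self-contained RMSProp sketch on a toy quadratic (illustration only; the constants are common defaults, not necessarily those used in RMSProp_SGD.m):

% RMSProp on a toy quadratic (illustration only)
H = diag([100, 1]);  theta = [1; 1];
mu = 0.01;  rho = 0.99;  delta = 1e-8;
v = zeros(2, 1);                             % running average of g.^2
for iter = 1:500
    g = H*theta + 0.1*randn(2, 1);           % stochastic gradient
    v = rho*v + (1 - rho)*g.^2;              % second moment estimate
    theta = theta - mu*g./(sqrt(v) + delta); % element-wise scaled step
end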

Cite As

Xilin Li (2026). Preconditioned stochastic gradient descent (https://la.mathworks.com/matlabcentral/fileexchange/54525-preconditioned-stochastic-gradient-descent), MATLAB Central File Exchange. Retrieved .

MATLAB Release Compatibility
Created with R2015a
Compatible with any release
Platform Compatibility
Windows macOS Linux
Version Published Release Notes
1.2.0.0

The step size normalization factor in preconditioner estimation is changed to max(max(abs(grad))).

1.1.0.0

Revised the preconditioner estimation method. Specifically,
Q = Q - step_size*grad*Q/(max(abs(diag(grad))) + eps);
is changed to
Q = Q - step_size*grad*Q/max(max(abs(diag(grad))), 1);

1.0.0.0