Hello everyone, I have a question about optimization

Ahmed Galal on 16 Nov 2020
Commented: Bruno Luong on 23 Nov 2020
Please find the attached question
3 comments
Bruno Luong on 16 Nov 2020
Edited: Bruno Luong on 16 Nov 2020
In this PDF, r_m and c_m cannot be vectors, since they are bounds on norms, and norms are scalars.
Ahmed Galal on 17 Nov 2020
Sorry, they are not identical; only the constraints are identical. The objective function is L1, L2, and Linf, not only L1. I made a mistake.


Accepted Answer

Bruno Luong on 16 Nov 2020
Edited: Bruno Luong on 17 Nov 2020
The constraint (1c)
norm(R11*w + r00, Inf) >= rm
can be transformed into a union of 42 half-planes:
R11(i,:)*w + r00(i) >= rm
or
R11(i,:)*w + r00(i) <= -rm
for i = 1, 2, ..., 21.
I would then suggest solving 42 linear-programming subproblems, replacing (1c) with one of these conditions in each. Each subproblem can be solved with intlinprog; then we just take the argmin over the 42 subproblem solutions.
It should be more predictable and robust than GA.
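A minimal sketch of this enumeration, with hypothetical placeholder data (R11 21-by-n, r00 21-by-1, a dummy objective f, and box constraints standing in for the rest of the problem). It uses linprog for a pure LP; intlinprog, as mentioned above, would apply if some variables are integer:

```matlab
% Enumerate the 2*m half-planes covering norm(R11*w + r00, Inf) >= rm
% (m = 21 in the question), solve one LP per half-plane, keep the argmin.
rng(0);                                % hypothetical data below
n = 3;  m = 21;
R11 = randn(m, n);  r00 = randn(m, 1);  rm = 0.5;
f = ones(n, 1);                        % placeholder objective
A = [eye(n); -eye(n)];  b = 10*ones(2*n, 1);   % placeholder: |w_i| <= 10

bestF = Inf;  bestW = [];
for i = 1:m
    for s = [1, -1]                    % s = +1: row i >= rm;  s = -1: row i <= -rm
        % s*(R11(i,:)*w + r00(i)) >= rm  <=>  -s*R11(i,:)*w <= s*r00(i) - rm
        Ai = [A; -s*R11(i,:)];
        bi = [b;  s*r00(i) - rm];
        [w, fval, exitflag] = linprog(f, Ai, bi, [], [], [], [], ...
            optimoptions('linprog', 'Display', 'none'));
        if exitflag == 1 && fval < bestF
            bestF = fval;  bestW = w;  % running argmin over feasible subproblems
        end
    end
end
```

Each subproblem is convex, so every local solution is global; the only nonconvexity is the outer enumeration over the 42 half-planes.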
12 comments
Matt J on 23 Nov 2020
Edited: Matt J on 23 Nov 2020
so no, the gradient does not use parentheses like yours.
OK, but they should do it that way, no? They missed the opportunity to take advantage of the special form of the problem.
Bruno Luong on 23 Nov 2020
Does it really matter for accuracy? At the cost of two matrix-vector products per gradient evaluation instead of one?


More Answers (0)

Categories

Find more on Linear Least Squares in Help Center and File Exchange.

