Exponential constraint MIP optimization
Is there any trick to handle an exponential constraint in mixed-integer programming optimization?
Answers (1)
Matt J
on 29 May 2016
If an exponential constraint means
exp(A*x)<=b
then the trick would be to take the log of both sides (valid for b > 0, since log is increasing), which makes the constraint linear:
A*x<=log(b)
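Because exp() is strictly increasing and log() is its inverse, the two forms admit exactly the same x whenever b > 0. A quick numeric sanity check (Python sketch, with a scalar standing in for A*x):

```python
import math

b = 5.0  # right-hand side; must be positive for log(b) to exist

# Sample values of A*x on both sides of the boundary log(5) ~ 1.609
for ax in (1.0, 1.6, 1.7, 2.0):
    nonlinear = math.exp(ax) <= b   # original constraint
    linear = ax <= math.log(b)      # log-transformed constraint
    assert nonlinear == linear      # the two tests always agree
```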
7 comments
Nana
on 29 May 2016
Matt J
on 29 May 2016
Taking the log of the left and right sides of
exp(A*x)<=b
gives
A*x<=log(b)
Nana
on 1 Jun 2016
Walter Roberson
on 2 Jun 2016
... well, to within numeric round-off. There is the question of which has worse round-off, exp(A*x) or log(b). That is, of course, going to depend upon the range of values being used, which we are not given any information about.
I just ran some experimental tests with creating random numbers over log(eps(realmin)) to log(realmax), taking care to be able to generate all possible numbers (i.e., not just rand() times something.)
I find that in over 98.8% of the cases, exp() of the number is the same as exp() of the next adjacent representable number. On the other hand, I find that in 99.2% of the cases, log() is different for a number and its adjacent representable number.
Together, these mean that the "problem" with using A*x<=log(b) is that it would have more precision than the exp(A*x)<=b form.
I did find one interesting case: for the denormalized numbers, r is not always equal to log(exp(r)). This is the only case in which the exp(A*x)<=b form would be more accurate, if A*x is one of the denormalized numbers. The denormalized numbers are the non-zero numbers that are smaller than realmin, and they are essentially fixed-point numbers rather than floating-point numbers. They are in the range 4.94065645841247e-324 to 2.2250738585072e-308.
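The denormalized-number effect is easy to reproduce (Python sketch): for a subnormal r, 1 + r rounds to exactly 1.0 inside exp(), so log(exp(r)) collapses to 0 and r is lost entirely.

```python
import math

r = 5e-324                            # smallest positive subnormal double
assert r > 0.0
assert math.exp(r) == 1.0             # exp(r) ~ 1 + r rounds to exactly 1.0
assert math.log(math.exp(r)) == 0.0   # the round trip loses r completely

# For a normal-range r, the round trip is exact or off by only a few ulp
r = 0.5
assert abs(math.log(math.exp(r)) - r) <= 4 * math.ulp(r)
```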
Matt J
I find that in over 98.8% of the cases, exp() of the number is the same as exp() of the next adjacent representable number. On the other hand, I find that in 99.2% of the cases, log() is different for a number and its adjacent representable number.
I'm not sure that's the right comparison, because the 98.8% and the 99.2% occur over different sets of b values. I think instead you have to compare, in the neighborhood of each fixed b, whether log(b) gives more precision than exp(log(b)).
The numerical precision of the constraint function will only be relevant when the solution lies near the boundary of the feasible set, i.e., where A*x is nearly equal to log(b). That is the scenario where the iterative optimizer will be challenged to resolve feasible from non-feasible solutions. Precision will also only be relevant for b "sufficiently greater than 0". When b gets too close to zero, the feasible set boundary, whether expressed as exp(A*x)=b or A*x=log(b) is very sensitive to b and the problem becomes ill-conditioned.
So, for a given b, it is really a question of whether the constraint violation as measured by log(b)-q evaluates with more numerical error than with b-exp(q), where q is the actual value of log(b) free of numerical error. And this must be answered for relevant b (i.e., "far enough" from zero).
Walter Roberson
on 3 Jun 2016
Edited: Matt J
on 4 Jun 2016
The test code I used was
min32 = 0;                    % not defined in the original post; the full uint32
max32 = intmax('uint32');     % range is assumed so every bit pattern can occur
randd = @() typecast(randi([min32,max32],[1 2],'uint32'),'double');
lnmax = log(realmax);         % largest r for which exp(r) stays finite
lnmin = log(eps(realmin));    % smallest r for which exp(r) stays nonzero
cnt = 0;                      % accepted random numbers so far
tries = 0;                    % total random numbers drawn so far
tot = zeros(3,3,2);
while cnt < 5E6
    tries = tries + 1;
    r = randd();
    if isnan(r) || isinf(r) || r > lnmax || r < lnmin
        continue
    end
    cnt = cnt + 1;
    row  = 2 + sign(r);                          % negative / zero / positive r
    col1 = 2 + sign(exp(r) - exp(r*(1+eps)));    % first test: does a 1-eps change in r change exp(r)?
    col2 = 2 + sign(r - log(exp(r)*(1+eps)));    % second test: does a 1-eps change in b change log(b)?
    tot(row,col1,1) = tot(row,col1,1) + 1;
    tot(row,col2,2) = tot(row,col2,2) + 1;
end
Here, 5E6 is the number of successful random numbers to require; cnt is how many successful random numbers so far; tries is the number of random numbers tried so far (some will be out of range or NaN); r is the random number. tot will be a 3 x 3 x 2 array: the first index is by sign(r), so whether the random number is negative, zero, or positive; the second index is the sign() of the comparison being done, so smaller/equal/higher; the third index is first test vs. second test. The sum over tot(:,:,1) should be cnt, as should the sum over tot(:,:,2).
lnmax is to prevent exp(r) from overflowing to inf. lnmin is to prevent exp(r) from underflowing to 0 because log(0) is -inf.
First test is how exp(r) compares to exp(r*(1+eps)), which measures whether the precision of A*x is important when you are going to take exp() of it; the answer is that the two usually compare the same, indicating that you are "wasting accuracy" of A*x when you exp() it.
Second test is how r compares to log(exp(r)*(1+eps)), which measures whether the accuracy of b is important if you compare A*x <= log(b). The answer is that it usually is important. This has to be read in conjunction with the first test's results to see that the A*x <= log(b) approach is more accurate than the exp(A*x) <= b form.
You could adapt the test framework for the exp(log()) direction, but you would need different selection bounds, eps(realmin) and realmax().
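A rough port of this experiment (Python sketch; bit-uniform doubles via struct instead of typecast, a smaller sample count, and lnmax nudged down slightly so exp() cannot overflow, which in Python raises an error rather than returning inf) reproduces the qualitative result: on this sampling, exp() almost always maps a number and its one-eps perturbation to the same value, while the log(b) comparison almost always distinguishes them.

```python
import math
import random
import struct
import sys

random.seed(0)
# Slightly below log(realmax) so exp(r*(1+eps)) cannot overflow for accepted r
lnmax = math.log(sys.float_info.max) - 0.01
lnmin = math.log(5e-324)   # log of the smallest subnormal double, ~ -744.44

def randd():
    # Reinterpret 8 random bytes as a double, so every bit pattern can occur,
    # mirroring the typecast(randi(...),'double') trick in the original post
    return struct.unpack('<d', random.getrandbits(64).to_bytes(8, 'little'))[0]

eps = 2.0 ** -52           # MATLAB's eps for doubles
same_exp = diff_log = cnt = 0
while cnt < 50_000:
    r = randd()
    if math.isnan(r) or math.isinf(r) or r > lnmax or r < lnmin:
        continue
    cnt += 1
    if math.exp(r) == math.exp(r * (1 + eps)):
        same_exp += 1      # a one-eps change in A*x does not change exp(A*x)
    if r != math.log(math.exp(r) * (1 + eps)):
        diff_log += 1      # a one-eps change in b does change log(b)

print(same_exp / cnt, diff_log / cnt)   # both fractions come out close to 1
```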