YALMIP outputs "Infeasible" for an easy, feasible SDP

I want to determine whether a given 3x3 matrix is positive-semidefinite or not. To do so, I write the following SDP in YALMIP
v=0.2;
a=sdpvar(1);
b=sdpvar(1);
M=[1 a -v/4 ; b 1 0 ; -v/4 0 0.25];
x=sdpvar(1);
optimize([M+x*eye(3)>=0],x,sdpsettings('solver','sedumi'))
This program gives me the error "Dual infeasible, primal improving direction found". This happens for any value of v in the interval (0,1].
Given that this problem is tractable, I diagonalized the matrix directly, obtaining that the three eigenvalues are the roots of the following polynomial:
16*t^3 - 36*t^2 + (24 - 16*a*b - v^2)*t + (-4 + 4*a*b + v^2)
Computing the three roots numerically, I see that all of them are positive when sign(a)=sign(b) (except for a small region in the neighbourhood of a,b=+-1), for any value of v. Therefore, the SDP should run without problems and output a negative value of x.
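For reference, here is a quick way to do that numerical check (the values of a and b are just examples with sign(a)=sign(b)):
v = 0.2; a = 0.5; b = 0.5;                 % example values with sign(a) = sign(b)
p = [16, -36, 24 - 16*a*b - v^2, -4 + 4*a*b + v^2];
roots(p)                                   % roots of the polynomial above
eig([1 a -v/4 ; b 1 0 ; -v/4 0 0.25])      % eigenvalues of M: the same three values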
To make things more interesting, I ran the same code with the following matrix
M=[1 a v/4 ; b 1 0 ; v/4 0 0.25];
This matrix has the same eigenvalues as the previous one, and in this case the program runs without any issues, confirming that the matrix is indeed positive-semidefinite.
I am really curious about the nature of this issue, any help would be really appreciated.
EDIT: I also tried the SDPT3 solver, and the results are very similar. In fact, the program runs smoothly for the case of +v, but when I put a minus sign I get the following error:
'Unknown problem in solver (Turn on 'debug' in sdpsettings) (Error using & …'
Furthermore, when I add some restrictions to the variables, i.e., I run the following command
optimize([M+x*eye(3)>=0,-1<=a<=1,-1<=b<=1],x,sdpsettings('solver','sdpt3'))
Then the error turns to an 'Infeasible problem' error.

Late answer, but anyway: the matrix you have specified is not symmetric. Semidefinite programming is about optimization over the set of symmetric positive semidefinite matrices.
When you define a constraint on a non-symmetric matrix in YALMIP, it is simply interpreted as a set of 9 elementwise linear inequalities, and that linear program is trivially infeasible here, since the constant entry -v/4 is constrained to be non-negative. (With +v/4 the elementwise constraints happen to be satisfiable, which is why that case "works", but it still says nothing about positive semidefiniteness.)
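For example, if the matrix was actually meant to be symmetric with a single variable on the off-diagonal (only you can say whether a and b were really intended to be distinct), essentially the same code goes through as a proper semidefinite constraint:
v = 0.2;
a = sdpvar(1);
x = sdpvar(1);
% Same expression in the (1,2) and (2,1) positions makes M symmetric
M = [1 a -v/4 ; a 1 0 ; -v/4 0 0.25];
optimize([M + x*eye(3) >= 0], x, sdpsettings('solver','sedumi'))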


Maple unable to evaluate function in whole range of plot

Maple can helpfully work out the solution to Laplace's equation in a square region and give me the answer in closed form (in terms of an infinite sum). If I try to plot the resulting function of two variables as a 3D plot, it gives me most of the surface but not all of it.
Here is the Maple code which produces the solution and turns it into an expression suitable for plotting
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=eval(v(x,y),sol1);
the result of which is the closed-form infinite-sum expression for v(x,y).
All good so far. Plotting it via
plot3d(vxy,x=0..1,y=0..1);
gives a result which is fine for x in the full range (0<x<1) but only for y between 0 and around 0.9.
I have tried to evalf some point in the unknown region and Maple can't tell me numerical values there. Is there any way to get Maple to "try a bit harder" to evaluate those numbers?
You could try setting the number of terms in the sum explicitly.
Compare
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=subs(infinity=100,sol1);
plot3d(rhs(vxy),x=0..1,y=0..1);
With
restart;
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=eval(v(x,y),sol1);
plot3d(vxy,x=0..1,y=0..1);
I'm not a huge fan of chopping the infinite sum at some value of the upper bound for n without at least demonstrating, either symbolically or numerically, that it is justified, i.e. that the chopping does not give a false impression of convergence.
So, you asked how to make it try "harder". I'll take that to mean that you too might prefer to let evalf/Sum itself decide whether each infinite numeric sum converges, rather than manually truncating it at some finite upper value of the range for n.
For fun, and out of caution, I also divide both the numerator and the denominator of K by the exp call, which can be much larger than 1. That may not be necessary here.
restart;
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0:
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100:
sol1:=pdsolve({lapeq,bcs}):
vxy:=eval(v(x,y),sol1):
K:=op(1,vxy):
J:=simplify(combine(numer(K)/exp(2*Pi*n)))
/simplify(combine(denom(K)/exp(2*Pi*n))):
F:=subs(__d=J,
proc(x,y) local k, m, n, r;
if y<0.8 then
r:=Sum(__d,n=1..infinity);
else
UseHardwareFloats:=false;
m := ceil(1*abs(y/0.80)^16);
r:=add(Sum(eval(__d,n=m*n-k),n=1..infinity),
k=0..m-1);
end if;
evalf(r);
end proc):
plot3d( F, 0..1, 0..0.99 );
Naturally this is slower than mere chopping of terms to obtain a finite sum. And you might be satisfied with some technique that establishes that the excluded terms' sums are negligible.

Matlab function NNZ, numerical zero

I am working on code in a least-squares non-negative solution recovery context in MATLAB, and I need (without going into more detail, because it's not important for this question) to know the number of non-zero elements in my matrices and arrays.
The MATLAB function nnz does exactly what I want, but I need more information about what MATLAB considers a "zero element": is it only 0 itself, or also a numerical zero like 1e-16 or smaller?
Does anybody have this information about the nnz function? I couldn't find the original source.
Thanks.
PS: I am not an expert in MATLAB, so my apologies if this is a really simple task.
I tried "open nnz" in MATLAB, but I only got a small file of comment lines...
Since nnz counts everything that isn't an exact zero (i.e. 1e-100 is non-zero), you just have to apply a relational operator to your data first to find how many values exceed some tolerance around zero. For a matrix A:
n = nnz(abs(A) > 1e-16);
Also, this discussion of floating-point comparison might be of interest to you.
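For example, with a small made-up matrix and the same 1e-16 tolerance as above:
A = [0 1e-100 0.5 ; 1e-20 0 2];
nnz(A)                  % 4: the entries 1e-100 and 1e-20 count as non-zero
nnz(abs(A) > 1e-16)     % 2: only 0.5 and 2 exceed the tolerance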
You can add in a tolerance by doing something like:
nnz(abs(myarray)>tol);
This creates a logical array that is 1 where abs(myarray)>tol and 0 otherwise, and then counts the number of non-zero entries.

Matlab non-linear binary Minimisation

I have to set up a phoneme table with a specific probability distribution for encoding things.
There are 22 base elements (each with an assigned probability, summing to 100%), which shall be mapped onto a 12-element table with desired element probabilities (also summing to 100%).
So part of the minimisation is to merge several base elements to obtain the 12 table elements. Each base element must occur exactly once.
In addition, the table has 3 rows. So the same 12 element composition of the 22 base elements must minimise the error for 3 target vectors. Let's say the given target vectors are b1,b2,b3 (dimension 12x1), the given base vector is x (dimension 22x1) and they are connected by the unknown matrix A (12x22) by:
b1+err1=Ax
b2+err2=Ax
b3+err3=Ax
To sum it up: A is to be found so that dot_prod(err1+err2+err3, err1+err2+err3) is minimised (least squares). And, according to the above explanation, A must contain only 1's and 0's, with exactly one 1 per column.
Unfortunately I have no idea how to approach this problem. Can it be expressed in a way different from the matrix-vector form?
Which tools in matlab could do it?
I think I found the answer while parsing some sections of the Matlab documentation.
First of all, the problem can be rewritten as:
errSum = err1 + err2 + err3 = 3Ax - b1 - b2 - b3
=> minimise dot_prod(errSum, errSum) over A
Applying the dot product (least squares) yields a quadratic scalar expression.
Syntax-wise, the fmincon tool within the Optimization Toolbox could do the job. It has constraint parameters which allow forcing Aij to be binary and each column to sum to 1.
But algorithm-wise, fmincon is apparently not ideal for binary problems, and the ga tool should be used instead; it can be called in a similar way.
Since the equation would be very long in my case and would need to be written out, I haven't tried it yet. Please correct me if I'm wrong, or add further solution methods if available.
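Here is a rough, untested sketch of how this might look with ga. Since each column of A contains exactly one 1, A can be encoded by an assignment vector idx of length 22, where idx(j) is the row of the 1 in column j; that removes the column-sum constraint entirely. The data vectors below are only placeholders for the real x, b1, b2, b3:
nBase = 22; nTab = 12;
x  = rand(nBase,1); x  = x/sum(x);    % placeholder base probabilities
b1 = rand(nTab,1);  b1 = b1/sum(b1);  % placeholder target vectors
b2 = rand(nTab,1);  b2 = b2/sum(b2);
b3 = rand(nTab,1);  b3 = b3/sum(b3);
bSum = b1 + b2 + b3;
% Fitness: A*x collapses to summing x over the groups defined by idx
obj = @(idx) norm(3*accumarray(round(idx(:)), x, [nTab 1]) - bSum)^2;
% 22 integer variables, each between 1 and 12
idx = ga(obj, nBase, [], [], [], [], ones(1,nBase), nTab*ones(1,nBase), [], 1:nBase);
% Recover the binary matrix A from the assignment vector
A = full(sparse(round(idx), 1:nBase, 1, nTab, nBase));
Whether ga actually finds the global optimum over the 12^22 possible assignments is another question, but this encoding at least keeps every candidate feasible by construction.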

ILMath.Vec() Appears to be Generating Slightly Wrong Output (Potential Bug?)

I'm comparing ILMath.Vec() with MATLAB's equivalent and I'm seeing significant rounding errors.
For example, if I create a vector (using Start:0, Step:1.2635048525911006, End:20700) for each system:
MatLab: [Start:Step:End]
ILNumerics: Vec<double>(Start, Step, End)
When I then take the average absolute difference, I get an average error of 1.56019608343883E-09. However, if I create the vector by hand (using multiplication), I get an average error of only 3.10973469197506E-13, four orders of magnitude smaller.
After looking at ILNumerics' vec function (using Reflector), I think I know why the average error is so large. The ILMath.vec() function builds the vector by repeated addition rather than multiplication. Summing the step value 16,384 times is not the same as multiplying the step value by N (where N is the current loop index) at each of the 16,384 positions! The addition's rounding errors accumulate very quickly.
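Here is a quick MATLAB sketch of the effect (not the ILNumerics code itself, just the same step value and a similar element count):
step = 1.2635048525911006;
N = 16384;
% Repeated addition: one rounding error per step, and the errors accumulate
vAdd = zeros(1, N);
acc = 0;
for k = 2:N
    acc = acc + step;
    vAdd(k) = acc;
end
% Direct multiplication: a single rounding per element
vMul = (0:N-1) * step;
max(abs(vAdd - vMul))   % the accumulated error is typically much larger than eps(N*step)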
Please consider fixing this issue.

Dot Product: * Command vs. Loop gives different results

I have two vectors in Matlab, z and beta. Vector z is a 1x17:
1 0.430742139435890 0.257372971229541 0.0965909090909091 0.694329541928697 0 0.394960106863064 0 0.100000000000000 1 0.264704325268675 0.387774594078319 0.269207605609567 0.472226643323253 0.750000000000000 0.513121013402805 0.697062571025173
... and beta is a 17x1:
6.55269487769363e+26
0
0
-56.3867588816768
-2.21310778926413
0
57.0726052009847
0
3.47223691057151e+27
-1.00249317882651e+27
3.38202232046686
1.16425987969027
0.229504956512063
-0.314243264212449
-0.257394312588330
0.498644243389556
-0.852510642195370
I'm dealing with some singularity issues, and I noticed that when I compute the dot product of z and beta, I can get two different results. If I use the * operator, z*beta = 18.5045. If I write a loop to compute the dot product (below), I get 0.7287.
summation = 0;
for i = 1:17
    addition = z(1,i)*beta(i);
    summation = summation + addition;
end
Any idea what's going on here?
Here's a link to the data: https://dl.dropboxusercontent.com/u/16594701/data.zip
The problem here is that addition of floating point numbers is not associative. When summing a sequence of numbers of comparable magnitude, this is not usually a problem. However, in your sequence, most numbers are around 1 or 10, while several entries have magnitude 10^26 or 10^27. Numerical problems are almost unavoidable in this situation.
The wikipedia page http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems shows a worked example where (a + b) + c is not equal to a + (b + c), i.e. demonstrating that the order in which you add up floating point numbers does matter.
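You can reproduce the effect directly in MATLAB with magnitudes like the ones in your beta (these particular values are made up for illustration):
a = 3.47223691057151e+27;   % a huge positive term, like some entries of beta
b = -3.47223691057151e+27;  % a huge negative term of the same size
c = 0.7287;                 % a "normal"-sized contribution
(a + b) + c                 % 0.7287: the huge terms cancel first, so c survives
a + (c + b)                 % 0: c is swallowed by the huge term before the cancellation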
I would guess that this is a homework assignment designed to illustrate these exact issues. If not, I'd ask what the data represents to suss out the appropriate approach. It would probably be much more productive to find out why such large numbers are being produced in the first place than trying to make sense of the dot product that includes them.