I'm getting started with CPLEX and YALMIP. I got an 'Integer infeasible column 'x2'.' error for the code below.
N = 2;
O = binvar(N,N);
F = [];
for i = 1:N
    for j = 1:N
        if i == j
            F = F + (O(i,i) == 0);
        else
            F = F + (O(i,j) + O(j,i) == 1);
        end
    end
end
diagnostics = optimize(F);
if diagnostics.problem == 0
    disp('Feasible');
elseif diagnostics.problem == 1
    disp('Infeasible');
else
    disp('Something else happened');
    disp(diagnostics.problem);
end
I'm not sure what's wrong here. The constraints look feasible to me?
A square matrix declared with binvar(N,N) is symmetric by default, so O(i,j) and O(j,i) are the same variable; your off-diagonal constraint then reads 2*O(i,j) == 1, which no binary variable can satisfy, hence the model is infeasible.
https://yalmip.github.io/tutorial/basics/
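Assuming the pairwise constraints are really meant to relate distinct variables, declaring a full (non-symmetric) binary matrix should make the model feasible, e.g.:
O = binvar(N, N, 'full');   % 'full' avoids the default symmetric parameterization
The rest of the code can stay as it is.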
YALMIP-specific questions are much better addressed on the YALMIP-specific forums:
https://github.com/yalmip/YALMIP/discussions
https://groups.google.com/forum/?fromgroups=#!forum/yalmip
Given the function f(k',w') = ((k'-k)^2 + (w'-w)^2)^(1/2), where k and w are known real parameters, the objective is to find a pair (k',w') such that f(k',w') is minimal under the following constraints:
b(v,s,w') < 10s <=> w'< 10s
b(v,s,w')< a(v,s,k')^2 <=> (w'-10s)-(k'-3)^2/s < 0
q( a(v,s,k'),b(v,s,w')) < s[v^(1/2)]
where b(v,s,w') = (v/s)(w'-10s) and a(v,s,k') = (1/s)(k'-3)v^(1/2). In addition, v (= vari) > 0 and s (= skew) < 0 are known parameters. Furthermore, q(a,b) is a root of the quartic polynomial:
(48a^2 + 16b)x^4 + (-40a^3 - 168ab)x^3 + (-45a^4 + 225a^2b + 72b^2)x^2 + (27a^3b - 162ab^2)x + 27b^3.
To be more precise, whenever the quartic has four real roots, q is the second greatest root; if the quartic has two real and two complex roots, q is the greatest real root. The problem is that the algebraic expressions for q are quite monstrous. Ideally, I would like an analytical solution to the above nonlinear constrained optimization problem, but I suspect that would turn out quite ugly. Therefore, I thought it better to solve it numerically in MATLAB using a constrained optimizer such as fmincon.
f = @(x) sqrt((x(1)-hyp_skew)^2 + (x(2)-kurt)^2);
A = [1, 0];          % linear constraint A*x <= d, i.e. w' <= 10*skew
d = 10*skew;
Aeq = [];
beq = [];
ub = [];
lb = [];
[x, fval, exitflag] = fmincon(f, [w, k], A, d, Aeq, beq, lb, ub, ...
    @(x) quarticcondition2(x, skew, vari), options);
where the nonlinear constraint function is given by
function [c, ceq] = quarticcondition2(x, skew, vari)
% Nonlinear constraints for fmincon: x(1) = w', x(2) = k'.
av = ((x(2)-3)*sqrt(vari))/skew;          % a(v,s,k')
bv = (vari/skew)*(x(1)-10*skew);          % b(v,s,w')
% Normalise the quartic so it is monic: x^4 + A*x^3 + B*x^2 + C*x + D
A = (-40*av^3 - 168*av*bv) / (48*av^2 + 16*bv);
B = (-45*av^4 + 225*av^2*bv + 72*bv^2) / (48*av^2 + 16*bv);
C = (27*av^3*bv - 162*av*bv^2) / (48*av^2 + 16*bv);
D = (27*bv^3) / (48*av^2 + 16*bv);
roots_quartic = roots([1, A, B, C, D]);
z = imag(roots_quartic);
in = find(z ~= 0);
if isempty(in)
    % all four roots are real: take the second root of the ascending sort
    r = sort(roots_quartic);
    c2 = r(2) - skew*sqrt(vari);
else
    % complex roots present: take the largest real root
    index = find(z == 0);
    c2 = max(roots_quartic(index)) - skew*sqrt(vari);
end
c1 = ((x(2)-3)^2/skew) - (x(1)-10*skew);  % second constraint in the form c1 < 0
c = [c1, c2];
ceq = [];
end
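For reference, the root-selection rule described in the question, taken literally, could be written as a small helper. The name pick_q and the 1e-10 tolerance for calling a root real are my own choices, not from the original post:
function q = pick_q(coeffs)
% coeffs: coefficient vector of the (monic) quartic, as passed to roots.
% Assumes at least one real root exists.
r = roots(coeffs);
realr = sort(real(r(abs(imag(r)) < 1e-10)));  % real roots, ascending
if numel(realr) == 4
    q = realr(end-1);   % four real roots: second greatest
else
    q = realr(end);     % otherwise: greatest real root
end
end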
My code works for some initial parameter sets [w,k]. However, finding such an initial parameter set turns out to be quite difficult (since the constraints are hard to handle). I need to run the program for quite a few scenarios, so it would be nice to have some logic for choosing my starting values. I know this is a well-known issue when using optimization solvers, but is there a good/proper way to find start values?
Thanks!
Cheers
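One common tactic is a plain random multistart: sample many candidate starts, discard those that violate the constraints, and run fmincon from the rest, keeping the best result. A rough sketch, assuming skew < 0, vari > 0, hyp_skew, kurt and quarticcondition2 are available; the sampling ranges below are only illustrative guesses, not derived from the problem:
rng(0);                                              % reproducible draws
f = @(x) sqrt((x(1)-hyp_skew)^2 + (x(2)-kurt)^2);
A = [1, 0];  d = 10*skew;                            % linear constraint w' <= 10*skew
opts = optimoptions('fmincon', 'Display', 'off');
best = struct('fval', Inf, 'x', []);
for trial = 1:200
    x0 = [10*skew - 5*rand, 3 + randn];              % candidate [w, k], illustrative ranges
    c = quarticcondition2(x0, skew, vari);
    if any(c >= 0)                                   % keep only strictly feasible starts
        continue
    end
    [x, fval, flag] = fmincon(f, x0, A, d, [], [], [], [], ...
                              @(x) quarticcondition2(x, skew, vari), opts);
    if flag > 0 && fval < best.fval
        best.fval = fval;  best.x = x;
    end
end
The MultiStart class in the Global Optimization Toolbox automates essentially the same idea, if that toolbox is available.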
I'm trying to optimize the performance (e.g. speed) of my code. I'm new to vectorization and my own attempts were unsuccessful (I also tried bsxfun, parfor, various kinds of vectorization, etc.). Can anyone help me optimize this code, with a short description of how to do it?
% for simplicity, create dummy data
Z = rand(250,1);
z1 = rand(100,100);
z2 = rand(100,100);
% update: added the missing parameters, thanks @Bas Swinckels and @Daniel R
j = 2;
n = length(Z);
h = 0.4;
tic
[K1, K2] = size(z1);
result = zeros(K1,K2);
for l = 1:K1
    for m = 1:K2
        result(l,m) = sum(K_h(h, z1(l,m), Z(j+1:n)) .* K_h(h, z2(l,m), Z(1:n-j)));
    end
end
result = result ./ (n-j);
toc
The K_h.m function is the boundary kernel, defined as follows (x is a scalar and y can be a vector):
function res = K_h(h, x, y)
res = 0;
if (x >= 0 && x < h)
    denominator = integral(@kernelFunc, -x./h, 1);
    res = 1./h .* kernelFunc((x-y)/h) / denominator;
elseif (x >= h && x <= 1-h)
    res = 1./h * kernelFunc((x-y)/h);
elseif (x > 1-h && x <= 1)
    denominator = integral(@kernelFunc, -1, (1-x)./h);
    res = 1./h .* kernelFunc((x-y)/h) / denominator;
else
    fprintf('x is out of [0,1]');
    return;
end
end
It takes a long time to obtain the results: Elapsed time is 13.616413 seconds.
Thank you. Any comments are welcome.
P.S.: Sorry for my poor English.
Some observations: it seems that Z(j+1:n) and Z(1:n-j) are constant inside the loop, so do the indexing operation before the loop. Next, the loop is really simple: every result(l, m) depends only on z1(l, m) and z2(l, m). This is an ideal case for arrayfun. A solution might look something like this (untested):
tic
% do constant stuff outside of the loop
Zhigh = Z(j+1:n);
Zlow = Z(1:n-j);
result = arrayfun(@(zz1, zz2) sum(K_h(h, zz1, Zhigh) .* K_h(h, zz2, Zlow)), z1, z2);
result = result ./ (n-j);
toc
I am not sure if this will be a lot faster, since I guess the running time will not be dominated by the for-loops, but by all the work done inside the K_h function.
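If more speed is needed, the kernel evaluations themselves can be batched. A rough sketch, valid only when every value in z1 and z2 lies inside [h, 1-h] (so the boundary corrections in K_h never trigger) and assuming kernelFunc accepts matrix input; on releases older than R2016b, replace the implicit expansion with bsxfun(@minus, ...):
Zhigh = reshape(Z(j+1:n), 1, []);              % 1 x (n-j)
Zlow  = reshape(Z(1:n-j), 1, []);
Kz1 = kernelFunc((z1(:) - Zhigh)/h) / h;       % numel(z1) x (n-j) via implicit expansion
Kz2 = kernelFunc((z2(:) - Zlow)/h)  / h;
result = reshape(sum(Kz1 .* Kz2, 2), size(z1)) / (n-j);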
Hello guys, I am writing a program to compute the determinant (that part I already did) and the inverse of a matrix with GEPP. Here the problem arises, since I have no idea how to invert a matrix using GEPP; I only know how to invert with plain Gaussian elimination ([A|I] => [I|B]). I have searched the internet but still have no clue. Could you please explain it to me?
Here is my MATLAB code (maybe someone will find it useful); as of now it solves AX = b and computes the determinant:
function [det1, X] = gauss_czesciowy(A, b)
%GEPP  Gaussian elimination with partial pivoting: determinant and solution of A*X = b.
perm = 0;                            % number of row swaps (for the determinant sign)
[m, n] = size(A);
if m ~= n || length(b) ~= n
    error('vector has wrong size');
end
for j = 1:n
    % choice of the pivot: largest absolute value in column j
    p = j;
    for i = j:n
        if abs(A(i,j)) >= abs(A(p,j))
            p = i;
        end
    end
    if A(p,j) == 0
        error('Matrix A is singular');
    end
    % row permutation
    t = A(p,:);  A(p,:) = A(j,:);  A(j,:) = t;
    t = b(p);    b(p)   = b(j);    b(j)   = t;
    if p ~= j
        perm = perm + 1;
    end
    % reduction of the rows below the pivot
    for i = j+1:n
        t = A(i,j)/A(j,j);
        A(i,:) = A(i,:) - A(j,:)*t;
        b(i)   = b(i)   - b(j)*t;
    end
end
% determinant: product of the pivots times (-1)^(number of swaps)
det1 = prod(diag(A)) * (-1)^perm;
% solution by back substitution, from the last row upwards
X = zeros(1, n);
X(n) = b(n)/A(n,n);
if det1 ~= 0
    for i = n-1:-1:1
        s = sum(A(i, (i+1):n) .* X((i+1):n));
        X(i) = (b(i) - s) / A(i,i);
    end
end
end
Here is the algorithm for Gaussian elimination with partial pivoting. Basically you do Gaussian elimination as usual, but at each step you exchange rows to pick the largest-valued pivot available.
To get the inverse, you have to keep track of how you are switching rows and create a permutation matrix P. The permutation matrix is just the identity matrix of the same size as your A-matrix, but with the same row switches performed. Then you have:
[A] --> GEPP --> [B] and [P]
[A]^(-1) = [B]*[P]
I would try this on a couple of matrices just to be sure.
EDIT: Rather than empirically testing this, let's reason it out. Basically, switching rows in A is the same as multiplying it on the left by your permutation matrix P. You could just do this before starting GE and end up with the same result, which would be:
[P*A|I] --> GE --> [I|B] or
(P*A)^(-1) = B
Due to the properties of the inverse operation, this can be rewritten:
A^(-1) * P^(-1) = B
And you can multiply both sides by P on the right to get:
A^(-1) * P^(-1)*P = B*P
A^(-1) * I = B*P
A^(-1) = B*P
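For concreteness, here is a rough sketch of the [A|I] route as a separate function (the name gepp_inverse is my own, not the asker's code). The row swaps made while pivoting are exactly where P comes from; here they are simply applied to the identity columns carried along in the augmented matrix:
function B = gepp_inverse(A)
% Invert A by GEPP on the augmented matrix [A | I] (illustrative sketch).
n = size(A, 1);
M = [A, eye(n)];                      % augment with the identity
for j = 1:n
    [~, p] = max(abs(M(j:n, j)));     % partial pivoting in column j
    p = p + j - 1;
    M([j p], :) = M([p j], :);        % the row swap that builds up P
    for i = j+1:n
        M(i, :) = M(i, :) - (M(i,j)/M(j,j)) * M(j, :);
    end
end
% back substitution, one right-hand side (identity column) at a time
B = zeros(n);
for k = 1:n
    for i = n:-1:1
        B(i, k) = (M(i, n+k) - M(i, i+1:n)*B(i+1:n, k)) / M(i, i);
    end
end
end
A quick sanity check: for A = rand(5) + 5*eye(5), norm(gepp_inverse(A)*A - eye(5)) should come out near machine precision.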
I am studying for a cumulative exam I have tomorrow, and I got the following question wrong on a previous exam. I was hoping someone could explain it to me. What does (~m) mean?
The question says:
After executing the following script, what is the value of m?
a=1; b=2; m=0;
if (~m)
m = m+1;
if (a-b > 0)
m = m+1;
else
m = m -1;
end
elseif (m > 1)
m = m + 2;
else
m = m - 2;
end
The correct answer is 0, but why? I would have guessed that m = -2
The ~ means NOT. However, numeric values are all considered TRUE unless they are identically equal to 0.
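For example, at the MATLAB prompt:
~0     % logical 1 (true), because 0 counts as false
~5     % logical 0 (false), because any nonzero value counts as true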
So, the commands which are actually executed by this logic are:
m = m + 1; % following if (~m)
m = m - 1; % following else
Also, there is a nested if statement in the code; it will be easier to read if you use multiple levels of indentation.
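For example, the same script with the nesting indented and the executed branches marked:
a = 1; b = 2; m = 0;
if (~m)            % m is 0, so ~m is true
    m = m + 1;     % executed: m becomes 1
    if (a-b > 0)   % a-b is -1, so false
        m = m + 1;
    else
        m = m - 1; % executed: m goes back to 0
    end
elseif (m > 1)     % never reached
    m = m + 2;
else
    m = m - 2;
end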
I've searched a lot but didn't find any solution to my problem. Could you please help me vectorize (or otherwise speed up) these loops?
% n is the size of C
h = 1/(n-1);
dt = 1e-6;
a = 1e-2;
F = zeros(n,n);
F2 = zeros(n,n);
C2 = zeros(n,n);
t = 0.0;
for iter = 1:12000
    F2 = F.^3 - F;
    for i = 1:n
        for j = 1:n
            F2(i,j) = F2(i,j) - (C(ij(i-1),j) + C(ij(i+1),j) + C(i,ij(j-1)) + C(i,ij(j+1)) - 4*C(i,j)).*(a.^2)./(h.^2);
        end
    end
    F = F2;
    for i = 1:n
        for j = 1:n
            C2(i,j) = C(i,j) + (F(ij(i-1),j) + F(ij(i+1),j) + F(i,ij(j-1)) + F(i,ij(j+1)) - 4*F(i,j)).*dt./(h^2);
        end
    end
    C = C2;
    t = t + dt;
end

function i = ij(i) % wrap the index so the grid is periodic: index 0 maps to n, index n+1 maps to 1
if i == 0
    i = n;
elseif i == n+1
    i = 1;
end
end
Thanks a lot!
EDIT: Found an answer; it was totally ridiculous and I was searching way too far.
% n is still the size of C
h = 1/(n-1);
dt = 1e-6;
a = 1e-2;
F = zeros(n,n);
var1 = (a^2)/(h^2);   % precomputed to avoid repeating the division
var2 = dt/(h^2);      % likewise
t = 0.0;
for iter = 1:12000
    F = C.^3 - C - var1*(C([n 1:n-1],1:n) + C([2:n 1],1:n) + C(1:n,[n 1:n-1]) + C(1:n,[2:n 1]) - 4*C);
    C = C + var2*(F([n 1:n-1],1:n) + F([2:n 1],1:n) + F(1:n,[n 1:n-1]) + F(1:n,[2:n 1]) - 4*F);
    t = t + dt;
end
Found an answer; it was ridiculously simple and I was searching way too far. Here it is again with the index vectors precomputed:
% n is still the size of C
h = 1/(n-1);
dt = 1e-6;
a = 1e-2;
F = zeros(n,n);
var1 = (a^2)/(h^2);   % precomputed to avoid repeating the division
var2 = dt/(h^2);      % likewise
prev = [n 1:n-1];     % periodic "previous index"
next = [2:n 1];       % periodic "next index"
t = 0.0;
for iter = 1:12000
    F = C.*C.*C - C - var1*(C(:,next) + C(:,prev) + C(next,:) + C(prev,:) - 4*C);
    C = C + var2*(F(:,next) + F(:,prev) + F(next,:) + F(prev,:) - 4*F);
    t = t + dt;
end
The behavior of the inner loop looks like a 2-dimensional circular convolution. That's the same as multiplication in the FFT domain, and since the FFT is linear, the subtraction passes through unchanged.
You'll want to use the fft2 and ifft2 functions.
Once you do that, I think you'll find that the repeated convolution can be eliminated by raising the convolution kernel (element-wise) to the power iter. If that optimization is correct, I'm predicting a speedup of 5 orders of magnitude.
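As a rough sketch of the convolution view (the kernel layout below is my own illustration; note the F.^3 term in the update is nonlinear, so this covers only the five-point stencil part):
kernel = zeros(n);                 % periodic 5-point Laplacian stencil
kernel(1,1) = -4;
kernel(2,1) = 1;  kernel(n,1) = 1;
kernel(1,2) = 1;  kernel(1,n) = 1;
Khat = fft2(kernel);               % can be computed once, outside the time loop
lapC = real(ifft2(fft2(C) .* Khat));
% lapC equals C(prev,:) + C(next,:) + C(:,prev) + C(:,next) - 4*C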
You can replace, for example, C(ij(i-1),j) by using circshift(C,[1,0]) or circshift(C,[-1,0]) (I can't figure out which of the two is correct).
http://www.mathworks.com/help/matlab/ref/circshift.htm
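A quick way to check which shift is which:
M = magic(4);  n = 4;
isequal(M([n 1:n-1], :), circshift(M, [1, 0]))    % true: element (i,j) is M(i-1,j), the "previous row"
isequal(M([2:n 1], :),  circshift(M, [-1, 0]))    % true: element (i,j) is M(i+1,j), the "next row"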