Linear programming - MATLAB

I have an 8x4 matrix 'A' and an 8x1 matrix 'B'. How do I check whether there exists a 4x1 matrix 'X' such that A * X = B?
This can be done using linprog in MATLAB, but I'm not sure how to give the constraints. I tried x = linprog([],[],[],A,B);, but this doesn't seem to work.
How do I specify the condition X >= 0 and minimize the residual A*X - B so that, if the minimum is 0, we know such an X exists?
Update:
pinv in MATLAB doesn't work in all cases. Consider the following example:
A= [1 0 0 0
0 1 -1 -1
-1 -1 1 -1
-1 -1 -1 1
0 0 0 0
0 0 0 0
0 0 0 0
1 1 1 1]
B = [0
0
0
-1
0
0
0
1]
Using pinv gives the value of X as:
X = [-2.7756e-017
0.5000
0.5000
0]
but when linear programming is used I get x as:
X = [ 0
0.5000
0.5000
0]
This is why I preferred the linprog tool in MATLAB. I used it the way I mentioned previously, but it throws a lot of warnings, so I still think there is a better way to call this function. It did not warn for this matrix, but in general, when I loop through a lot of matrices, my command window overflows with warnings.

Why use linear programming? You can just solve the system A*x=B directly:
A =[ 1 -1 -1 -1 1 0 0 1
-1 1 -1 0 0 1 0 0
-1 -1 1 0 1 0 0 0
-1 -1 -1 1 1 1 0 0]';
B = [-1 -1 0 0 0 0 1 1]';
x = A\B
x =
0.16327
0.097959
0.46531
0.11837
The problem you may face is that A can be rank deficient, but in that case, you'll get infinitely many solutions for x.

But why use a tool that will do MORE work than necessary to solve the problem? Just use the pseudo-inverse; if A is of full rank, then backslash is entirely sufficient.
Compute the solution. If the norm of your residuals is less than some tolerance, then you have a solution. Note that essentially no solution is ever assured to give you truly zero residuals, so you must apply a tolerance. Thus
x = A\B;
if norm(B - A*x) < tol
    disp('Eureka!')
end
Or use x=pinv(A)*B if you are worried about the rank of A.
Trying to throw linprog at the problem will surely not be more efficient than the direct solution itself.
Edit: Since non-negativity of the result has now been added as a requirement, use lsqnonneg instead. Just compare the norm of the residual vector to a tolerance. If the norm is too large, then no solution exists.
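For example, a minimal sketch of that check (the tolerance value is an assumption; pick one appropriate to the scale of your problem):
% Solve min ||A*x - B|| subject to x >= 0
[x, resnorm] = lsqnonneg(A, B);    % resnorm is ||A*x - B||^2

tol = 1e-10;                       % assumed tolerance, problem dependent
if sqrt(resnorm) < tol
    disp('A non-negative X with A*X = B exists (to within tol).')
else
    disp('No non-negative solution exists.')
end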

Unfortunately, you cannot use array division here; it is not the same as matrix division. You could instead multiply matrix B by the inverse of matrix A to get x, i.e. x = inv(A)*B, but an inverse does not exist for a non-square matrix like your 8x4 A. Hence, you might not have been able to make that work with linprog.

Related

How can I create a modified identity matrix?

I have an identity matrix in MATLAB which is used in some regression analysis for joint hypothesis tests. However, when I change the linear restrictions for my tests, I can no longer rely on the identity matrix.
To give a simple example, here is some code which produces an identity matrix depending on the value of y:
for i = [1, 2, 4]
y = i
x = 5;
H = eye(y*x)
end
However, what I need is not the identity matrix, but a matrix whose first two rows match the identity and whose remaining rows are all zero.
For the first example, the code produces an eye(5):
H =
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
I need something that, for a given y, does not produce the identity but instead produces:
H =
1 0 0 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
Can I adjust the identity matrix to include zeroes only after the first two rows?
I think the simplest solution is to make a matrix of all zeroes and then just place the two ones by linear indexing:
H = zeros(x*y);
H([1 x*y+2]) = 1;
Generalizing the above to putting the first N ones along the diagonal:
H = zeros(x*y);
H(x*y.*(0:(N-1))+(1:N)) = 1;
As suggested in this comment you can use diag:
diag([ones(2,1); zeros(x*y-2,1)])
This works because diag makes a vector become the main diagonal of a square matrix, so you can simply feed it the diagonal vector, which in your case would be two 1s followed by 0s.
Of course, if you need a variable number of 1s (which is what the comment was asking about), you can do:
n=2;
diag([ones(n,1); zeros(x*y-n,1)])
Here are some alternatives:
Use blkdiag to diagonally concatenate an identity matrix and a zero matrix:
y = 5; x = 2;
H = blkdiag(eye(x), zeros(y-x));
A more exotic approach is to use element-wise comparisons with singleton expansion and exploit the fact that two NaNs are not equal to each other:
y = 5; x = 2;
H = [1:x NaN(1,y-x)];
H = double(bsxfun(@eq, H, H.'))

How does Y = eye(K)(y, :); replace a "for" loop? Coursera

I'm working on an assignment from the Coursera Machine Learning course, and I'm curious how this works. From an example, this much simpler code:
% K is the number of classes.
K = num_labels;
Y = eye(K)(y, :);
seems to be a substitute for the following:
I = eye(num_labels);
Y = zeros(m, num_labels);
for i=1:m
Y(i, :)= I(y(i), :);
end
and I have no idea how. I'm having some difficulty Googling this info as well.
Thanks!
Your variable y in this case must be an m-element vector containing integers in the range of 1 to num_labels. The goal of the code is to create a matrix Y that is m-by-num_labels where each row k will contain all zeros except for a 1 in column y(k).
A way to generate Y is to first create an identity matrix using the function eye. This is a square matrix of all zeroes except for ones along the main diagonal. Row k of the identity matrix will therefore have one non-zero element in column k. We can therefore build matrix Y out of rows indexed from the identity matrix, using y as the row index. We could do this with a for loop (as in your second code sample), but that's not as simple and efficient as using a single indexing operation (as in your first code sample).
Let's look at an example (in MATLAB):
>> num_labels = 5;
>> y = [2 3 3 1 5 4 4 4]; % The columns where the ones will be for each row
>> I = eye(num_labels)
I =
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
>> Y = I(y, :)
Y =
% 1 in column ...
0 1 0 0 0 % 2
0 0 1 0 0 % 3
0 0 1 0 0 % 3
1 0 0 0 0 % 1
0 0 0 0 1 % 5
0 0 0 1 0 % 4
0 0 0 1 0 % 4
0 0 0 1 0 % 4
NOTE: Octave allows you to index function return arguments without first placing them in a variable, but MATLAB does not (at least, not very easily). Therefore, the syntax:
Y = eye(num_labels)(y, :);
only works in Octave. In MATLAB, you have to do it as in my example above, or use one of the other options here.
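If you want something that works unchanged in both MATLAB and Octave, one illustrative alternative (my own sketch, not necessarily one of the linked options) builds the indicator matrix with sparse:
% y is assumed to be a vector of integers in the range 1..num_labels
m = numel(y);
Y = full(sparse(1:m, y(:).', 1, m, num_labels));   % row i gets a 1 in column y(i)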
The first set of code is Octave, which has some additional indexing functionality that MATLAB does not have. The second set of code is how the operation would be performed in MATLAB.
In both cases Y is a matrix generated by re-arranging the rows of an identity matrix. In both cases it may also be possible to calculate Y = T*y for a suitable linear transformation matrix T.
(The above assumes that y is a vector of integers that are being used as an indexing variables for the rows. If that's not the case then the code most likely throws an error.)

Proof of eigenvector and eigenvalues Matlab

I am finding the eigenvectors and eigenvalues of a matrix, and then I need to verify that Ax = λx, where λ is the eigenvalue. Here is my code:
A = [1 1 -1;1 0 -2; 0 0 -1]
[evecs,evals]=eig(A)
for i = 1:3
    A*evecs(:,i) == evals(i,i)*evecs(:,i)
end
Here is my output:
A =
1 1 -1
1 0 -2
0 0 -1
evecs =
0.8507 -0.5257 -0.3015
0.5257 0.8507 0.9045
0 0 0.3015
evals =
1.6180 0 0
0 -0.6180 0
0 0 -1.0000
ans =
0
0
1
ans =
0
1
1
ans =
0
0
1
Why are the ans vectors not all equal to 1, as they should be (in order to verify Ax = λx)?
The calculations of your eigensolver are performed using finite-precision floating-point arithmetic. The true eigenvalues and eigenvectors are not even exactly representable in finite floating-point data types.
Check for equality against a small tolerance to allow for this. That is, check that Ax - λx is small in absolute value.
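For example, a minimal sketch of that check (the tolerance here is an assumed value; adjust it to your needs):
A = [1 1 -1; 1 0 -2; 0 0 -1];
[evecs, evals] = eig(A);

tol = 1e-12;                                % assumed tolerance
for i = 1:3
    residual = norm(A*evecs(:,i) - evals(i,i)*evecs(:,i));
    fprintf('Eigenpair %d: residual = %g, ok = %d\n', i, residual, residual < tol);
end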
Required reading is What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Tridiagonal Matrix in Matlab

Does anyone know a quick, efficient way in MATLAB to build the following square matrix?
1 -1 0 0 0 0
-1 2 -1 0 0 0
0 -1 2 -1 0 0
0 0 -1 2 -1 0
0 0 0 -1 2 -1
0 0 0 0 -1 1
It has 2s on the diagonal, except for the first and last diagonal elements (which are 1), and -1 on the two adjacent diagonals.
This was a 6x6 example, and I'd like to generate an nxn version in MATLAB as quickly and efficiently as possible. I tried the built-in function kron but couldn't get it to work.
Thanks a lot
Here is one option
function a = laplacianMatrix(n)
a = toeplitz([2,-1,zeros(1,n-2)]);
a([1,end]) = 1;
end
Whether this version or the version using gallery (see Sam Roberts' answer) is faster seems to depend on the size of the matrix. For small matrices (up to around n = 200 on my machine) it is faster to use toeplitz. For larger matrices it is faster to use gallery.
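If you want to find the crossover point on your own machine, a rough sketch using timeit (timing only the core construction; the two endpoint assignments are negligible) might look like this:
n = 500;   % size to test; the crossover is machine dependent

t_toeplitz = timeit(@() toeplitz([2, -1, zeros(1, n-2)]))
t_gallery  = timeit(@() full(gallery('tridiag', n, -1, 2, -1)))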
How about this:
function mymatrix = makemymatrix(n)
mymatrix = full(gallery('tridiag',n,-1,2,-1));
mymatrix([1,end]) = 1;
Does that work for you?
result = conv2(eye(6), [-1 2 -1],'same');
result([1 end]) = 1;
n=6;
B=zeros(n);
for i=1:n
    for j=1:n
        B(1,1)=1;
        if i==j
            B(i,j)=2;
        elseif i==j+1
            B(i,j)=-1;
        elseif j==i+1
            B(i,j)=-1;
        else
            B(i,j)=0;
        end
        B(n,n)=1;
    end
end
B

Solving matrices of the form Ax = B ==> error: Matrix is close to singular or badly scaled

I'm having trouble solving a system of the form Ax=B
The solution to the system should be
x = inv(A)*B
However, this doesn't work.
I get the following error message when I try the above line of code:
Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 1.156482e-018.
It seems that MATLAB is having trouble inverting the matrix that I've specified. I tried to verify that the inverse function was working properly by typing in inv(A)*A.
This should give the identity matrix; however, I got the same warning and some garbage numbers.
This is the A matrix that I'm using:
A = [5/2 1/2 -1 0 0 -1/2 -1/2 0 0
1/2 1/2 0 0 0 -1/2 -1/2 0 0
-1 0 5/2 -1/2 -1 0 0 -1/2 1/2
0 0 -1/2 1/2 0 0 0 1/2 -1/2
0 0 -1 0 3/2 -1/2 1/2 0 0
-1/2 -1/2 0 0 -1/2 2 0 -1 0
-1/2 -1/2 0 0 1/2 0 1 0 0
0 0 -1/2 1/2 0 -1 0 2 0
0 0 1/2 -1/2 0 0 0 0 1]
Any ideas as to why this isn't working? I also tried to convert A to a sparse matrix (sparse(A)), and then run the inverse command. No dice.
The problem is indeed in your mathematics. The matrix you provided isn't of full rank, so it isn't invertible.
You could verify that manually (haven't taken the time to do so), but MATLAB already points this out by showing that warning.
Since you are working with floating-point numbers, this sometimes causes other subtle problems, one of which you can see in the result of det(A), which is on the order of 1e-16, i.e. machine precision, or 0 in practice.
You can see that this Matrix is not of full rank by executing the rank function: rank(A) = 8. For a 9x9 matrix, this indeed means that the matrix is not invertible for doubles (as the rank function accounts for machine precision).
If you want to use MATLAB to get a result that corresponds to a manual calculation, you can use the Symbolic Toolbox and its vpa (variable precision arithmetic) to work around possible numerical problems at the cost of a slower calculation.
B = [5 1 -2 0 0 -1 -1 0 0;
1 1 0 0 0 -1 -1 0 0;
-2 0 5 -1 -2 0 0 -1 1;
0 0 -1 1 0 0 0 1 -1;
0 0 -2 0 3 -1 1 0 0;
-1 -1 0 0 -1 4 0 -2 0;
-1 -1 0 0 1 0 2 0 0;
0 0 -1 1 0 -2 0 4 0;
0 0 1 -1 0 0 0 0 2];
A = B/2;
size(A) % = [9 9]
det(A) % = -1.38777878078145e-17
rank(A) % = 8
C = vpa(A);
det(C) % = 0.0
rank(C) % = 8
With both VPA and floating point you get that the rank is 8, the size is [9 9], and the determinant is practically 0, i.e. the matrix is singular (not invertible). Changing a few entries might make your matrix regular (non-singular), but it is not guaranteed to work and it would solve a different problem.
To solve your actual problem A*x=b for x, you can try to use mldivide (a.k.a. the backslash operator) or a Moore-Penrose pseudo-inverse:
x1 = A\b;
x2 = pinv(A)*b;
But do remember that such a system does not have a unique solution, so the pseudo-inverse and the backslash operator may (and in this case will) return very different solutions; whether either of them is acceptable really depends on your application.
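As an illustration, you can compare the two candidate solutions and their residuals (the right-hand side b below is an assumption purely for demonstration; substitute your own):
b = ones(9,1);            % assumed right-hand side, for illustration only

x1 = A\b;                 % will warn, since A is singular
x2 = pinv(A)*b;           % minimum-norm least-squares solution

norm(A*x1 - b)            % residual of the backslash solution
norm(A*x2 - b)            % residual of the pseudo-inverse solution
norm(x1 - x2)             % the two solutions themselves can differ substantially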
It means exactly what it says. The matrix is singular, which means it can't really be inverted. Not all matrices can.
In geometrical terms, you have a matrix that transforms one 9-dimensional object into another, but flattens one dimension out completely. That can't be undone; there's no way to tell how far to pull things out in that direction.
The matrix is singular; consider B = 2*A below:
B = [5 1 -2 0 0 -1 -1 0 0;
1 1 0 0 0 -1 -1 0 0;
-2 0 5 -1 -2 0 0 -1 1;
0 0 -1 1 0 0 0 1 -1;
0 0 -2 0 3 -1 1 0 0;
-1 -1 0 0 -1 4 0 -2 0;
-1 -1 0 0 1 0 2 0 0;
0 0 -1 1 0 -2 0 4 0;
0 0 1 -1 0 0 0 0 2]
det(B)
0
bicgstab(A,b,tol,maxit), an iterative solver, was able to solve a linear system A*x = b with a singular matrix A:
size(A)=[162, 162]
rank(A)=14
cond(A)=4.1813e+132
I used:
tol=1e-10;
maxit=100;
None of the above-mentioned methods (including svd, \, inv, pinv, and gmres) worked for me, but bicgstab did a good job: it converged at iteration 4 to a solution with relative residual 1.1e-11. It also works fast for sparse matrices.
See documentation here: https://uk.mathworks.com/help/matlab/ref/bicgstab.html
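A sketch of that call (b is whatever right-hand side you are solving for; tol and maxit as above):
tol = 1e-10;                  % relative residual tolerance
maxit = 100;                  % maximum number of iterations

[x, flag, relres, iter] = bicgstab(A, b, tol, maxit);

if flag == 0
    fprintf('Converged at iteration %g with relative residual %g\n', iter, relres);
else
    fprintf('Did not converge (flag = %d); relative residual %g\n', flag, relres);
end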