Solving vector equations in Matlab

How can I solve a system of this kind in matlab?:
a + b + c = 0
a + d + e = 0
...etc
where each of these symbolic variables (previously defined with "syms") is in R^3,
for example:
a = [0 R1 R2];
b = [0 R3*cos(alpha) R3*sin(alpha)];
...etc (R1, R2, R3, ... are also symbolic variables, but one-dimensional, i.e. scalars)
I searched a lot, but everyone solves systems of scalars, not of vectors.
In this case every vector equation represents 3 scalar equations.
I know I could reformat the whole system into a matrix and solve Ax=0, but that would be a lot of work since there are 39 equations.
Thank you very much
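For what it's worth, the Symbolic Math Toolbox expands a vector equation into its scalar components automatically, so equationsToMatrix can do the Ax=0 bookkeeping for you. A minimal sketch with made-up vectors (the names and the third vector c are purely illustrative):
syms R1 R2 R3 R4 alpha real
a = [0, R1, R2];
b = [0, R3*cos(alpha), R3*sin(alpha)];
c = [R4, -R1, -R2];                                % illustrative third vector
eqs = (a + b + c == 0);                            % one vector equation = 3 scalar equations
[A, rhs] = equationsToMatrix(eqs, [R1, R2, R3, R4]);
x = null(A);                                       % basis of the non-trivial solutions of A*x = 0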

Related

Solving a homogeneous underdetermined system of symbolic linear equations

Let's say I have 2 symbolic equations in 3 variables:
syms u v w
eq1 = u+v+w == 0
eq2 = w == 0
which both should equal 0.
Is there a way to feed these equations to Matlab and have Matlab conclude:
u=-v
w=0
I tried the following:
%Attempt 1:
x=solve([eq1 eq2],[u v w]);
x.u, x.v, x.w
%Outputs 0 for each of these
% Attempt 2:
[A,B]=equationsToMatrix([eq1 eq2],[u v w]);
linsolve(A,B)
%Outputs 0 for all variables and gives a warning "Warning: The system is rank-deficient. Solution is not unique."
So it only seems to return the trivial zero-solution. This is of course an elementary example. I want it to work for 81 intertwined variables.
Since you have two equations, you can only solve for two variables, not three. You want to see u=-v and w=0, which is a solution for u and w but not for v.
For me x = solve([eq1,eq2],u,w) works; it gives x.u=-v and x.w=0.
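Put together, a minimal runnable version of that suggestion (same eq1 and eq2 as above) is:
syms u v w
eq1 = u + v + w == 0;
eq2 = w == 0;
x = solve([eq1, eq2], [u, w]);   % solve for two of the unknowns; v stays a free parameter
x.u                              % returns -v
x.w                              % returns 0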

Matlab integral over function of symbolic matrix

In an attempt to speed up for loops (or eliminate them altogether), I've been trying to pass matrices into functions. I also have to use sine and cosine. However, when I attempt to find the integral of a matrix whose elements are composed of sines and cosines, it doesn't work, and I can't seem to find a way to make it do so.
I have a matrix SI composed of sines and cosines of a variable that I have defined using the Symbolic Math Toolbox. Ideally, I could just pass the SI matrix and receive back a matrix whose entries are the integrals of the sine/cosine expressions at every location, i.e. a square matrix of the same size. I am not sure I phrased that well, but here is the code I have started with.
I = [1 2; 3 4];
J = [5 6; 7 8];
syms o;
j = o*J;
SI = sin(I + j);
%SI(1,1) = sin(5*o + 1)
integral(@(o) o.*SI(1,1), 0, 1);
Ideally, I would want to solve integral(@(o) o*SI, 0, 1) and get a matrix of values. What should I do here?
Given that A, B and C are all N x N matrices, let's assume for the moment that they're all 2 x 2 to keep the example succinct. Let's also define o as a symbolic variable, based on your comments in the question above.
syms o;
A = [1 2; 3 4];
B = [5 6; 7 8];
C = [9 10; 11 12];
Let's also define your function f according to your comments:
f = o*sin(A + o*B + C)
We thus get:
f =
[ o*sin(5*o + 10), o*sin(6*o + 12)]
[ o*sin(7*o + 14), o*sin(8*o + 16)]
Remember, for each element in f we take the corresponding elements of A, B and C and add them together. For the first row and first column, those entries are 1, 5 and 9, so A + o*B + C for that position equates to: 1 + 5*o + 9 = 5*o + 10.
Now if you want to integrate, just use the int command. This finds the exact integral, provided it is solvable in closed form. int can also handle matrices, so it will integrate each element of the matrix. You can call it like so:
out = int(f,a,b);
This integrates each element of f from the lower bound a to the upper bound b. Supposing our limits are from 0 to 1 as you said:
out = int(f,0,1);
We thus get:
out =
[ sin(15)/25 - sin(10)/25 - cos(15)/5, sin(18)/36 - sin(12)/36 - cos(18)/6]
[ sin(21)/49 - sin(14)/49 - cos(21)/7, sin(24)/64 - sin(16)/64 - cos(24)/8]
Bear in mind that out is still a symbolic object. If you want the actual numerical values, you need to cast the answer to double. Therefore:
finalOut = double(out);
We thus get:
finalOut =
0.1997 -0.1160
0.0751 -0.0627
Obviously, this can generalize for any size M x N matrices, so long as they all share the same dimensions.
Caveat
sin, cos, tan and the other related functions expect their arguments in radians. If you wish for the degrees equivalent, append a d to the function name (e.g. sind, cosd, tand).
I believe this is the answer you're after. Good luck!
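If you would rather stick with integral as in the original attempt, one possible sketch (reusing the same f and o from above) is to convert the symbolic matrix into a numeric function handle and let integral work element-wise:
fh = matlabFunction(f, 'Vars', o);                 % fh(o) returns a 2 x 2 double matrix
outNum = integral(fh, 0, 1, 'ArrayValued', true);  % numeric quadrature of every element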

Best way to obtain one answer that satisfies a linear equation in Matlab

I have a linear equation:
vt = v1*x1 + v2*x2 + v3*x3
vt, v1, v2, v3 are scalars with values between 0 and 1. What is the best way to generate one set (any set will be fine) of x1, x2 and x3 that satisfies the equation above and also satisfies
x1>0
x2>0
x3>0
I have couple thousand sets of vt,v1,v2 and v3, therefore I need to be able to generate x1, x2 and x3 programmatically.
There are two ways you could approach this:
Use the method that you devised in your post: randomly generate x1 and x2, make sure that v1*x1 + v2*x2 < vt (so that the slack left for x3 is positive), then solve for x3 = (vt - v1*x1 - v2*x2)/v3. A short sketch of this appears at the end of this answer.
Formulate this as a linear program. A linear program is basically solving a system of equations subject to inequality or equality constraints. In other words, in canonical form:
maximize    c'*x
subject to  A*x <= b,  x >= 0
As such, we can translate your problem into a linear programming problem. The "maximize" statement is what is known as the objective function - the overall goal of what you are trying to accomplish. In linear programming problems, we are trying to minimize or maximize this objective. To do this, we must satisfy the inequalities in the "subject to" condition. Usually, the program is represented in canonical form, so the constraints on each variable should be positive.
The objective can be arbitrary since you don't care about optimality - you just want any feasible solution. This whole paradigm can be handled by linprog in MATLAB. What you should be careful with is how linprog is specified: its objective is minimized instead of maximized. The constraints, however, are handled the same way, with the exception of ensuring that all of the variables are positive. We will have to code that in ourselves.
For the arbitrary objective, we can simply use x1 + x2 + x3, so c = [1 1 1]. Our equality constraint is v1*x1 + v2*x2 + v3*x3 = vt. We also must make sure that x is positive. linprog does not support strict inequalities (i.e. x > 0), so the usual trick is to require each x to be at least some small constant. Also, linprog expects inequality constraints in the form A*x <= b, so we negate x >= 0 into -x <= 0; to keep the values away from zero we would actually use -x <= -eps, where eps is a small constant. However, when I experimented with that, two of the variables ended up with the same value. What I recommend instead, so that we get a good, different solution each time, is to draw the lower bounds from a uniform random distribution, as you said. This gives us a fresh starting point every time we solve the problem.
Therefore, our inequalities are:
-x1 <= -rand1
-x2 <= -rand2
-x3 <= -rand3
rand1, rand2, rand3 are three randomly generated numbers that are between 0 and 1. In matrix form, this is:
[-1  0  0] [x1]    [-rand1]
[ 0 -1  0] [x2] <= [-rand2]
[ 0  0 -1] [x3]    [-rand3]
Finally, our equality constraint from before is:
           [x1]
[v1 v2 v3] [x2] = [vt]
           [x3]
Now, to use linprog, you would do this:
X = linprog(c, A, b, Aeq, beq);
c is the coefficient vector for the objective - in this case [1 1 1]. A and b are the matrix and column vector for the inequality constraints, and Aeq and beq are the matrix and column vector for the equality constraints. X gives us the solution (i.e. x1, x2, x3) once linprog converges. As such, you would do this:
A = -eye(3,3);
b = -rand(3,1);
Aeq = [v1 v2 v3];
beq = vt;
c = [1 1 1];
X = linprog(c, A, b, Aeq, beq);
As an example, suppose v1 = 0.33, v2 = 0.5, v3 = 0.2, and vt = 2.5. Therefore:
rng(123); %// Set seed for reproducibility
v1 = 0.33; v2 = 0.5; v3 = 0.2;
vt = 2.5;
A = -eye(3,3);
b = -rand(3,1);
Aeq = [v1 v2 v3];
beq = vt;
c = [1 1 1];
X = linprog(c, A, b, Aeq, beq);
I get:
X =
0.6964
4.4495
0.2268
To verify that this equals vt, we would do:
s = Aeq*X
s = 2.5000
The above simply computes v1*x1 + v2*x2 + v3*x3 as a dot product: X is a column vector, and v1, v2, v3 are already stored in Aeq, which is a row vector.
As such, either way is good, but at least with linprog, you don't have to keep looping until you get that condition to be satisfied!
Small Caveat
One small caveat that I forgot to mention in the above approach is that you need to make sure that vt >= v1*rand1 + v2*rand2 + v3*rand3 to ensure convergence. Since you said that v1,v2,v3 are bounded between 0 and 1, the worst case is when v1,v2,v3 are all equal to 1. As such, we really need to make sure that vt > rand1 + rand2 + rand3. If this is not the case, then simply take each value of rand1, rand2, rand3, and divide by (rand1 + rand2 + rand3) / vt. As such, this will ensure that the total summation will equal vt assuming that all of the weights are 1, and this will allow the linear program to converge properly.
If you don't, the solution will not converge because of the inequality conditions placed on b, and you won't get the right answer. Just some food for thought! As such, do this for b before you run linprog:
if sum(-b) > vt
    b = b ./ (sum(-b) / vt);
end
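And if you go with the first, purely random approach from the top of this answer, a minimal sketch (assuming vt > 0; variable names are illustrative) could be:
x1 = rand; x2 = rand;
while v1*x1 + v2*x2 >= vt          % resample until positive slack remains for x3
    x1 = rand; x2 = rand;
end
x3 = (vt - v1*x1 - v2*x2) / v3;    % positive by construction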
Good luck!

What does this prime symbol do - MATLAB?

I am working with some Matlab code I inherited from another person, and I don't understand the meaning of the line q = [q; qi']. I feel like I should be able to just remove it, so that q = distribuc...
function [ q ] = ObtainHistogramForEachTarget( state, numberOfTargets, image, q )
    for i = 1:numberOfTargets
        qi = distribucion_color_bin_RGB2(state(i).xPosition, state(i).yPosition, state(i).size, image, 2);
        q = [q; qi'];
    end
end
Can anyone explain this to me?
MATLAB has several built-in functions to manipulate matrices. The special prime character, ', denotes the transpose of a matrix.
The statement A = [ 1 2 3;4 5 6;7 8 9]' produces the matrix
A =
1 4 7
2 5 8
3 6 9
hope this helps
From Matlab's help
help ctranspose
' Complex conjugate transpose.
X' is the complex conjugate transpose of X.
B = ctranspose(A) is called for the syntax A' (complex conjugate
transpose) when A is an object.
The [X ; Y] syntax concatenates two matrices vertically. So that line is adding the just-computed results to the already computed q. If you simply reassigned q, you would be discarding all the computations the function had already done each time through the loop.
The forward apostrophe ' does a complex conjugate and transposes a matrix. I would guess that distribucion_color_bin_RGB2 probably returns a real-valued column vector, and the author wanted to flip it to horizontal before appending it to the results matrix.
As @ja72 pointed out, it is better style to use .' (plain transpose) by default and ' only when you actually mean the complex conjugate, even if you expect your data to be real.
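A quick sketch of the difference, using an arbitrary complex matrix:
A = [1+2i, 3; 4, 5-1i];
A'                        % ctranspose: conjugates every element, then transposes
A.'                       % transpose: flips the matrix without conjugating
isequal(A', conj(A.'))    % returns logical 1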
Usually A' is used as the transpose of a matrix A, but it is really the conjugate transpose: it behaves like a plain transpose for real matrices, but not for complex ones.
transpose(A) is the true transpose function; it works for both real and complex matrices.
I used to always write A' because it is shorter, but I changed that habit after it caused a bug in an FFT computation.
I came across the same question and tested it in Octave (a MATLAB-like environment on Ubuntu); for a single complex number a, a' gives its conjugate.
octave:2> a = 1 + 1j
a = 1 + 1i
octave:3> a'
ans = 1 - 1i
And for a complex matrix A:
octave:6> A = [1 + 2j 1 - 2j ; 2 - 1j 2 + 1j]
A =
1 + 2i 1 - 2i
2 - 1i 2 + 1i
octave:7> A'
ans =
1 - 2i 2 + 1i
1 + 2i 2 - 1i

Creating and manipulating three dimensional matrices in Matlab

I'm desperately trying to avoid a for loop in Matlab, but I cannot figure out how to do it. Here's the situation:
I have two m x n matrices A and B and two vectors v and w of length d. I want to outer multiply A and v so that I get an m x n x d matrix where the (i,j,k) entry is A_(i,j) * v_k, and similarly for B and w.
Afterward, I want to add the resulting m x n x d matrices, and then take the mean along the last dimension to get back an m x n matrix.
I'm pretty sure I could handle the latter part, but the first part has me completely stuck. I tried using bsxfun to no avail. Anyone know an efficient way to do this? Thanks very much!
EDIT: This revision comes after the three great answers below. gnovice has the best answer to the question I asked, without a doubt. However, the question that I meant to ask involves squaring each entry before taking the mean; I forgot to mention this part originally. Given this annoyance, both of the other answers work well, but the clever trick of doing algebra before coding doesn't help this time. Thanks for the help, everyone!
EDIT:
Even though the problem in the question has been updated, an algebraic approach can still be used to simplify matters. You still don't have to bother with 3-D matrices. Your result is just going to be this:
output = mean(v.^2).*A.^2 + 2.*mean(v.*w).*A.*B + mean(w.^2).*B.^2;
If your matrices and vectors are large, this solution will give you much better performance due to the reduced amount of memory required as compared to solutions using BSXFUN or REPMAT.
Explanation:
Assuming M is the m-by-n-by-d matrix that you get as a result before taking the mean along the third dimension, this is what a span along the third dimension will contain:
M(i,j,:) = A(i,j).*v + B(i,j).*w;
In other words, the vector v scaled by A(i,j) plus the vector w scaled by B(i,j). And this is what you get when you apply an element-wise squaring:
M(i,j,:).^2 = (A(i,j).*v + B(i,j).*w).^2;
= (A(i,j).*v).^2 + ...
2.*A(i,j).*B(i,j).*v.*w + ...
(B(i,j).*w).^2;
Now, when you take the mean across the third dimension, the result for each element output(i,j) will be the following:
output(i,j) = mean(M(i,j,:).^2);
= mean((A(i,j).*v).^2 + ...
2.*A(i,j).*B(i,j).*v.*w + ...
(B(i,j).*w).^2);
= sum((A(i,j).*v).^2 + ...
2.*A(i,j).*B(i,j).*v.*w + ...
(B(i,j).*w).^2)/d;
= sum((A(i,j).*v).^2)/d + ...
sum(2.*A(i,j).*B(i,j).*v.*w)/d + ...
sum((B(i,j).*w).^2)/d;
= A(i,j).^2.*mean(v.^2) + ...
2.*A(i,j).*B(i,j).*mean(v.*w) + ...
B(i,j).^2.*mean(w.^2);
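A quick self-contained check of that identity with random data (sizes here are arbitrary):
m = 4; n = 5; d = 7;
A = rand(m, n); B = rand(m, n);
v = rand(d, 1); w = rand(d, 1);
M = bsxfun(@times, A, reshape(v, 1, 1, d)) + bsxfun(@times, B, reshape(w, 1, 1, d));
bruteForce = mean(M.^2, 3);
closedForm = mean(v.^2).*A.^2 + 2.*mean(v.*w).*A.*B + mean(w.^2).*B.^2;
max(abs(bruteForce(:) - closedForm(:)))    % on the order of machine precision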
Try reshaping the vectors v and w to be 1 x 1 x d:
mean(bsxfun(@times, A, reshape(v, 1, 1, [])) ...
    + bsxfun(@times, B, reshape(w, 1, 1, [])), 3)
Here I am using [] in the argument to reshape to tell it to fill that dimension in based on the product of all the other dimensions and the total number of elements in the vector.
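As an aside, on MATLAB R2016b or newer, implicit expansion lets you drop bsxfun entirely; a sketch with the same A, B, v and w:
out = mean(A .* reshape(v, 1, 1, []) + B .* reshape(w, 1, 1, []), 3);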
Use repmat to tile the matrix in the third dimension.
A =
1 2 3
4 5 6
>> repmat(A, [1 1 10])
ans(:,:,1) =
1 2 3
4 5 6
ans(:,:,2) =
1 2 3
4 5 6
etc.
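To connect that back to the question, a sketch of the full repmat route (assuming the same m x n matrices A, B, length-d vectors v, w, and the squared version of the problem):
Av = repmat(A, [1 1 d]) .* repmat(reshape(v, 1, 1, d), [m n 1]);
Bw = repmat(B, [1 1 d]) .* repmat(reshape(w, 1, 1, d), [m n 1]);
out = mean((Av + Bw).^2, 3);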
You still don't have to resort to any explicit loops or indirect looping using bsxfun et al. for your updated requirements. You can achieve what you want by a simple vectorized solution as follows
output = reshape(mean((v(:)*A(:)'+w(:)*B(:)').^2),size(A));
Since OP only says that v and w are vectors of length d, the above solution should work for both row and column vectors. If they are known to be column vectors, v(:) can be replaced by v and likewise for w.
You can check if this matches Lambdageek's answer (modified to square the terms) as follows
outputLG = mean((bsxfun(@times, A, reshape(v, 1, 1, [])) ...
    + bsxfun(@times, B, reshape(w, 1, 1, []))).^2, 3);
isequal(output,outputLG)
ans =
1