Particular vector operation without loops - matlab

Given a vector x1, x2, ..., xN, I need to create a vector of (x_i + x_j) for i = 1,...,N, j = i+1,...,N.
E.g., for x1, x2, x3, x4:
x1+x2, x1+x3, x1+x4, x2+x3, x2+x4, x3+x4
How can I do this without loops, to get good performance?

C = combnk(v,k) returns a matrix containing all possible combinations of the elements of vector v taken k at a time.
So if you call
combnk(x,2)
you get
x3 x4
x2 x4
x2 x3
x1 x4
x1 x3
x1 x2
If you rely on the original order, which combnk inverts, apply flipud before summing along the rows:
sum(flipud(combnk(x,2)),2)
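Note that combnk comes from the Statistics Toolbox, and MathWorks recommends nchoosek instead. As an alternative sketch (assuming an example vector x), you can generate the index pairs with nchoosek, which already produces them in the requested order, and index into x:
x = [1; 2; 3; 4];              % example data; any vector works
idx = nchoosek(1:numel(x), 2); % all (i,j) pairs with i < j, in lexicographic order
s = sum(x(idx), 2);            % each row yields x_i + x_j, already in the requested order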

Related

solve system of linear equations in matlab

I'm new to Matlab. Suppose I want to solve a linear system of 2 equations with 5 variables x1, x2, x3, x4, x5. Can Matlab give me solutions for x1 and x2 in terms of x3, x4, and x5? I also want to assign values to one or more variables; say, I want to look at what happens if x3=5, or if x3=3 and x5=1. Is there a way to achieve this?
I looked at the help page https://www.mathworks.com/help/symbolic/solve-a-system-of-linear-equations.html#d120e14359, but it does not cover the non-square matrix case.
You can use multiple calls to solve to get solutions for x1 and x2. In this problem you can solve the first equation for x1, plug that into the second equation to get x2 in terms of x3, x4, and x5, and then substitute the new value of x2 back into your solution for x1.
The subs function is used to substitute the solved values back into the original equation.
syms x1 x2 x3 x4 x5
eq1 = x1 + 4*x2 - 5*x3 + 2*x4 + x5;
eq2 = 3*x1 + 8*x2 - 3*x3 + x4 - x5;
x1s = solve(eq1, x1); % Solve for x1 in terms of x2-x5
x2s = solve(subs(eq2, x1, x1s), x2); % Solve x2 in terms of x3-x5
x1s = solve(subs(eq1, x2, x2s), x1); % Resolve x1 in terms of x3-x5
Output:
x1s =
3*x4 - 7*x3 + 3*x5
x2s =
3*x3 - (5*x4)/4 - x5
You can plug in values for x3, x4, and x5 using subs. For example, for x4=3 and x5=4:
subs(x1s, [x4 x5], [3 4])
ans =
21 - 7*x3
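A possibly simpler route, sketched here with the same equations: solve can handle the whole system in one call when you pass it the subset of variables to solve for, returning the solutions in a struct. Expressions passed to solve are treated as equal to zero:
syms x1 x2 x3 x4 x5
eq1 = x1 + 4*x2 - 5*x3 + 2*x4 + x5;
eq2 = 3*x1 + 8*x2 - 3*x3 + x4 - x5;
S = solve([eq1, eq2], [x1, x2]); % solve the system for x1 and x2 only
S.x1  % 3*x4 - 7*x3 + 3*x5
S.x2  % 3*x3 - (5*x4)/4 - x5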

Matlab get symbols from matrix to assign value

I am creating a matrix of symbols and defining an expression using them:
x = sym('x', [1 2]);
f = x(1) + x(2)
So x and f are:
x = [x1 x2];
f = x1 + x2
Now I want to give values to x1 and x2 in a for loop and evaluate f. But when I use:
x(1) = value;
then x becomes:
x = [1 x2]
and x1 is lost, so I cannot evaluate f. How can I assign a value to x1, x2, ..., xn and then evaluate f?
You should use subs, like the following:
subs(f, x(1), value)
instead of overwriting the symbolic element x(1) with a numeric value.
See the documentation of the subs function for details.
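A minimal sketch of the loop the asker describes, with some assumed example values: subs returns a new expression, so the symbolic vector x is never overwritten.
x = sym('x', [1 2]);
f = x(1) + x(2);
vals = [1 2; 3 4; 5 6];          % hypothetical values, one row per evaluation
for k = 1:size(vals, 1)
    fk = subs(f, x, vals(k, :)); % substitute both symbols at once
    disp(fk)                     % prints 3, 7, 11
end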

Creating multiple variables for syms

I want to create many variables such as x1, x2, x3 to use with syms, so it will look something like this:
syms x1 x2 x3 x4 ... x50 x51....xn
n is the number of variables I need.
Is there any way to do that?
x = sym('x', [n 1]);
This will create n symbolic variables, i.e. x1, x2, x3, ..., xn, and you can access them as x(1), x(2), x(3), ..., x(n) respectively.
For example with n=4, you'll get these results:
>> x
x =
x1
x2
x3
x4
>> x(1)
ans =
x1
>> x(3)
ans =
x3
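To show how the vector is typically used afterwards, here is a sketch with assumed example values: build an expression from the whole vector and evaluate it with subs.
n = 4;
x = sym('x', [n 1]);   % x1 ... x4
f = sum(x.^2);         % x1^2 + x2^2 + x3^2 + x4^2
subs(f, x, (1:n)')     % evaluate at x = [1;2;3;4], giving 30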

Correlation dependent on samples

I have a variable y that depends on some variables x1 ∈ [x1_min,x1_max], x2 ∈ [x2_min,x2_max], x3 ∈ [x3_min,x3_max] and y can be a matrix as well, i.e. y=y(x1,x2,x3). I want to detect which among x1,x2,x3 is less relevant to determine the value of y.
I am using the following code in Matlab:
x = rand(1000,3); % x1, x2, x3 are the columns of x
y = fct(x); % A generic function of x1, x2, x3
[corr_mat, p_val] = corrcoef(x,y);
[i,j] = find(p_val > 0.5);
disp([i,j])
The problem is that the resulting indices strongly depend on the random samples (even if I increase the number of samples). How can I get a more precise measure?
As a simple alternative example, take y=x1+x2+x3, with x1∈[50,80], x2∈[0,1], x3∈[0,1]. Clearly, the value of y depends much more on x1 than on the other two variables. How do I quantify this dependence?
Thank you in advance.
EDIT: Here is what I mean by "quantification" or "relevance": I want to detect which variables produce only very small changes in y; in the previous example, x2 and x3 make y vary less than x1 does.
You need to use covariance, not the correlation coefficient. The correlation coefficient is normalized by the standard deviation of each variable, so that all variables get the same weight even when their ranges differ, and that is exactly what you want to avoid here.
x1 = 50+30*rand(1000,1);
x2 = rand(1000,1);
x3 = rand(1000,1);
y = x1+x2+x3;
c = cov([x1 x2 x3 y]);
c(1:3,4) % Covariances of x[1-3] and y
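Since the inputs in this toy example are independent, cov(x1,y) is approximately var(x1) = 30^2/12 = 75, while cov(x2,y) and cov(x3,y) are approximately 1/12 ≈ 0.083, so x1 dominates by orders of magnitude. A small follow-up sketch, reusing c from above, that turns the covariance column into a ranking:
[~, order] = sort(abs(c(1:3, 4)), 'descend'); % rank the inputs by |cov| with y
fprintf('least relevant variable: x%d\n', order(end))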

Transpose 1D to 2D by Group in Matlab

I need to reshape a vector into a 2D matrix, grouping elements by equal values in another column of the matrix. For example:
1 x1
1 x2
1 x3
1 x4
2 x5
2 x6
2 x7
2 x8
Should look like:
x1 x2 x3 x4;
x5 x6 x7 x8;
This is the same procedure you would do in SAS using proc tabulate. Reshape didn't work for me because it doesn't transpose the result, and I tried permute with no luck either. Is there a built-in command that does this, besides having to program it using find, transpose, and vertcat?
If for some reason you want to avoid reshape (although the reshape solution from the comments will work), you can use sub2ind to compute the linear indices into the new matrix V, given that your first column always provides the row subscript:
X = [[1,1,1,1,2,2,2,2]' (1:8)'];
subs = X(:,1);                   % row subscript of each element (note: this shadows the subs function)
M = numel(unique(subs));         % number of groups = rows of the result
N = size(X,1)/M;                 % problem assumption: M groups of equal size N (M*N = size(X,1))
V = zeros(M, N);
i = sub2ind([M, N], subs, repmat(1:N, 1, M)'); % linear index of each element's target cell
V(i) = X(:,2);
The above, according to your specs, will work as long as every group in X has the same number of elements, so that the data exactly fills an MxN matrix.
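For completeness, here is the reshape route mentioned above, a sketch under the same equal-group-size assumption: each group's values are stored contiguously, so reshaping the value column to N-by-M and transposing yields the desired M-by-N result.
X = [[1,1,1,1,2,2,2,2]' (1:8)'];
M = numel(unique(X(:,1)));   % number of groups
N = size(X, 1) / M;          % elements per group (assumed equal)
V = reshape(X(:,2), N, M).'; % rows of V are the groups: [1 2 3 4; 5 6 7 8]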