Z-score across a whole matrix in Matlab

How would I calculate the z-score of an entire 3 dimensional matrix in Matlab?
The Matlab command zscore standardises across vectors in just one of the dimensions of multidimensional arrays.
zscore documentation: https://uk.mathworks.com/help/stats/zscore.html

Here I show two equivalent methods:
You can type "edit zscore" to see how the function works, or refer to the documentation linked in your question, which gives the equation for the z-score: z = (x - mean(x)) / std(x).
We can calculate this manually using mean and std (standard deviation).
M = rand( 3, 5 ) * 10
>> M =
9.5929 1.4929 2.5428 9.2926 2.5108
5.4722 2.5751 8.1428 3.4998 6.1604
1.3862 8.4072 2.4352 1.966 4.7329
Z = ( M - mean(M(:)) ) / std(M(:)) % using M(:) to operate on the array as a vector
>> Z =
1.6598 -1.0771 -0.72235 1.5583 -0.73316
0.26743 -0.71145 1.1698 -0.39899 0.5
-1.1131 1.2591 -0.7587 -0.91727 0.017644
The advantage of this method is that you don't need the Statistics Toolbox required by zscore. The minor disadvantage is that you lose zscore's input checking and its protection against a standard deviation of 0.
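If you want that guard in the manual version, a minimal sketch might look like this (the zero-sigma branch is my own choice of behaviour for constant data, so check it against what your version of zscore actually does):
mu = mean(M(:));
sigma = std(M(:));
if sigma == 0
    Z = zeros(size(M)); % constant data: define every z-score as 0 rather than dividing by 0
else
    Z = (M - mu) / sigma;
end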
If you want to use zscore then you can use reshape, after calculating the zscore as if it were a vector:
Z = reshape( zscore(M(:)), size(M) )
>> Z =
1.6598 -1.0771 -0.72235 1.5583 -0.73316
0.26743 -0.71145 1.1698 -0.39899 0.5
-1.1131 1.2591 -0.7587 -0.91727 0.017644
Note that both of these methods should behave the same as the standard zscore(M) for a vector input M.
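A quick sanity check of that claim (requires the Statistics Toolbox for zscore; the values differ on every run because rand is used):
v = rand(10, 1);
max(abs(zscore(v) - (v - mean(v)) / std(v))) % ~0, up to floating-point error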

"out of memory" error for mvregress in matlab

I am trying to use mvregress with data whose dimensionality is a couple of hundred (300~400). With 32 GB of RAM I cannot compute beta, and I get an "out of memory" message. I couldn't find any documented limitation of mvregress that prevents applying it to vectors of this dimensionality. Am I doing something wrong? Is there any way to run multivariate linear regression on my data?
Here is an example of what goes wrong:
dim=400;
nsamp=1000;
dataVariance = .10;
noiseVariance = .05;
mixtureCenters=randn(dim,1);
X=randn(dim, nsamp)*sqrt(dataVariance ) + repmat(mixtureCenters,1,nsamp);
N=randn(dim, nsamp)*sqrt(noiseVariance ) + repmat(mixtureCenters,1,nsamp);
A=2*eye(dim);
Y=A*X+N;
%without residual term:
A_hat=mvregress(X',Y');
%with residual term:
[B, y_hat]=mlrtrain(X,Y)
where
function [B, y_hat]=mlrtrain(X,Y)
[n,d] = size(Y);
Xmat = [ones(n,1) X];
Xmat_sz=size(Xmat);
Xcell = cell(1,n);
for i = 1:n
Xcell{i} = [kron([Xmat(i,:)],eye(d))];
end
[beta,sigma,E,V] = mvregress(Xcell,Y);
B = reshape(beta,d,Xmat_sz(2))';
y_hat=Xmat * B ;
end
the error is:
Error using bsxfun
Out of memory. Type HELP MEMORY for your options.
Error in kron (line 36)
K = reshape(bsxfun(@times,A,B),[ma*mb na*nb]);
Error in mvregress (line 319)
c{j} = kron(eye(NumSeries),Design(j,:));
and this is result of whos command:
whos
Name Size Bytes Class Attributes
A 400x400 1280000 double
N 400x1000 3200000 double
X 400x1000 3200000 double
Y 400x1000 3200000 double
dataVariance 1x1 8 double
dim 1x1 8 double
mixtureCenters 400x1 3200 double
noiseVariance 1x1 8 double
nsamp 1x1 8 double
Okay, I think I have a solution for you, short version first:
dim=400;
nsamp=1000;
dataVariance = .10;
noiseVariance = .05;
mixtureCenters=randn(dim,1);
X=randn(dim, nsamp)*sqrt(dataVariance ) + repmat(mixtureCenters,1,nsamp);
N=randn(dim, nsamp)*sqrt(noiseVariance ) + repmat(mixtureCenters,1,nsamp);
A=2*eye(dim);
Y=A*X+N;
[n,d] = size(Y);
Xmat = [ones(n,1) X];
Xmat_sz=size(Xmat);
Xcell = cell(1,n);
for i = 1:n
Xcell{i} = kron(Xmat(i,:),speye(d));
end
[beta,sigma,E,V] = mvregress(Xcell,Y);
B = reshape(beta,d,Xmat_sz(2))';
y_hat=Xmat * B ;
Strangely, I could not access the function's workspace; it did not appear in the call stack. This is why I inlined the function's code into the script here.
Here's the explanation that might also help you in the future:
Looking at the kron definition, the result of combining an m-by-n and a p-by-q matrix has size m*p by n*q. In your case each cell is the kron of a 1-by-1001 row of Xmat with the 1000-by-1000 identity, i.e. a 1000-by-1001000 matrix with roughly 10^9 elements. You build four hundred of them, and each element takes up 8 bytes in double precision, giving a total of about 3.2 terabytes (or 2.9 tebibytes, if you prefer), well out of reach even with your grand 32 gibibytes.
Seeing that one of your matrices, the identity, contains mostly zeros, and the resulting matrix contains all possible element products, most of those will be zero too. For exactly such cases MATLAB offers the sparse matrix format, which saves a lot of memory (depending on the number of zero elements) by storing only the nonzero ones. You can convert a full matrix to a sparse representation with sparse(X), or you can get a sparse identity matrix directly with speye(n), which is what I did above. The sparse property propagates to the result, which you should now have enough memory for (it works for me with a quarter of your memory available).
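As a small-scale illustration of the savings (the sizes below are made up and much smaller than in the real problem):
v = rand(1, 101); % stands in for one row of Xmat
Kfull = kron(v, eye(100)); % 100-by-10100, stored full: about 8 MB
Ksparse = kron(v, speye(100)); % same values, stored sparse: only the nonzeros
whos Kfull Ksparse % compare the Bytes column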
However, what remains is the problem Matthew Gunn mentioned in a comment. I get an error saying:
Error using mvregress (line 260)
Insufficient data to estimate either full or least-squares models.
Preface
If your regressors are all the same across each regression equation and you're interested in the OLS estimate, you can replace a call to mvregress with a simple call to \.
It appears that in the call to mlrtrain you had a matrix transposition error (since corrected). In the language of mvregress, n is the number of observations and d is the number of outcome variables. You generate a matrix Y that is d by n, but then you should call mlrtrain(X', Y'), not mlrtrain(X, Y).
If what follows isn't specifically what you're looking for, I suggest you precisely define what you're trying to estimate.
What I would have written if I were you
So much that's been said here is completely off base that I'm posting code of what I would have written if I were you. I've reduced the dimensionality to show the equivalence in your special case to simply calling \. I've also written stuff in a more standard way (i.e. having observations run down the rows and not making matrix transposition errors).
dim=5; % These can go way higher but only if you use my code
nsamp=20; % rather than call mvregress
dataVariance = .10;
noiseVariance = .05;
mixtureCenters=randn(dim,1);
X = randn(nsamp, dim)*sqrt(dataVariance ) + repmat(mixtureCenters', nsamp, 1); %'
E = randn(nsamp, dim)*sqrt(noiseVariance); %noise should be mean zero
B = 2*eye(dim);
Y = X*B+E;
% without constant:
B_hat = mvregress(X,Y); %<-------- slow, blows up with high dimension
B_hat2 = X \ Y; %<-------- fast, fine with higher dimensions
norm(B_hat - B_hat2) % basically 0, showing the two estimates are numerically equivalent
% with constant:
B_constant_hat = mlrtrain(X,Y) %<-------- slow, blows up with high dimension
B_constant_hat2 = [ones(nsamp, 1), X] \ Y; % <-- fast, and fine with higher dimensions
norm(B_constant_hat - B_constant_hat2) % basically 0, showing the two estimates are numerically equivalent
Explanation
I'll assume you have:
An nsamp by dim sized data matrix X.
An nsamp by ny sized matrix of outcome variables Y
You want the results from regressing each column of Y on data matrix X. That is, we're doing multivariate regression but there's a common data matrix X.
That is, we're estimating:
y_{ij} = \sum_k b_{kj} x_{ik} + e_{ij} for i = 1...nsamp, j = 1...ny, k = 1...dim
If you're trying to do something different than this, you need to clearly state what you're trying to do!
To regress Y on X you could do:
[beta_mvr, sigma_mvr, resid_mvr] = mvregress(X, Y);
This appears to be horribly slow. The following should match mvregress for the case where you're using the same data matrix for each regression.
beta_hat = X \ Y; % estimate beta using least squares
resid = Y - X * beta_hat; % calculate residual
If you want to construct a new data matrix with a vector of ones, you would do:
X_withones = [ones(nsamp, 1), X];
Further clarification for some that are confused
Let's say we want to run the regression
y_i = \sum_j b_j x_{ij} + e_i for i = 1...n, j = 1...k
We can construct the data matrix n by k datamatrix X and an n by 1 outcome vector y. The OLS estimate is bhat = pinv(X' * X) * X' * y which can also be computed in MATLAB with bhat = X \ y.
If you want to do this multiple times (i.e. run multivariate regression on the same data matrix X), you can construct an outcome matrix Y where EACH column represents a separate outcome variable. Y = [ya, yb, yc, ...]. Trivially, the OLS solution is B = pinv(X'*X)*X'*Y which can be computed as B = X \ Y. The first column of B is the result of regressing Y(:,1) on X. The second column of B is the result of regressing Y(:,2) on X, etc... Under these conditions, this is equivalent to a call to B = mvregress(X, Y)
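To make that column-by-column equivalence concrete, here is a small sketch with random data (the sizes n, k and d are arbitrary, purely illustrative):
n = 50; k = 4; d = 3;
X = randn(n, k);
Y = randn(n, d);
B_joint = X \ Y; % all outcome columns at once
B_loop = zeros(k, d);
for j = 1:d
    B_loop(:, j) = X \ Y(:, j); % regress each outcome column separately
end
norm(B_joint - B_loop) % basically 0: the columns match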
Even more test code
If regressors are the same and estimation is by simple OLS, there is an equivalence between multivariate regression and equation by equation ordinary least squares.
d = 10;
k = 15;
n = 100;
C = RandomCorr(d + k, 1); %Use any method you like to generate a random correlation matrix
s = randn(d+k , 1) * 10;
S = (s * s') .* C; % generate covariance matrix
mu = randn(d+k,1);
data = mvnrnd(ones(n, 1) * mu', S);
Y = data(:,1:d);
X = data(:,d+1:end);
[b1, sigma] = mvregress(X, Y);
b2 = X \ Y;
norm(b1 - b2)
You will notice b1 and b2 are numerically equivalent. They are equivalent even though sigma is EXTREMELY different from zero.

Graphing Polynomials in MATLAB

I need to create a polynomial of the form:
P(x) = q(1,1) + q(2,2)(x-z(1)) + q(3,3)(x-z(1))(x-z(2)) + ... + q(2n,2n)(x-z(1))(x-z(2))...(x-z(2n)) NOTE: The indices of the equation have been shifted to accommodate MATLAB.
in MATLAB. Consult this link here, specifically slides 15 and 16.
I have the matrix Q filled, so I have the diagonal, and I also have z(1:2n) filled.
I'm having a hard time figuring out a way to create a polynomial that I can graph. I've tried using a for loop to append each term to P(x), but it doesn't work the way I thought it would.
So far, my code will calculate the coefficients (presented as Q(0,0) -> Q(2n+1, 2n+1) in the problem above) without a problem.
I'm having an issue with the construction of a degree n polynomial of the form described above. Plotting makes more sense now: create a vector x of evaluation points, run them through the polynomial "function", and plot the x vector against the resulting vector.
So I just need to create this polynomial.
I would use diag and cumprod to help you accomplish this. First use diag to extract the diagonals of your matrix Q. After, use cumprod to generate a vector of cumulative products.
For a vector, cumprod returns at the i'th position the product of elements 1 through i. As an example, if we had a vector V = [1 2 3 4 5], cumprod(V) would produce [1 2 6 24 120]. The 4th element, for example, is 1*2*3*4, the product of the 1st through the 4th elements.
As such, this is the code that I would do:
qdiag = diag(Q);
xMinusZ = x - z; % Takes z and does x - z for every element in z
cumProdRes = cumprod(xMinusZ);
P = sum(qdiag .* [1;cumProdRes(1:end-1)]);
P should give you P(x) that you desired. Make sure that z is a column vector to make it compatible with the diagonals extracted from Q.
NB: I believe there is a typo in your equation. The last term of your equation (going with your convention) should have (x-z(2n-1)) and not (x-z(2n)). This is because the first term in your equation has no (x-z) factor.
Here's an example. Let's suppose Q is defined
Q = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16];
The vector z is:
z = [4;3;2;1];
Let's evaluate the function at x = 2
Extracting the diagonal of Q should give us qdiag = [1;6;11;16]. Subtracting every element of z from x should give us:
xMinusZ = [-2;-1;0;1];
Using the equation that you have above, we have:
P = 1 + 6*(-2) + 11*(-2)*(-1) + 16*(-2)*(-1)*(0) = 11
This is what the code should give.
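A minimal script reproducing that hand calculation with the same Q, z and x:
Q = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16];
z = [4; 3; 2; 1];
x = 2;
qdiag = diag(Q);
xMinusZ = x - z;
cumProdRes = cumprod(xMinusZ);
P = sum(qdiag .* [1; cumProdRes(1:end-1)]) % returns 11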
What if we want to do this for more than one value of x?
As you have stated in your post, you want to evaluate this for a series of x values. As such, you need to modify the code so that it looks like this (make sure that x is a column vector):
qdiag = diag(Q);
xMinusZ = repmat(x,1,length(z)) - repmat(z',length(x),1);
cumProdRes = cumprod(xMinusZ,2);
P = sum(repmat(qdiag',length(x),1).*[ones(length(x),1) cumProdRes(:,1:end-1)],2);
P should now give you a vector of outputs, and so if you want to plot this, simply do plot(x,P);
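Putting it together, a sketch of evaluating and plotting the polynomial over a range of x values (using the example Q and z above; the range of x is an arbitrary choice):
Q = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16];
z = [4; 3; 2; 1];
x = linspace(0, 5, 200)'; % column vector of evaluation points
qdiag = diag(Q);
xMinusZ = repmat(x,1,length(z)) - repmat(z',length(x),1);
cumProdRes = cumprod(xMinusZ,2);
P = sum(repmat(qdiag',length(x),1).*[ones(length(x),1) cumProdRes(:,1:end-1)],2);
plot(x, P);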

MATLAB: Test if anonymous vector is a subset of R^n

I'm trying to use MatLab code as a way to learn math as a programmer.
So I'm reading this post about subspaces and trying to build some simple MATLAB functions that do the test for me.
Here is how far I got:
function performSubspaceTest(subset, numArgs)
% Just a quick and dirty function to perform subspace test on a vector(subset)
%
% INPUT
% subset is the anonymous function that defines the vector
% numArgs is the the number of argument that subset takes
% Author: Lasse Nørfeldt (Norfeldt)
% Date: 2012-05-30
% License: http://creativecommons.org/licenses/by-sa/3.0/
if numArgs == 1
subspaceTest = @(subset) single(rref(subset(rand)+subset(rand))) ...
== single(rref(rand*subset(rand)));
elseif numArgs == 2
subspaceTest = @(subset) single(rref(subset(rand,rand)+subset(rand,rand))) ...
== single(rref(rand*subset(rand,rand)));
end
% rand just gives a random number. Converting to single avoids round off
% errors.
% Know that the code can crash if numArgs isn't given or bigger than 2.
outcome = subspaceTest(subset);
if outcome == true
display(['subset IS a subspace of R^' num2str(size(outcome,2))])
else
display(['subset is NOT a subspace of R^' num2str(size(outcome,2))])
end
And these are the subsets that I'm testing:
%% Checking for subspaces
V = @(x) [x, 3*x]
performSubspaceTest(V, 1)
A = @(x) [x, 3*x+1]
performSubspaceTest(A, 1)
B = @(x) [x, x^2, x^3]
performSubspaceTest(B, 1)
C = @(x1, x3) [x1, 0, x3, -5*x1]
performSubspaceTest(C, 2)
running the code gives me this
V =
@(x)[x,3*x]
subset IS a subspace of R^2
A =
@(x)[x,3*x+1]
subset is NOT a subspace of R^2
B =
@(x)[x,x^2,x^3]
subset is NOT a subspace of R^3
C =
@(x1,x3)[x1,0,x3,-5*x1]
subset is NOT a subspace of R^4
C is not working (it only works if the function accepts one argument).
I know that my solution for numArgs is not optimal, but it was what I could come up with at the moment.
Is there any way to optimize this code so C works properly, and perhaps to avoid the elseif statements for more than 2 args?
PS: I couldn't seem to find a built-in MATLAB function that does the whole thing for me.
Here's one approach. It tests if a given function represents a linear subspace or not. Technically it is only a probabilistic test, but the chance of it failing is vanishingly small.
First, we define a nice abstraction. This higher order function takes a function as its first argument, and applies the function to every row of the matrix x. This allows us to test many arguments to func at the same time.
function y = apply(func,x)
for k = 1:size(x,1)
y(k,:) = func(x(k,:));
end
Now we write the core function. Here func is a function of one argument (presumed to be a vector in R^m) which returns a vector in R^n. We apply func to 100 randomly selected vectors in R^m to get an output matrix. If func represents a linear subspace, then the rank of the output will be less than or equal to m.
function result = isSubspace(func,m)
inputs = rand(100,m);
outputs = apply(func,inputs);
result = rank(outputs) <= m;
Here it is in action. Note that the functions take only a single argument - where you wrote c(x1,x2)=[x1,0,x2] I write c(x) = [x(1),0,x(2)], which is slightly more verbose, but has the advantage that we don't have to mess around with if statements to decide how many arguments our function has - and this works for functions that take input in R^m for any m, not just 1 or 2.
>> v = @(x) [x,3*x]
>> isSubspace(v,1)
ans =
1
>> a = @(x) [x(1),3*x(1)+1]
>> isSubspace(a,1)
ans =
0
>> c = @(x) [x(1),0,x(2),-5*x(1)]
>> isSubspace(c,2)
ans =
1
Saying the solution is not optimal barely scratches the surface of the problem.
I think you're doing too much at once: rref should not be used and is complicating everything, especially for numArgs greater than 1.
Think it through: [1 0 3 -5] and [3 0 3 -15] are both members of C, and their sum [4 0 6 -20] also belongs to C, yet it is not a scalar multiple of either of them (for example, 2*[1 0 3 -5] = [2 0 6 -10] is not the sum). So all the rref in the world can't fix your problem.
So what can you do instead?
you need to check if
(randn*subset(randn,randn) + randn*subset(randn,randn))
is a member of C, which, unless I'm mistaken, is a difficult problem: conceptually you need to iterate through every element of the vector and make sure it matches the predetermined condition. Alternatively, you can try to find a set of inputs such that C(x1,x2) gives you the right answer. In this case, you can use fminsearch to solve this problem numerically and verify the returned value is within a defined tolerance:
[s,error] = fminsearch(@(x) norm(C(x(1),x(2)) - [2 0 6 -10]),[1 1])
s =
1.999996976386119 6.000035034493023
error =
3.827680714104862e-05
Edit: you need to make sure you can use negative numbers in your multiplication, so don't use rand, but use something else. I changed it to randn.
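To wrap that idea up, a rough sketch of a membership check built on fminsearch (the tolerance is an arbitrary choice, and fminsearch can get stuck in local minima, so treat this as a heuristic):
C = @(x1, x3) [x1, 0, x3, -5*x1];
candidate = randn*C(randn, randn) + randn*C(randn, randn); % a random linear combination, so it should lie in C
[s, err] = fminsearch(@(x) norm(C(x(1), x(2)) - candidate), [1 1]);
isMember = err < 1e-4 % small residual: numerically a member of C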

How can I apply a function to every row/column of a matrix in MATLAB?

You can apply a function to every item in a vector by saying, for example, v + 1, or you can use the function arrayfun. How can I do it for every row/column of a matrix without using a for loop?
Many built-in operations like sum and prod are already able to operate across rows or columns, so you may be able to refactor the function you are applying to take advantage of this.
If that's not a viable option, one way to do it is to collect the rows or columns into cells using mat2cell or num2cell, then use cellfun to operate on the resulting cell array.
As an example, let's say you want to sum the columns of a matrix M. You can do this simply using sum:
M = magic(10); %# A 10-by-10 matrix
columnSums = sum(M, 1); %# A 1-by-10 vector of sums for each column
And here is how you would do this using the more complicated num2cell/cellfun option:
M = magic(10); %# A 10-by-10 matrix
C = num2cell(M, 1); %# Collect the columns into cells
columnSums = cellfun(@sum, C); %# A 1-by-10 vector of sums for each cell
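The same pattern works per row; a small sketch (num2cell(M, 2) collects rows instead of columns, and the variable names here are mine):
R = num2cell(M, 2); %# A 10-by-1 cell array, one row per cell
rowSums = cellfun(@sum, R); %# A 10-by-1 vector of sums for each row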
You may want the more obscure Matlab function bsxfun. From the Matlab documentation, bsxfun "applies the element-by-element binary operation specified by the function handle fun to arrays A and B, with singleton expansion enabled."
@gnovice stated above that sum and other basic functions already operate on the first non-singleton dimension (i.e., rows if there's more than one row, columns if there's only one row, or higher dimensions if the lower dimensions all have size==1). However, bsxfun works for any function, including (and especially) user-defined functions.
For example, let's say you have a matrix A and a row vector B. E.g., let's say:
A = [1 2 3;
4 5 6;
7 8 9]
B = [0 1 2]
You want a function power_by_col which returns in a matrix C all the elements in A raised to the power of the corresponding column of B.
From the above example, C is a 3x3 matrix:
C = [1^0 2^1 3^2;
4^0 5^1 6^2;
7^0 8^1 9^2]
i.e.,
C = [1 2 9;
1 5 36;
1 8 81]
You could do this the brute force way using repmat:
C = A.^repmat(B, size(A, 1), 1)
Or you could do this the classy way using bsxfun, which internally takes care of the repmat step:
C = bsxfun(@(x,y) x.^y, A, B)
So bsxfun saves you some steps (you don't need to explicitly calculate the dimensions of A). However, in some informal tests of mine, it turns out that repmat is roughly twice as fast if the function to be applied (like my power function, above) is simple. So you'll need to choose whether you want simplicity or speed.
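If you want to measure this yourself, a rough timing sketch using timeit (available since R2013b; the numbers depend on your machine and MATLAB version, and the matrix sizes below are arbitrary):
A = rand(2000, 2000);
B = rand(1, 2000);
tRepmat = timeit(@() A .^ repmat(B, size(A, 1), 1)); % explicit replication
tBsxfun = timeit(@() bsxfun(@power, A, B)); % singleton expansion
fprintf('repmat: %.4f s, bsxfun: %.4f s\n', tRepmat, tBsxfun);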
I can't comment on how efficient this is, but here's a solution:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :))
applyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1))'
% Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @sum;
applyToRows(myFunc, myMx)
Building on Alex's answer, here is a more generic function:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :));
newApplyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1), 'UniformOutput', false)';
takeAll = @(x) reshape([x{:}], size(x{1},2), size(x,1))';
genericApplyToRows = @(func, matrix) takeAll(newApplyToRows(func, matrix));
Here is a comparison between the two functions:
>> % Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @(x) [mean(x), std(x), sum(x), length(x)];
>> genericApplyToRows(myFunc, myMx)
ans =
2 1 6 3
5 1 15 3
8 1 24 3
>> applyToRows(myFunc, myMx)
??? Error using ==> arrayfun
Non-scalar in Uniform output, at index 1, output 1.
Set 'UniformOutput' to false.
Error in ==> @(func,matrix)arrayfun(applyToGivenRow(func,matrix),1:size(matrix,1))'
For completeness/interest I'd like to add that matlab does have a function that allows you to operate on data per-row rather than per-element. It is called rowfun (http://www.mathworks.se/help/matlab/ref/rowfun.html), but the only "problem" is that it operates on tables (http://www.mathworks.se/help/matlab/ref/table.html) rather than matrices.
Adding to the evolving nature of the answer to this question, starting with r2016b, MATLAB will implicitly expand singleton dimensions, removing the need for bsxfun in many cases.
From the r2016b release notes:
Implicit Expansion: Apply element-wise operations and functions to arrays with automatic expansion of dimensions of length 1
Implicit expansion is a generalization of scalar expansion. With
scalar expansion, a scalar expands to be the same size as another
array to facilitate element-wise operations. With implicit expansion,
the element-wise operators and functions listed here can implicitly
expand their inputs to be the same size, as long as the arrays have
compatible sizes. Two arrays have compatible sizes if, for every
dimension, the dimension sizes of the inputs are either the same or
one of them is 1. See Compatible Array Sizes for Basic Operations and
Array vs. Matrix Operations for more information.
Element-wise arithmetic operators — +, -, .*, .^, ./, .\
Relational operators — <, <=, >, >=, ==, ~=
Logical operators — &, |, xor
Bit-wise functions — bitand, bitor, bitxor
Elementary math functions — max, min, mod, rem, hypot, atan2, atan2d
For example, you can calculate the mean of each column in a matrix A,
and then subtract the vector of mean values from each column with A -
mean(A).
Previously, this functionality was available via the bsxfun function.
It is now recommended that you replace most uses of bsxfun with direct
calls to the functions and operators that support implicit expansion.
Compared to using bsxfun, implicit expansion offers faster speed,
better memory usage, and improved readability of code.
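As a quick check of the equivalence mentioned above (the implicit-expansion form needs R2016b or later):
A = magic(4);
centered1 = A - mean(A); % implicit expansion (R2016b+)
centered2 = bsxfun(@minus, A, mean(A)); % older, equivalent bsxfun call
isequal(centered1, centered2) % returns true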
None of the above answers worked "out of the box" for me, however, the following function, obtained by copying the ideas of the other answers works:
apply_func_2_cols = @(f,M) cell2mat(cellfun(f,num2cell(M,1), 'UniformOutput',0));
It takes a function f and applies it to every column of the matrix M.
So for example:
f = @(v) [0 1;1 0]*v + [0 0.1]';
apply_func_2_cols(f,[0 0 1 1;0 1 0 1])
ans =
0.00000 1.00000 0.00000 1.00000
0.10000 0.10000 1.10000 1.10000
With recent versions of Matlab, you can use the Table data structure to your advantage. There's even a 'rowfun' operation but I found it easier just to do this:
a = magic(6);
incrementRow = cell2mat(cellfun(@(x) x+1,table2cell(table(a)),'UniformOutput',0))
or here's an older one I had that doesn't require tables, for older Matlab versions.
dataBinner = cell2mat(arrayfun(@(x) Binner(a(x,:),2)',1:size(a,1),'UniformOutput',0)')
The accepted answer seems to be to convert to cells first and then use cellfun to operate over all of the cells. I do not know the specific application, but in general I would think using bsxfun to operate over the matrix would be more efficient. Basically bsxfun applies an operation element-by-element across two arrays. So if you wanted to multiply each item in an n x 1 vector by each item in an m x 1 vector to get an n x m array, you could use:
vec1 = [ stuff ]; % n x 1 vector
vec2 = [ stuff ]; % m x 1 vector
result = bsxfun(@times, vec1, vec2.');
This will give you a matrix called result wherein the (i, j) entry will be the ith element of vec1 multiplied by the jth element of vec2.
You can use bsxfun for all sorts of built-in functions, and you can declare your own. The documentation has a list of many built-in functions, but basically you can name any function that accepts two arrays (vector or matrix) as arguments and get it to work.
I like splitapply, which allows a function to be applied to the columns of A using splitapply(fun,A,1:size(A,2)).
For example
A = magic(5);
B = splitapply(@(x) x+1, A, 1:size(A,2));
C = splitapply(@std, A, 1:size(A,2));
To apply the function to the rows, you could use
splitapply(fun, A', 1:size(A,1))';
(My source for this solution is here.)
Stumbled upon this question/answer while seeking how to compute the row sums of a matrix.
I would just like to add that MATLAB's SUM function has built-in support for summing along a given dimension, i.e. along either dimension of a standard two-dimensional matrix.
So to calculate the column sums do:
colsum = sum(M) % or sum(M, 1)
and for the row sums, simply do
rowsum = sum(M, 2)
My bet is that this is faster than both programming a for loop and converting to cells :)
All this can be found in the matlab help for SUM.
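For reference, a tiny sketch of the dimension argument (the 'all' option needs R2018b or newer):
M = magic(4);
colSums = sum(M, 1); % 1-by-4 column sums
rowSums = sum(M, 2); % 4-by-1 row sums
total = sum(M, 'all'); % scalar, R2018b+; same as sum(M(:))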
If you know the length of your rows, you can make something like this:
a=rand(9,3);
b=rand(9,3);
arrayfun(@(x1,x2,y1,y2,z1,z2) line([x1,x2],[y1,y2],[z1,z2]) , a(:,1),b(:,1),a(:,2),b(:,2),a(:,3),b(:,3) )