Matlab's bsxfun() code

What does this do?
u = [5 6];
s = [1 1];
data1 = [randn(10,1) -1*ones(10,1)];
data2 = [randn(10,1) ones(10,1)];
data = [data1; data2];
deviance = bsxfun(@minus,data,u);
deviance = bsxfun(@rdivide,deviance,s);
deviance = deviance .^ 2;
deviance = bsxfun(@plus,deviance,2*log(abs(s)));
[dummy,mini] = min(deviance,[],2);
Is there an equivalent way of doing this without bsxfun?

The function BSXFUN will perform the requested element-wise operation (function handle argument) by replicating dimensions of the two input arguments so that they match each other in size. You can avoid the use of BSXFUN in this case by replicating the variables u and s yourself using the function REPMAT to make them each the same size as data. Then you can use the standard element-wise arithmetic operators:
u = repmat(u,size(data,1),1); %# Replicate u so it becomes a 20-by-2 array
s = repmat(s,size(data,1),1); %# Replicate s so it becomes a 20-by-2 array
deviance = ((data-u)./s).^2 + 2.*log(abs(s)); %# Shortened to one line
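On MATLAB R2016b or newer, implicit expansion makes the REPMAT step unnecessary as well; with the original 1-by-2 u and s you can write directly (a minimal sketch):
deviance = ((data-u)./s).^2 + 2*log(abs(s)); %# data (20-by-2) expands against u and s (1-by-2) automatically
[dummy,mini] = min(deviance,[],2); %# unchanged: index of the smallest value in each row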

bsxfun performs binary operations element-wise. It's useful when you need to subtract a vector (in this case u) from every row (or column) of a matrix (in this case data). The two inputs must have matching sizes along the dimensions that are not being expanded. For your example, you can write the code without bsxfun as
u1 = repmat(u,size(data,1),1); %# replicate u along the first dimension so it matches data (20-by-2)
deviance = data - u1;
and so on for the other operations.

Related

Vectorized processing of each N-dimension array in (N+1)-dimension array in Matlab? [duplicate]

How to crop matrix of any number of dimensions in Matlab?

Suppose I have 4D matrix:
>> A=1:(3*4*5*6);
>> A=reshape(A,3,4,5,6);
And now I want to cut given number of rows and columns (or any given chunks at known dimensions).
If I would know it's 4D I would write:
>> A1=A(1:2,1:3,:,:);
But how to write universally for any given number of dimensions?
The following gives something different:
>> A2=A(1:2,1:3,:);
And the following gives an error:
>> A2=A;
>> A2(3:3,4:4)=[];
It is possible to write code that handles a general number of dimensions of A, using the second form of indexing you used together with the reshape function.
Here is an example:
Asize = [3,4,2,6,4]; %Initialization of A, not seen by the rest of the code
A = rand(Asize);
%% This part of the code can operate for any matrix A
I = 1:2;
J = 3:4;
A1 = A(I,J,:);
NewSize = size(A);
NewSize(1) = length(I);
NewSize(2) = length(J);
A2 = reshape(A1,NewSize);
A2 will be your cropped matrix. It works for any Asize you choose.
I recommend the solution Luis Mendo suggested for the general case, but there is also a very simple solution when you know an upper limit for your dimensions. Let's assume you have at most 6 dimensions. Use 6-dimensional indexing for all matrices:
A1=A(1:2,1:3,:,:,:,:);
Matlab will implicitly assume singleton dimensions for all remaining dimensions, returning the intended result also for arrays with fewer dimensions.
It sounds like you just want to use ndims.
num_dimensions = ndims(A)
if (num_dimensions == 3)
A1 = A(1:2, 1:3, :);
elseif (num_dimensions == 4)
A1 = A(1:2, 1:3, :, :);
end
If the range of possible matrix dimensions is small, this kind of if-else block keeps it simple. It seems like you want some way to create an indexing tuple (e.g. (1:2,:,:,:)) on the fly, which I don't know of a way to do. You must match the correct number of dimensions with your indexing: if you index with fewer subscripts than the matrix has dimensions, Matlab collapses the trailing unindexed dimensions into the last subscript you supplied (similar to what you get with A1 = A(:);).
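For a crop that works for any number of dimensions without hard-coding them, one option (a sketch using a cell array of subscripts, not taken from the answers above) is:
subs = repmat({':'}, 1, ndims(A)); %# one ':' subscript per dimension of A
subs{1} = 1:2; %# restrict the first dimension
subs{2} = 1:3; %# restrict the second dimension
A1 = A(subs{:}); %# the cell expands into a comma-separated list of subscripts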

element by element matrix multiplication in Matlab

So I have the following matrices:
A = [1 2 3; 4 5 6];
B = [0.5 2 3];
I'm writing a function in MATLAB that will allow me to multiply a vector and a matrix by element as long as the number of elements in the vector matches the number of columns. In A there are 3 columns:
1 2 3
4 5 6
B also has 3 elements so this should work. I'm trying to produce the following output based on A and B:
0.5 4 9
2 10 18
My code is below. Does anyone know what I'm doing wrong?
function C = lab11(mat, vec)
C = zeros(2,3);
[a, b] = size(mat);
[c, d] = size(vec);
for i = 1:a
for k = 1:b
for j = 1
C(i,k) = C(i,k) + A(i,j) * B(j,k);
end
end
end
end
MATLAB already has functionality to do this in the bsxfun function. bsxfun will take two matrices and duplicate singleton dimensions until the matrices are the same size, then perform a binary operation on the two matrices. So, for your example, you would simply do the following:
C = bsxfun(@times,mat,vec);
Referencing MrAzzaman, bsxfun is the way to go with this. However, judging from your function name, this looks like it's homework, so let's stick with what you have originally. As such, you only need to write two for loops. You would use the inner for loop to index into both the vector and the columns of the matrix at the same time, while the outermost for loop accesses the rows of the matrix. In addition, you are referencing A and B, which are variables that don't exist in your code. You are also initializing the output matrix C to always be 2 x 3; you want it to be the same size as mat. I also removed your checking of the length of the vector because you weren't doing anything with the result.
As such:
function C = lab11(mat, vec)
[a, b] = size(mat);
C = zeros(a,b);
for i = 1:a
for k = 1:b
C(i,k) = mat(i,k) * vec(k);
end
end
end
Take special note of what I did. The outermost for loop accesses the rows of mat, while the innermost loop accesses the columns of mat as well as the elements of vec. Bear in mind that the number of columns of mat needs to be the same as the number of elements in vec. You should probably check for this in your code.
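A minimal sketch of such a check (the error message is just illustrative):
assert(size(mat,2) == numel(vec), 'Number of columns of mat must match the number of elements in vec');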
If you don't like using the bsxfun approach, one alternative is to take the vector vec and make a matrix out of it that is the same size as mat by stacking the vector vec on top of itself for as many times as we have rows in mat. After this, you can do element-by-element multiplication. You can do this stacking by using repmat, which repeats a vector or matrix a given number of times along any dimension(s) you want. As such, your function would be simplified to:
function C = lab11(mat, vec)
rows = size(mat, 1);
vec_mat = repmat(vec, rows, 1);
C = mat .* vec_mat;
end
However, I would personally go with the bsxfun route. bsxfun basically does what the repmat paradigm does under the hood. Internally, it ensures that both of your inputs have the same size. If it doesn't, it replicates the smaller array / matrix until it is the same size as the larger array / matrix, then applies an element-by-element operation to the corresponding elements in both variables. bsxfun stands for Binary Singleton EXpansion FUNction, which is a fancy way of saying exactly what I just talked about.
Therefore, your function is further simplified to:
function C = lab11(mat, vec)
C = bsxfun(@times, mat, vec);
end
Good luck!
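As a final note, on MATLAB R2016b or newer, implicit expansion (covered in another answer on this page) lets you drop bsxfun entirely; a minimal sketch:
function C = lab11(mat, vec)
C = mat .* vec; %# vec (1-by-n) expands automatically against mat (m-by-n)
end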

Vectorize call to function of two vectors (treat matrix as array of vector)

I wish to compute the cumulative cosine distance between sets of vectors.
The natural representation of a set of vectors is a matrix...but how do I vectorize the following?
function d = cosdist(P1,P2)
ds = zeros(size(P1,2),1);
for k=1:size(P1,2)
%#used transpose() to avoid SO formatting on '
ds(k)=transpose(P1(:,k))*P2(:,k)/(norm(P1(:,k))*norm(P2(:,k)));
end
d = prod(ds);
end
I can of course write
fz = @(v1,v2) transpose(v1)*v2/(norm(v1)*norm(v2));
ds = cellfun(fz,P1,P2);
...so long as I recast my matrices as cell arrays of vectors. Is there a better / entirely numeric way?
Also, will cellfun, arrayfun, etc. take advantage of vector instructions and/or multithreading?
Note (probably superfluous in present company): for column vectors, v1'*v2 == dot(v1,v2), and the former is significantly faster in Matlab.
Since P1 and P2 are of the same size, you can do element-wise operations here. v1'*v2 equals sum(v1.*v2), by the way.
d = prod(sum(P1.*P2,1)./sqrt(sum(P1.^2,1) .* sum(P2.^2,1)));
@Jonas had the right idea, but the normalizing denominator might be incorrect. Try this instead:
%# matrix of column vectors
P1 = rand(5,8);
P2 = rand(5,8);
d = prod( sum(P1.*P2,1) ./ sqrt(sum(P1.^2,1).*sum(P2.^2,1)) );
You can compare this against the results returned by PDIST2 function:
%# PDIST2 returns one minus cosine distance between all pairs of vectors
d2 = prod( 1-diag(pdist2(P1',P2','cosine')) );
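A quick sanity check (just a sketch) that the vectorized expression matches the original loop-based cosdist:
P1 = rand(5,8); P2 = rand(5,8);
dVec = prod( sum(P1.*P2,1) ./ sqrt(sum(P1.^2,1).*sum(P2.^2,1)) );
ds = zeros(size(P1,2),1);
for k = 1:size(P1,2)
ds(k) = P1(:,k)'*P2(:,k) / (norm(P1(:,k))*norm(P2(:,k)));
end
dLoop = prod(ds);
abs(dVec - dLoop) %# should be zero up to floating-point error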

How can I apply a function to every row/column of a matrix in MATLAB?

You can apply a function to every item in a vector by saying, for example, v + 1, or you can use the function arrayfun. How can I do it for every row/column of a matrix without using a for loop?
Many built-in operations like sum and prod are already able to operate across rows or columns, so you may be able to refactor the function you are applying to take advantage of this.
If that's not a viable option, one way to do it is to collect the rows or columns into cells using mat2cell or num2cell, then use cellfun to operate on the resulting cell array.
As an example, let's say you want to sum the columns of a matrix M. You can do this simply using sum:
M = magic(10); %# A 10-by-10 matrix
columnSums = sum(M, 1); %# A 1-by-10 vector of sums for each column
And here is how you would do this using the more complicated num2cell/cellfun option:
M = magic(10); %# A 10-by-10 matrix
C = num2cell(M, 1); %# Collect the columns into cells
columnSums = cellfun(@sum, C); %# A 1-by-10 vector of sums for each cell
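The same pattern works per row; a small sketch (the dimension argument 2 collects rows instead of columns):
R = num2cell(M, 2); %# Collect the rows into cells (each cell holds a 1-by-10 row)
rowSums = cellfun(@sum, R); %# A 10-by-1 vector of sums for each row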
You may want the more obscure Matlab function bsxfun. From the Matlab documentation, bsxfun "applies the element-by-element binary operation specified by the function handle fun to arrays A and B, with singleton expansion enabled."
@gnovice stated above that sum and other basic functions already operate on the first non-singleton dimension (i.e., rows if there's more than one row, columns if there's only one row, or higher dimensions if the lower dimensions all have size==1). However, bsxfun works for any function, including (and especially) user-defined functions.
For example, let's say you have a matrix A and a row vector B. E.g., let's say:
A = [1 2 3;
4 5 6;
7 8 9]
B = [0 1 2]
You want a function power_by_col which returns in a vector C all the elements in A to the power of the corresponding column of B.
From the above example, C is a 3x3 matrix:
C = [1^0 2^1 3^2;
4^0 5^1 6^2;
7^0 8^1 9^2]
i.e.,
C = [1 2 9;
1 5 36;
1 8 81]
You could do this the brute force way using repmat:
C = A.^repmat(B, size(A, 1), 1)
Or you could do this the classy way using bsxfun, which internally takes care of the repmat step:
C = bsxfun(@(x,y) x.^y, A, B)
So bsxfun saves you some steps (you don't need to explicitly calculate the dimensions of A). However, in some informal tests of mine, it turns out that repmat is roughly twice as fast if the function to be applied (like my power function, above) is simple. So you'll need to choose whether you want simplicity or speed.
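A rough way to reproduce that kind of informal timing yourself (results will vary by MATLAB version and hardware; this is only a sketch):
A = rand(1000); B = rand(1, 1000);
tic; for k = 1:100, C1 = A.^repmat(B, size(A,1), 1); end; trepmat = toc
tic; for k = 1:100, C2 = bsxfun(@(x,y) x.^y, A, B); end; tbsxfun = toc
max(abs(C1(:) - C2(:))) %# the two approaches agree up to floating-point error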
I can't comment on how efficient this is, but here's a solution:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :))
applyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1))'
% Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @sum;
applyToRows(myFunc, myMx)
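For reference, with this example the call returns the row sums of myMx as a column vector:
%# ans = [6; 15; 24]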
Building on Alex's answer, here is a more generic function:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :));
newApplyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1), 'UniformOutput', false)';
takeAll = @(x) reshape([x{:}], size(x{1},2), size(x,1))';
genericApplyToRows = @(func, matrix) takeAll(newApplyToRows(func, matrix));
Here is a comparison between the two functions:
>> % Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @(x) [mean(x), std(x), sum(x), length(x)];
>> genericApplyToRows(myFunc, myMx)
ans =
2 1 6 3
5 1 15 3
8 1 24 3
>> applyToRows(myFunc, myMx)
??? Error using ==> arrayfun
Non-scalar in Uniform output, at index 1, output 1.
Set 'UniformOutput' to false.
Error in ==> @(func,matrix)arrayfun(applyToGivenRow(func,matrix),1:size(matrix,1))'
For completeness/interest I'd like to add that matlab does have a function that allows you to operate on data per-row rather than per-element. It is called rowfun (http://www.mathworks.se/help/matlab/ref/rowfun.html), but the only "problem" is that it operates on tables (http://www.mathworks.se/help/matlab/ref/table.html) rather than matrices.
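For illustration, a small hypothetical sketch of rowfun (the table contents and the anonymous function here are made-up examples, not from the original answer):
T = table([1;4;7], [2;5;8], [3;6;9]); %# a table with three numeric variables
B = rowfun(@(a,b,c) a+b+c, T) %# by default each table variable is passed as a separate argument per row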
Adding to the evolving nature of the answer to this question, starting with r2016b, MATLAB will implicitly expand singleton dimensions, removing the need for bsxfun in many cases.
From the r2016b release notes:
Implicit Expansion: Apply element-wise operations and functions to arrays with automatic expansion of dimensions of length 1
Implicit expansion is a generalization of scalar expansion. With scalar expansion, a scalar expands to be the same size as another array to facilitate element-wise operations. With implicit expansion, the element-wise operators and functions listed here can implicitly expand their inputs to be the same size, as long as the arrays have compatible sizes. Two arrays have compatible sizes if, for every dimension, the dimension sizes of the inputs are either the same or one of them is 1. See Compatible Array Sizes for Basic Operations and Array vs. Matrix Operations for more information.
Element-wise arithmetic operators — +, -, .*, .^, ./, .\
Relational operators — <, <=, >, >=, ==, ~=
Logical operators — &, |, xor
Bit-wise functions — bitand, bitor, bitxor
Elementary math functions — max, min, mod, rem, hypot, atan2, atan2d
For example, you can calculate the mean of each column in a matrix A, and then subtract the vector of mean values from each column with A - mean(A).
Previously, this functionality was available via the bsxfun function. It is now recommended that you replace most uses of bsxfun with direct calls to the functions and operators that support implicit expansion. Compared to using bsxfun, implicit expansion offers faster speed, better memory usage, and improved readability of code.
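A minimal sketch of the column-centering example from the release notes (requires R2016b or newer):
A = magic(4);
centered = A - mean(A); %# mean(A) is 1-by-4 and expands down the rows of A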
None of the above answers worked "out of the box" for me; however, the following function, put together from the ideas in the other answers, works:
apply_func_2_cols = @(f,M) cell2mat(cellfun(f,num2cell(M,1), 'UniformOutput',0));
It takes a function f and applies it to every column of the matrix M.
So for example:
f = @(v) [0 1;1 0]*v + [0 0.1]';
apply_func_2_cols(f,[0 0 1 1;0 1 0 1])
ans =
0.00000 1.00000 0.00000 1.00000
0.10000 0.10000 1.10000 1.10000
With recent versions of Matlab, you can use the Table data structure to your advantage. There's even a 'rowfun' operation but I found it easier just to do this:
a = magic(6);
incrementRow = cell2mat(cellfun(@(x) x+1,table2cell(table(a)),'UniformOutput',0))
or here's an older one I had that doesn't require tables, for older Matlab versions.
dataBinner = cell2mat(arrayfun(@(x) Binner(a(x,:),2)',1:size(a,1),'UniformOutput',0)')
The accepted answer seems to be to convert to cells first and then use cellfun to operate over all of the cells. I do not know the specific application, but in general I would think using bsxfun to operate over the matrix would be more efficient. Basically bsxfun applies an operation element-by-element across two arrays. So if you wanted to multiply each item in an n x 1 vector by each item in an m x 1 vector to get an n x m array, you could use:
vec1 = [ stuff ]; % n x 1 vector
vec2 = [ stuff ]; % m x 1 vector
result = bsxfun(@times, vec1, vec2.');
This will give you matrix called result wherein the (i, j) entry will be the ith element of vec1 multiplied by the jth element of vec2.
You can use bsxfun for all sorts of built-in functions, and you can declare your own. The documentation has a list of many built-in functions, but basically you can name any function that accepts two arrays (vector or matrix) as arguments and get it to work.
I like splitapply, which allows a function to be applied to the columns of A using splitapply(fun,A,1:size(A,2)).
For example
A = magic(5);
B = splitapply(#(x) x+1, A, 1:size(A,2));
C = splitapply(#std, A, 1:size(A,2));
To apply the function to the rows, you could use
splitapply(fun, A', 1:size(A,1))';
(My source for this solution is here.)
Stumbled upon this question/answer while seeking how to compute the row sums of a matrix.
I would just like to add that Matlab's SUM function actually has support for summing along a given dimension, e.g. either dimension of a standard two-dimensional matrix.
So to calculate the column sums do:
colsum = sum(M) % or sum(M, 1)
and for the row sums, simply do
rowsum = sum(M, 2)
My bet is that this is faster than both programming a for loop and converting to cells :)
All this can be found in the matlab help for SUM.
If you know the length of your rows, you can do something like this:
a=rand(9,3);
b=rand(9,3);
arrayfun(@(x1,x2,y1,y2,z1,z2) line([x1,x2],[y1,y2],[z1,z2]) , a(:,1),b(:,1),a(:,2),b(:,2),a(:,3),b(:,3) )