How to fill a matrix using an equation in MATLAB?

I have a matrix A of arbitrary dimensions m x n and wish to fill it using an equation. For example, for every element a_ij of A, with i = 1,...,m and j = 1,...,n, I would like
a_ij = i^2 + j^2.
Filled out manually in MATLAB, it would look similar to this:
A = [1^2+1^2, 1^2+2^2, ..., 1^2+j^2, ..., 1^2+n^2;
     2^2+1^2, 2^2+2^2, ..., 2^2+j^2, ..., 2^2+n^2;
     ...
     i^2+1^2, i^2+2^2, ..., i^2+j^2, ..., i^2+n^2;
     ...
     m^2+1^2, m^2+2^2, ..., m^2+j^2, ..., m^2+n^2]
and so the first few terms would be:
[ 2,  5, 10, 17, ...
  5,  8, 13, 20, ...
 10, 13, 18, 25, ...
 17, 20, 25, 32, ...]

A bsxfun-based solution:
A = bsxfun(@plus, (1:m)'.^2, (1:n).^2)
bsxfun expands the singleton dimensions (i.e. dimensions with only one element) of its inputs and applies the element-wise operation specified by the function handle, which is the first input argument of the bsxfun call.
So for our case, given a column vector (m x 1) and a row vector (1 x n), the bsxfun call above expands both vectors into m x n matrices and sums their elements element-wise (because of the function handle @plus), giving the desired 2D output. All of these steps are performed internally by MATLAB.
Note: this should be efficient at runtime, as bsxfun is designed precisely for this kind of expansion problem.

An alternative using ndgrid:
[I, J] = ndgrid(1:m, 1:n);
A = I.^2 + J.^2;
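On MATLAB R2016b or newer, implicit expansion (quoted from the release notes in one of the related answers below) should give the same result without bsxfun or ndgrid; a minimal sketch:
m = 4; n = 4;                   % example sizes
A = (1:m).'.^2 + (1:n).^2;      % column + row expands to an m-by-n matrix
% A(1,:) should be [2 5 10 17], matching the example above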

Related

Vectorized processing of each N-dimension array in (N+1)-dimension array in Matlab? [duplicate]

(The answers shown for this duplicate are identical to those under "How can I apply a function to every row/column of a matrix in MATLAB?" below.)

applying arrayfun on n-dimensional matrices

I need your help in solving the following problem:
how can I generalize the following for any n-dimensional array?
reshape(arrayfun(@(x,y) sprintf('%d,%d', x, y), C{:}, 'un', 0), size(M));
M is my matrix and C contains the index arrays for M.
Thanks in advance.
The problem isn't the number of dimensions in the arguments to arrayfun per se, but the number of arguments themselves - which in your example happens to correspond to the number of dimensions each argument has. You therefore need to pass it a function that accepts varargin, which still works with an anonymous function:
reshape(arrayfun(@(varargin) sprintf(strjoin(repmat({'%d'}, size(varargin)), ','), varargin{:}), C{:}, 'un', 0), size(M));
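A small usage sketch of that pattern (my own example, assuming M is 2-D and C holds the subscript grids produced by ndgrid):
M = magic(3);
C = cell(1, ndims(M));
[C{:}] = ndgrid(1:size(M,1), 1:size(M,2));   % one subscript grid per dimension
labels = reshape(arrayfun(@(varargin) sprintf(strjoin(repmat({'%d'}, size(varargin)), ','), varargin{:}), ...
    C{:}, 'un', 0), size(M));                % e.g. labels{2,3} is '2,3'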
This function gave me a lot of headache.
For functions like
f1 = @(x1,x2) x1*x2
You can do
output = arrayfun(f1,x1,x2);
where x1 and x2 are input columns.
However, if you're writing a generalized program, where f1 could have any number of inputs and the inputs arrive as the columns of a matrix X, you'll need something like
f1 = @(x1,x2,x3,x4,x5) 2*x1 + 4*x2 + 10*x3 + 0.2*x4 + x5;
cols = num2cell(X, 1);            % split X into its columns
output = arrayfun(f1, cols{:});
where X is a matrix whose 5 columns represent x1 through x5. For example:
X = [1, 2, 3, 4, 5;
     6, 7, 8, 9, 0];
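Running that example should produce (arithmetic checked by hand, for reference):
output = arrayfun(f1, cols{:})
% output(1) = 2*1 + 4*2 + 10*3 + 0.2*4 + 5 = 45.8
% output(2) = 2*6 + 4*7 + 10*8 + 0.2*9 + 0 = 121.8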

vectorizing a matlab / octave FOR loop

I'm a little confused as to how to vectorize this for loop; see the code below:
array1 = [xfreq_orig, yamp_orig, yamp_inv, phase_orig]; % frequency, amplitudes, phases to use
t_rebuilt = linspace(0, 2*pi, 44100);
aa_sig_rebuilt_L = zeros(1, length(t_rebuilt));
aa_sig_combined_L = zeros(1, length(t_rebuilt));
sig_full_L = zeros(1, length(t_rebuilt));
for kk = 1:1:numel(xfreq_orig)
    aa_sig_rebuilt_L = array1(kk, 2)*cos(array1(kk, 1)*t_rebuilt + array1(kk, 4));
    aa_sig_combined_L = aa_sig_combined_L + aa_sig_rebuilt_L;
end
sig_full_L = (aa_sig_combined_L/max(abs(aa_sig_combined_L))*.8);
Here is what I came up with as a vectorization.
Example:
array1 = [10, .4, .34, 2.32; 12, .3, .45, .4];
t_rebuilt = linspace(0, 2*pi, 44100);
aa_sig_rebuilt_L = array1(kk, 2).*cos(array1(kk, 1).*t_rebuilt + array1(kk, 4));
aa_sig_combined_L = sum(aa_sig_rebuilt_L);
What I don't know how to do is get the kk variable to access the rows incrementally.
Thanks.
One option is to use bsxfun as follows
a = array1;
t = t_rebuilt;
aa_sig_rebuilt_L = bsxfun(@times, a(:,2), ...
    cos(bsxfun(@plus, bsxfun(@times, a(:,1), t), a(:,4))));
aa_sig_combined_L = sum(aa_sig_rebuilt_L);
Bear in mind that this will use more memory than the version with a loop (it will use numel(xfreq_orig) times as much memory, since it computes every row of aa_sig_rebuilt_L before summing them, whereas the loop computes each row, adds it to the sum and then discards it).
The function bsxfun is used when you need to perform a binary operation between arrays of different sizes, e.g. a TxN matrix and a Tx1 vector. In this case it would perform the operation between each column of the matrix and the vector.
In your case you have a column vector and a row vector, and the operation is applied to the i'th element of the column vector and the j'th element of the row vector, to get the ij'th element of a matrix.
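On R2016b or newer, implicit expansion should let you write the same thing without bsxfun; a sketch under that assumption:
a = array1;
t = t_rebuilt;                                            % 1-by-44100 row vector
aa_sig_rebuilt_L  = a(:,2) .* cos(a(:,1) .* t + a(:,4));  % one row per frequency
aa_sig_combined_L = sum(aa_sig_rebuilt_L, 1);             % sum down the rows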

matlab "arrayfun" function

Consider the following function, which takes a gray image (2D matrix) as input:
function r = fun1(img)
r = sum(sum(img));
I'm thinking of using arrayfun to process a series of images(3d matrix), thus eliminating the need for a for loop:
arrayfun(@fun1, imgStack);
But arrayfun tries to treat every element of imgStack as an input to fun1, the result of the previous operation also being a 3D matrix. How can I let arrayfun know that I want to repeat fun1 only on the 3rd dimension of imgStack?
Another question, does arrayfun invoke fun1 in parallel?
In this case, you don't need arrayfun to perform your calculation, you can simply do this:
imgStack = rand( 10, 10, 4 ); % 4 10x10 images
r = sum( sum( imgStack, 1 ), 2 ); % sum along both dimensions 1 and 2
In general, lots of MATLAB operations will operate on a whole array at once, that's the usual way of avoiding loops.
MATLAB's normal "arrayfun" is not parallel. However, for GPUArrays (with Parallel Computing Toolbox), there is a parallel version of arrayfun.
On your first question: you might try accumarray for this. One suggestion:
function ds = applyfun_onfirstdim(arr, h_fun)
dimvec = size(arr);
% give every element the index of its position along the first dimension
indexarr = repmat((1:dimvec(1))', [1, dimvec(2:end)]);
% group elements by that index and apply h_fun to each group
ds = accumarray(indexarr(:), arr(:), [], h_fun);
This creates an auxiliary index array with the same dimensions as the input arr. Every slice you want to apply h_fun to gets the same index number; in this example the slices run along the first dimension.
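For the image-stack case in the question, a hypothetical usage sketch would permute the stack so the image index comes first (it assumes fun1 is happy to receive the pixels of one image as a single vector):
imgStack = rand(10, 10, 4);                                    % 4 images
perImageSums = applyfun_onfirstdim(permute(imgStack, [3 1 2]), @sum)
% each of the 4 entries is the sum over all pixels of one image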

How can I apply a function to every row/column of a matrix in MATLAB?

You can apply a function to every item in a vector by saying, for example, v + 1, or you can use the function arrayfun. How can I do it for every row/column of a matrix without using a for loop?
Many built-in operations like sum and prod are already able to operate across rows or columns, so you may be able to refactor the function you are applying to take advantage of this.
If that's not a viable option, one way to do it is to collect the rows or columns into cells using mat2cell or num2cell, then use cellfun to operate on the resulting cell array.
As an example, let's say you want to sum the columns of a matrix M. You can do this simply using sum:
M = magic(10);           % a 10-by-10 matrix
columnSums = sum(M, 1);  % a 1-by-10 vector of sums for each column
And here is how you would do this using the more complicated num2cell/cellfun option:
M = magic(10);                 % a 10-by-10 matrix
C = num2cell(M, 1);            % collect the columns into cells
columnSums = cellfun(@sum, C); % a 1-by-10 vector of sums for each cell
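The same pattern should work for rows by collecting along dimension 2 (my own variation on the example above):
M = magic(10);                 % a 10-by-10 matrix
R = num2cell(M, 2);            % collect the rows into cells
rowSums = cellfun(@sum, R);    % a 10-by-1 vector of sums for each row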
You may want the more obscure Matlab function bsxfun. From the Matlab documentation, bsxfun "applies the element-by-element binary operation specified by the function handle fun to arrays A and B, with singleton expansion enabled."
@gnovice stated above that sum and other basic functions already operate on the first non-singleton dimension (i.e., rows if there's more than one row, columns if there's only one row, or higher dimensions if the lower dimensions all have size==1). However, bsxfun works for any function, including (and especially) user-defined functions.
For example, let's say you have a matrix A and a row vector B. E.g., let's say:
A = [1 2 3;
4 5 6;
7 8 9]
B = [0 1 2]
You want a function power_by_col which returns in a matrix C all the elements of A raised to the power given by the corresponding column of B.
From the above example, C is a 3x3 matrix:
C = [1^0 2^1 3^2;
4^0 5^1 6^2;
7^0 8^1 9^2]
i.e.,
C = [1 2 9;
1 5 36;
1 8 81]
You could do this the brute force way using repmat:
C = A.^repmat(B, size(A, 1), 1)
Or you could do this the classy way using bsxfun, which internally takes care of the repmat step:
C = bsxfun(@(x,y) x.^y, A, B)
So bsxfun saves you some steps (you don't need to explicitly calculate the dimensions of A). However, in some informal tests of mine, it turns out that repmat is roughly twice as fast if the function to be applied (like my power function, above) is simple. So you'll need to choose whether you want simplicity or speed.
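On R2016b or newer, implicit expansion (quoted in a later answer) should make even the bsxfun call unnecessary:
C = A.^B;     % A is 3-by-3, B is 1-by-3; B expands down the rows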
I can't comment on how efficient this is, but here's a solution:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :))
applyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1))'
% Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @sum;
applyToRows(myFunc, myMx)
Building on Alex's answer, here is a more generic function:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :));
newApplyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1), 'UniformOutput', false)';
takeAll = @(x) reshape([x{:}], size(x{1},2), size(x,1))';
genericApplyToRows = @(func, matrix) takeAll(newApplyToRows(func, matrix));
Here is a comparison between the two functions:
>> % Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @(x) [mean(x), std(x), sum(x), length(x)];
>> genericApplyToRows(myFunc, myMx)
ans =
2 1 6 3
5 1 15 3
8 1 24 3
>> applyToRows(myFunc, myMx)
??? Error using ==> arrayfun
Non-scalar in Uniform output, at index 1, output 1.
Set 'UniformOutput' to false.
Error in ==> #(func,matrix)arrayfun(applyToGivenRow(func,matrix),1:size(matrix,1))'
For completeness/interest I'd like to add that MATLAB does have a function that lets you operate on data per-row rather than per-element: rowfun (http://www.mathworks.se/help/matlab/ref/rowfun.html). The only "problem" is that it operates on tables (http://www.mathworks.se/help/matlab/ref/table.html) rather than matrices.
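A minimal rowfun sketch, assuming you are willing to wrap the matrix in a table first (each table variable is passed to the function as a separate argument):
M = magic(3);
T = array2table(M);                    % variables M1, M2, M3
S = rowfun(@(a, b, c) a + b + c, T)    % one output row per input row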
Adding to the evolving nature of the answer to this question: starting with R2016b, MATLAB implicitly expands singleton dimensions, removing the need for bsxfun in many cases.
From the R2016b release notes:
Implicit Expansion: Apply element-wise operations and functions to arrays with automatic expansion of dimensions of length 1
Implicit expansion is a generalization of scalar expansion. With scalar expansion, a scalar expands to be the same size as another array to facilitate element-wise operations. With implicit expansion, the element-wise operators and functions listed here can implicitly expand their inputs to be the same size, as long as the arrays have compatible sizes. Two arrays have compatible sizes if, for every dimension, the dimension sizes of the inputs are either the same or one of them is 1. See Compatible Array Sizes for Basic Operations and Array vs. Matrix Operations for more information.
Element-wise arithmetic operators — +, -, .*, .^, ./, .\
Relational operators — <, <=, >, >=, ==, ~=
Logical operators — &, |, xor
Bit-wise functions — bitand, bitor, bitxor
Elementary math functions — max, min, mod, rem, hypot, atan2, atan2d
For example, you can calculate the mean of each column in a matrix A, and then subtract the vector of mean values from each column with A - mean(A).
Previously, this functionality was available via the bsxfun function. It is now recommended that you replace most uses of bsxfun with direct calls to the functions and operators that support implicit expansion. Compared to using bsxfun, implicit expansion offers faster speed, better memory usage, and improved readability of code.
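For example, the mean-subtraction mentioned above becomes a one-liner (a sketch, assuming R2016b or newer):
A = magic(4);
centered = A - mean(A);    % mean(A) is 1-by-4 and expands down the columns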
None of the above answers worked "out of the box" for me; however, the following function, put together from the ideas in the other answers, does work:
apply_func_2_cols = @(f,M) cell2mat(cellfun(f, num2cell(M,1), 'UniformOutput', 0));
It takes a function f and applies it to every column of the matrix M.
So for example:
f = @(v) [0 1;1 0]*v + [0 0.1]';
apply_func_2_cols(f,[0 0 1 1;0 1 0 1])
ans =
0.00000 1.00000 0.00000 1.00000
0.10000 0.10000 1.10000 1.10000
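A row-wise variant should follow the same idea, using num2cell along dimension 2 (my own adaptation, not part of the original answer):
apply_func_2_rows = @(f, M) cell2mat(cellfun(f, num2cell(M, 2), 'UniformOutput', 0));
apply_func_2_rows(@(v) v * 2, [1 2; 3 4])   % each row doubled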
With recent versions of MATLAB, you can use the table data structure to your advantage. There's even a rowfun operation, but I found it easier just to do this:
a = magic(6);
incrementRow = cell2mat(cellfun(@(x) x+1, table2cell(table(a)), 'UniformOutput', 0))
Or here's an older one I had that doesn't require tables, for older MATLAB versions (Binner is a helper function of mine):
dataBinner = cell2mat(arrayfun(@(x) Binner(a(x,:),2)', 1:size(a,1), 'UniformOutput', 0)')
The accepted answer seems to be to convert to cells first and then use cellfun to operate over all of the cells. I do not know the specific application, but in general I would think using bsxfun to operate over the matrix would be more efficient. Basically bsxfun applies an operation element-by-element across two arrays. So if you wanted to multiply each item in an n x 1 vector by each item in an m x 1 vector to get an n x m array, you could use:
vec1 = [ stuff ]; % n x 1 vector
vec2 = [ stuff ]; % m x 1 vector
result = bsxfun(@times, vec1, vec2.');
This will give you a matrix called result wherein the (i, j) entry will be the ith element of vec1 multiplied by the jth element of vec2.
You can use bsxfun for all sorts of built-in functions, and you can declare your own. The documentation has a list of many built-in functions, but basically you can name any function that accepts two arrays (vector or matrix) as arguments and get it to work.
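For instance, a user-defined binary function works just as well (weightedDiff here is a made-up example of mine):
vec1 = (1:4).';                                   % n-by-1
vec2 = (1:3).';                                   % m-by-1
weightedDiff = @(x, y) abs(x - y) ./ (abs(x) + abs(y) + eps);
result = bsxfun(weightedDiff, vec1, vec2.');      % n-by-m pairwise comparison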
I like splitapply, which allows a function to be applied to the columns of A using splitapply(fun,A,1:size(A,2)).
For example:
A = magic(5);
B = splitapply(#(x) x+1, A, 1:size(A,2));
C = splitapply(#std, A, 1:size(A,2));
To apply the function to the rows, you could use
splitapply(fun, A', 1:size(A,1))';
(My source for this solution is here.)
Stumbled upon this question/answer while looking for how to compute the row sums of a matrix.
I would just like to add that MATLAB's sum function has support for summing along a given dimension, i.e., along either dimension of a standard two-dimensional matrix.
So to calculate the column sums do:
colsum = sum(M) % or sum(M, 1)
and for the row sums, simply do
rowsum = sum(M, 2)
My bet is that this is faster than both programming a for loop and converting to cells :)
All this can be found in the MATLAB help for sum.
If you know the length of your rows you can do something like this:
a=rand(9,3);
b=rand(9,3);
arrayfun(@(x1,x2,y1,y2,z1,z2) line([x1,x2],[y1,y2],[z1,z2]), ...
    a(:,1),b(:,1),a(:,2),b(:,2),a(:,3),b(:,3))
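If you also want to keep the handles of the lines that get drawn, the same call can be captured with 'UniformOutput' set to false (a sketch of mine, not part of the original answer):
h = arrayfun(@(x1,x2,y1,y2,z1,z2) line([x1,x2],[y1,y2],[z1,z2]), ...
    a(:,1),b(:,1),a(:,2),b(:,2),a(:,3),b(:,3), 'UniformOutput', false);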