Need help in using bsxfun - matlab

I have two arrays in MATLAB:
A; % size(A) = [NX NY NZ 3 3]
b; % size(b) = [NX NY NZ 3 1]
In other words, at each point (i, j, k) of the three-dimensional domain I have a [3 3] matrix and a [3 1] vector, taken from the above-mentioned arrays A and b, respectively. For the sake of example, call these arrays m and n.
m; % size(m) = [3 3]
n; % size(n) = [3 1]
How can I solve m\n for each point of the domain in a vectorized fashion? I tried bsxfun, but I was not successful:
solution = bsxfun( @(A,b) A\b, A, b );
I think the problem is with the expansion of the singleton elements, and I don't know how to fix it.
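One note worth adding here: bsxfun only performs element-wise binary operations, so it cannot apply a per-slice solve like A\b. If you are on R2022a or newer, the page-wise functions solve this directly; a minimal sketch, assuming the array layout from the question (the small NX, NY, NZ values below are placeholders purely for illustration):

```matlab
% Sketch using pagemldivide (requires R2022a or newer).
NX = 2; NY = 3; NZ = 4;               % illustrative sizes
A = rand(NX, NY, NZ, 3, 3);
b = rand(NX, NY, NZ, 3, 1);

% Move the 3x3 and 3x1 blocks to the leading dimensions so each
% "page" is one small linear system.
Ap = permute(A, [4 5 1 2 3]);         % 3 x 3 x NX x NY x NZ
bp = permute(b, [4 5 1 2 3]);         % 3 x 1 x NX x NY x NZ
xp = pagemldivide(Ap, bp);            % backslash applied page-by-page
solution = permute(xp, [3 4 5 1 2]);  % back to NX x NY x NZ x 3 x 1
```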

I tried several solutions; it seems that a for loop is actually the fastest possibility in this case.
A naive approach looks like this:
% iterate
C = zeros(size(B));
for a = 1:size(A,1)
    for b = 1:size(A,2)
        for c = 1:size(A,3)
            C(a,b,c,:) = squeeze(A(a,b,c,:,:)) \ squeeze(B(a,b,c,:));
        end
    end
end
The squeeze calls are expensive in computation time because they require advanced indexing. Swapping the dimensions instead is faster.
A = permute(A,[4,5,1,2,3]);
B = permute(B,[4,1,2,3]);
C2 = zeros(size(B));
for a = 1:size(A,3)
    for b = 1:size(A,4)
        for c = 1:size(A,5)
            C2(:,a,b,c) = A(:,:,a,b,c) \ B(:,a,b,c);
        end
    end
end
C2 = permute(C2,[2,3,4,1]);
The second solution is about 5 times faster.
Update: I found an improved version. Reshaping and using only one large loop increases the speed again. This version is also suitable for the Parallel Computing Toolbox; in case you own it, replace the for with a parfor and start the workers.
A = permute(A,[4,5,1,2,3]);
B = permute(B,[4,1,2,3]);
% linearize A and B to get better performance
linA = reshape(A, [size(A,1), size(A,2), size(A,3)*size(A,4)*size(A,5)]);
linB = reshape(B, [size(B,1), size(B,2)*size(B,3)*size(B,4)]);
C3 = zeros(size(linB));
for a = 1:size(linA,3)
    C3(:,a) = linA(:,:,a) \ linB(:,a);
end
% undo linearization
C3 = reshape(C3, size(B));
% undo dimension swap
C3 = permute(C3, [2,3,4,1]);

Related

Contracting tensor in Matlab

I am looking for a way to contract two indices of a tensor in Matlab.
Say I have a tensor of dimension [17,10,17,12] I am looking for a function that sums over the first and third dimension with the same index and leaves a matrix of dimension [10,12] (analogous to a trace in two dimensions).
I am currently studying tensor networks and I mainly use the functions "permute" and "reshape". If one is contracting multiple tensors and is not careful from the beginning, one might end up with indices one wants to contract in one tensor of the form [i,j,i,k].
Of course one can go back and contract the tensors in a way such that this does not happen, but I'd nonetheless be interested in a more robust solution.
EDIT:
Something to the effect of:
A = rand(17,10,17,12);
A_contracted = zeros(10,12);
for i = 1:10
    for j = 1:12
        for k = 1:17
            A_contracted(i,j) = A_contracted(i,j) + A(k,i,k,j);
        end
    end
end
Here's a way to do it:
A_contracted = permute(sum( ...
    A.*((1:size(A,1)).'==reshape(1:size(A,3), 1, 1, [])), [1 3]), [2 4 1 3]);
The above uses implicit expansion and the ability of sum to operate along multiple dimensions at once, both of which are recent MATLAB features. For older MATLAB versions:
A_contracted = permute(sum(sum( ...
    A.*bsxfun(@eq, (1:size(A,1)).', reshape(1:size(A,3), 1, 1, [])),1),3), [2 4 1 3]);
[I feel like I'm starting to sound like a broken record...]
You should always implement your code as a loop first, then try to optimize using permute and reshape. But note that permute needs to copy data, so tends to increase the amount of work, rather than decrease it. Recent versions of MATLAB are no longer slow with loops, and thus copying data is no longer always a useful hack to speed up things.
For example, the loop in the question can be simplified to:
A_contracted = zeros(size(A,2),size(A,4));
for k = 1:size(A,1)
    A_contracted = A_contracted + squeeze(A(k,:,k,:));
end
(I've also generalized to arbitrary sizes).
Comparing with Luis' answer, I see the vectorized method winning for small arrays such as the one in the OP (17x10x17x12) with 0.09 ms vs 0.19 ms. But with very small times all around it is likely not worth the effort. However, for larger arrays (I tried 17x100x17x120) I see the loop method winning 1.3 ms vs 2.6 ms.
The more data, the bigger the advantage to using just plain old loops. With 170x100x170x120 it is 0.04 s vs 0.45 s.
Test code:
A = rand(17,100,17,120);
assert(all(method2(A)==method1(A),'all'))
timeit(@()method1(A))
timeit(@()method2(A))

function A_contracted = method1(A)
A_contracted = permute(sum( ...
    A.*((1:size(A,1)).'==reshape(1:size(A,3), 1, 1, [])), [1 3]), [2 4 1 3]);
end

function A_contracted = method2(A)
A_contracted = zeros(size(A,2),size(A,4));
for k = 1:size(A,1)
    A_contracted = A_contracted + squeeze(A(k,:,k,:));
end
end
My professor suggested another solution (in the following denoted by method3) involving reshape and matrix multiplication.
1. Take a unit matrix of the size of the contracted index.
2. Reshape it into a vector.
3. Reshape the tensor you want to contract accordingly.
4. Multiply the vector and the tensor.
5. Reshape the contracted tensor.
Sample code comparing to Luis's answer (method1) and Cris's answer (method2):
A = rand(17,10,17,10);
timeit(@()method1(A))
timeit(@()method2(A))
timeit(@()method3(A))

function A_contracted = method1(A)
A_contracted = permute(sum( ...
    A.*((1:size(A,1)).'==reshape(1:size(A,3), 1, 1, [])), [1 3]), [2 4 1 3]);
end

function A_contracted = method2(A)
A_contracted = zeros(size(A,2),size(A,4));
for k = 1:size(A,1)
    A_contracted = A_contracted + squeeze(A(k,:,k,:));
end
end

function A_contracted = method3(A)
sa_1 = size(A,1);
Unity = eye(size(A,1));
Unity = reshape(Unity, [1, sa_1*sa_1]);
A1 = permute(A, [1,3,2,4]);
A2 = reshape(A1, [sa_1*sa_1, size(A1,3)*size(A1,4)]);
UnA = Unity*A2;
A_contracted = reshape(UnA, [size(A1,3), size(A1,4)]);
end
method3 dominates for small dimensions, beating both method1 and method2 by an order of magnitude, and it beats method1 for larger dimensions as well; however, for larger dimensions it is itself beaten by the loop-based method2 by about an order of magnitude.
method3 has the (somewhat personal) advantage of being more intuitive for the application in my physics course in the sense that a contraction is not really in the tensor itself, but with respect to a metric. method3 may be easily adapted to incorporate this feature.
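A hypothetical sketch of that adaptation, replacing the identity matrix with a metric g (the diagonal Minkowski-style metric below is made up purely for illustration; g = eye(n) recovers the plain contraction of method3):

```matlab
A = rand(4,10,4,12);            % contract dims 1 and 3 (both of size 4)
g = diag([1 -1 -1 -1]);         % hypothetical metric; eye(4) gives the plain trace
n = size(A,1);
gRow = reshape(g, [1, n*n]);    % flatten the metric into a row vector
A1 = permute(A, [1,3,2,4]);     % bring the two contracted indices together
A2 = reshape(A1, [n*n, size(A1,3)*size(A1,4)]);
% gRow*A2 computes sum_{i,j} g(i,j) * A(i,:,j,:) in one matrix product
A_contracted = reshape(gRow*A2, [size(A1,3), size(A1,4)]);
```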
Pretty easy:
squeeze(sum(sum(a,3),1))
sum(a,n) sums over the nth dimension of the array, and squeeze removes any singleton dimensions. (Note, though, that this sums dimensions 1 and 3 independently rather than pairing equal indices, so it is not the trace-like contraction the question asks for.)

element by element matrix multiplication in Matlab

So I have the following matrices:
A = [1 2 3; 4 5 6];
B = [0.5 2 3];
I'm writing a function in MATLAB that will allow me to multiply a vector and a matrix by element as long as the number of elements in the vector matches the number of columns. In A there are 3 columns:
1 2 3
4 5 6
B also has 3 elements so this should work. I'm trying to produce the following output based on A and B:
0.5 4 9
2 10 18
My code is below. Does anyone know what I'm doing wrong?
function C = lab11(mat, vec)
C = zeros(2,3);
[a, b] = size(mat);
[c, d] = size(vec);
for i = 1:a
    for k = 1:b
        for j = 1
            C(i,k) = C(i,k) + A(i,j) * B(j,k);
        end
    end
end
end
MATLAB already has functionality to do this in the bsxfun function. bsxfun will take two matrices and duplicate singleton dimensions until the matrices are the same size, then perform a binary operation on the two matrices. So, for your example, you would simply do the following:
C = bsxfun(@times,mat,vec);
Referencing MrAzzaman, bsxfun is the way to go with this. However, judging from your function name, this looks like it's homework, so let's stick with what you have originally. You only need two for loops: the inner loop indexes into both the vector and the columns of the matrix at the same time, while the outermost loop accesses the rows of the matrix. In addition, you are referencing A and B, variables that don't exist in your code. You are also initializing the output matrix C to always be 2 x 3; you want it to be the same size as mat. I also removed your check of the vector's size because you weren't doing anything with the result.
As such:
function C = lab11(mat, vec)
[a, b] = size(mat);
C = zeros(a,b);
for i = 1:a
    for k = 1:b
        C(i,k) = mat(i,k) * vec(k);
    end
end
end
Take special note of what I did: the outermost for loop accesses the rows of mat, while the innermost loop accesses the columns of mat as well as the elements of vec. Bear in mind that the number of columns of mat needs to equal the number of elements in vec; you should probably check for this in your code.
If you don't like the bsxfun approach, one alternative is to take the vector vec and make a matrix out of it that is the same size as mat, by stacking vec on top of itself once for each row of mat. After this, you can do element-by-element multiplication. You can do this stacking with repmat, which repeats a vector or matrix a given number of times in any dimension(s) you want. As such, your function would be simplified to:
function C = lab11(mat, vec)
rows = size(mat, 1);
vec_mat = repmat(vec, rows, 1);
C = mat .* vec_mat;
end
However, I would personally go with the bsxfun route. bsxfun basically does what the repmat paradigm does under the hood. Internally, it ensures that both of your inputs have the same size. If it doesn't, it replicates the smaller array / matrix until it is the same size as the larger array / matrix, then applies an element-by-element operation to the corresponding elements in both variables. bsxfun stands for Binary Singleton EXpansion FUNction, which is a fancy way of saying exactly what I just talked about.
Therefore, your function is further simplified to:
function C = lab11(mat, vec)
C = bsxfun(@times, mat, vec);
end
Good luck!
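As a final note, if you are on R2016b or newer, implicit expansion performs the singleton expansion automatically, so the whole function body shrinks to a single element-wise multiply; a sketch under that version assumption:

```matlab
function C = lab11(mat, vec)
% Implicit expansion (R2016b+) expands the 1-by-N row vector vec
% along the rows of mat automatically, like bsxfun(@times, ...).
C = mat .* vec;
end
```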

Matlab: element 3D matrices multiplication

I have two matrices: B with size 9x100x51 and K with size 34x9x100. I want to multiply all of K(34) with each one of B(9) so as to have a final matrix G with size 34x9x100x51.
For example: the element G(:,5,60,25) is composed as follow
G(:,5,60,25)=K(:,5,60)*B(5,60,25).
I hope that the example helps to understand what I want to do.
Thank you
Any time you find yourself writing nested loops in MATLAB, there's a good chance you can speed up quite a bit using the built-in vectorized forms of the functions. The code typically ends up quite a bit shorter, too (but often less immediately clear to a reader, so comment your code!).
In this case, does avoiding the nested loops make a difference? Absolutely! Let's get to work. @slayton has provided a 3-loop solution. We can get faster.
Restating the problem a bit, B has 51 9x100 matrices and K has 34 9x100 matrices. For each combination of 51x34, you want to element-wise multiply the respective 9x100 matrices from B and K.
Element-wise multiplication is a great job for bsxfun, so we can conceptually reduce this problem to working along two dimensions (the third dimension of B, first dimension of K):
Initial, two-loop solution:
B = rand(9,100,51);
K = rand(34,9,100);
G = nan(34,9,100,51);
for b = 1:size(B,3)
    for k = 1:size(K,1)
        G(k,:,:,b) = bsxfun(@times, B(:,:,b), squeeze(K(k,:,:)));
    end
end
Ok, two loops is making progress. Can we do better? Well, let's recognize that the matrices B and K can be replicated along the appropriate dimensions, then element-wise multiplied all at once.
B = rand(9,100,51);
K = rand(34,9,100);
B2 = repmat(permute(B,[4 1 2 3]), [size(K,1) 1 1 1]);
K2 = repmat(K, [1 1 1 size(B,3)]);
G = bsxfun(@times, B2, K2);
So, how do the solutions compare speed-wise? I tested on the Octave online utility, and didn't include the time to generate the initial B and K matrices. I did include the time to preallocate the G matrix for the solutions that needed preallocation. The code is below.
3 loops (@slayton's answer): 4.024471 s
2 loop solution: 1.616120 s
0-loop repmat/bsxfun solution: 1.211850 s
0-loop repmat/bsxfun solution, no temporaries: 0.605838 s
Caveat: The timing may depend quite a bit on your machine, I wouldn't trust the online utility for great timing tests. Changing the order of when the loops were executed (even taking care not to reuse variables and mess up time of allocation) did change things a bit, namely the 2-loop solution was sometimes as fast as the no-loop solution with temporaries stored. However, the more vectorized you can get, the better you will be.
Here's the code for the speed test:
B = rand(9,100,51);
K = rand(34,9,100);

tic
G1 = nan(34,9,100,51);
for ii = 1:size(B,1)
    for jj = 1:size(B,2)
        for kk = 1:size(B,3)
            G1(:, ii, jj, kk) = K(:,ii,jj) .* B(ii,jj,kk);
        end
    end
end
t = toc;
printf('Time for 3 loop solution: %f\n', t)

tic
G2 = nan(34,9,100,51);
for b = 1:size(B,3)
    for k = 1:size(K,1)
        G2(k,:,:,b) = bsxfun(@times, B(:,:,b), squeeze(K(k,:,:)));
    end
end
t = toc;
printf('Time for 2 loop solution: %f\n', t)

tic
B2 = repmat(permute(B,[4 1 2 3]), [size(K,1) 1 1 1]);
K2 = repmat(K, [1 1 1 size(B,3)]);
G3 = bsxfun(@times, B2, K2);
t = toc;
printf('Time for 0-loop repmat/bsxfun solution: %f\n', t)

tic
G4 = bsxfun(@times, repmat(permute(B,[4 1 2 3]), [size(K,1) 1 1 1]), repmat(K, [1 1 1 size(B,3)]));
t = toc;
printf('Time for 0-loop repmat/bsxfun solution, no temporaries: %f\n', t)
disp('Are the results equal?')
isequal(G1,G2)
isequal(G1,G3)
Time for 3 loop solution: 4.024471
Time for 2 loop solution: 1.616120
Time for 0-loop repmat/bsxfun solution: 1.211850
Time for 0-loop repmat/bsxfun solution, no temporaries: 0.605838
Are the results equal?
ans = 1
ans = 1
You can do this with nested loops, although it probably won't be terribly fast:
B = rand(9,100,51);
K = rand(34,9,100);
G = nan(34,9,100,51);
for ii = 1:size(B,1)
    for jj = 1:size(B,2)
        for kk = 1:size(B,3)
            G(:, ii, jj, kk) = K(:,ii,jj) .* B(ii,jj,kk);
        end
    end
end
It's been a long day and my brain is a bit fried; kudos to anyone who can improve this!

MATLAB - avoiding loops to create a matrix based on the elements of other vectors

Suppose I have vectors x,y,z, of lengths n,m,l. I want to create a cell matrix Q using the elements of those vectors. Naively one could use a for loop as so:
for i = 1:n
    for j = 1:m
        for k = 1:l
            Q{i,j,k} = someFunction(x(i), y(j), z(k));
        end
    end
end
Each element of Q is a vector.
Is there a more elegant (and probably less slow) way to do this?
x = [1 2 3 4];
y = [5 6];
z = [7 8 9];
[X, Y, Z] = meshgrid(x,y,z);
someFunc = @(a,b,c) [a b c]; % test function; use whatever you want
Q = arrayfun(someFunc,X,Y,Z,'UniformOutput',false);
Q{1,1,1} % output: [1 5 7]
If someFunction is defined elsewhere, use arrayfun(@someFunction,X,Y,Z,'UniformOutput',false); to get a handle to it. (arrayfun uses each element of the arguments as args to the function handle you provide; it, and the related cellfun, are key in avoiding loops.)
If someFunction is designed this way, then it does not look possible.
You should change someFunction to take matrices and return a matrix. Then the problem becomes writing the specific someFunction using matrix operations. Although a generic solution to the original problem seems impossible, when you consider a specific function (like the one I suggested here) it can be done.
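To illustrate, suppose someFunction were built from element-wise operations, say the made-up scalar function x + y*z. Then ndgrid evaluates it over the whole domain with no cell arrays at all (the function and the sample vectors below are purely illustrative):

```matlab
x = 1:4; y = 1:2; z = 1:3;   % lengths n, m, l
[X, Y, Z] = ndgrid(x, y, z); % n-by-m-by-l grids of the input values
Q = X + Y.*Z;                % hypothetical someFunction, fully vectorized
% Q(i,j,k) now equals x(i) + y(j)*z(k), with no cell array needed.
```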

How can I apply a function to every row/column of a matrix in MATLAB?

You can apply a function to every item in a vector by saying, for example, v + 1, or you can use the function arrayfun. How can I do it for every row/column of a matrix without using a for loop?
Many built-in operations like sum and prod are already able to operate across rows or columns, so you may be able to refactor the function you are applying to take advantage of this.
If that's not a viable option, one way to do it is to collect the rows or columns into cells using mat2cell or num2cell, then use cellfun to operate on the resulting cell array.
As an example, let's say you want to sum the columns of a matrix M. You can do this simply using sum:
M = magic(10); %# A 10-by-10 matrix
columnSums = sum(M, 1); %# A 1-by-10 vector of sums for each column
And here is how you would do this using the more complicated num2cell/cellfun option:
M = magic(10); %# A 10-by-10 matrix
C = num2cell(M, 1); %# Collect the columns into cells
columnSums = cellfun(@sum, C); %# A 1-by-10 vector of sums for each cell
You may want the more obscure MATLAB function bsxfun. From the MATLAB documentation, bsxfun "applies the element-by-element binary operation specified by the function handle fun to arrays A and B, with singleton expansion enabled."
@gnovice stated above that sum and other basic functions already operate along the first non-singleton dimension (i.e., rows if there's more than one row, columns if there's only one row, or higher dimensions if the lower dimensions all have size == 1). However, bsxfun works for any function, including (and especially) user-defined functions.
For example, let's say you have a matrix A and a row vector B. E.g., let's say:
A = [1 2 3;
4 5 6;
7 8 9]
B = [0 1 2]
You want a function power_by_col which returns a matrix C in which each element of A is raised to the power of the corresponding column entry of B.
From the above example, C is a 3x3 matrix:
C = [1^0 2^1 3^2;
4^0 5^1 6^2;
7^0 8^1 9^2]
i.e.,
C = [1 2 9;
1 5 36;
1 8 81]
You could do this the brute force way using repmat:
C = A.^repmat(B, size(A, 1), 1)
Or you could do this the classy way using bsxfun, which internally takes care of the repmat step:
C = bsxfun(@(x,y) x.^y, A, B)
So bsxfun saves you some steps (you don't need to explicitly calculate the dimensions of A). However, in some informal tests of mine, it turns out that repmat is roughly twice as fast if the function to be applied (like my power function, above) is simple. So you'll need to choose whether you want simplicity or speed.
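As a later addendum: since R2016b, implicit expansion makes both the repmat and the bsxfun versions unnecessary for element-wise operators such as .^, so the same example reduces to:

```matlab
A = [1 2 3;
     4 5 6;
     7 8 9];
B = [0 1 2];
% Implicit expansion (R2016b+): the row vector B expands along the rows of A.
C = A.^B;
```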
I can't comment on how efficient this is, but here's a solution:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :))
applyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1))'
% Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @sum;
applyToRows(myFunc, myMx)
Building on Alex's answer, here is a more generic function:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :));
newApplyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1), 'UniformOutput', false)';
takeAll = @(x) reshape([x{:}], size(x{1},2), size(x,1))';
genericApplyToRows = @(func, matrix) takeAll(newApplyToRows(func, matrix));
Here is a comparison between the two functions:
>> % Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @(x) [mean(x), std(x), sum(x), length(x)];
>> genericApplyToRows(myFunc, myMx)
ans =
2 1 6 3
5 1 15 3
8 1 24 3
>> applyToRows(myFunc, myMx)
??? Error using ==> arrayfun
Non-scalar in Uniform output, at index 1, output 1.
Set 'UniformOutput' to false.
Error in ==> @(func,matrix)arrayfun(applyToGivenRow(func,matrix),1:size(matrix,1))'
For completeness/interest I'd like to add that MATLAB does have a function that allows you to operate on data per-row rather than per-element. It is called rowfun (http://www.mathworks.se/help/matlab/ref/rowfun.html), but the only "problem" is that it operates on tables (http://www.mathworks.se/help/matlab/ref/table.html) rather than matrices.
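A minimal sketch of that workflow (the two-column data and the hypot function are arbitrary choices for illustration): wrap the matrix in a table, apply rowfun, and read the result back out of the output table:

```matlab
M = [3 4; 6 8];                                       % two rows to operate on
T = table(M(:,1), M(:,2), 'VariableNames', {'x','y'});
% rowfun passes each row's variable values as separate arguments to the handle.
R = rowfun(@(x,y) hypot(x,y), T, 'OutputVariableNames', 'r');
rowNorms = R.r;                                       % per-row results as a column
```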
Adding to the evolving nature of the answer to this question: starting with R2016b, MATLAB will implicitly expand singleton dimensions, removing the need for bsxfun in many cases.
From the R2016b release notes:
Implicit Expansion: Apply element-wise operations and functions to arrays with automatic expansion of dimensions of length 1
Implicit expansion is a generalization of scalar expansion. With scalar expansion, a scalar expands to be the same size as another array to facilitate element-wise operations. With implicit expansion, the element-wise operators and functions listed here can implicitly expand their inputs to be the same size, as long as the arrays have compatible sizes. Two arrays have compatible sizes if, for every dimension, the dimension sizes of the inputs are either the same or one of them is 1. See Compatible Array Sizes for Basic Operations and Array vs. Matrix Operations for more information.
Element-wise arithmetic operators — +, -, .*, .^, ./, .\
Relational operators — <, <=, >, >=, ==, ~=
Logical operators — &, |, xor
Bit-wise functions — bitand, bitor, bitxor
Elementary math functions — max, min, mod, rem, hypot, atan2, atan2d
For example, you can calculate the mean of each column in a matrix A, and then subtract the vector of mean values from each column with A - mean(A).
Previously, this functionality was available via the bsxfun function. It is now recommended that you replace most uses of bsxfun with direct calls to the functions and operators that support implicit expansion. Compared to using bsxfun, implicit expansion offers faster speed, better memory usage, and improved readability of code.
None of the above answers worked "out of the box" for me. However, the following function, obtained by combining the ideas of the other answers, works:
apply_func_2_cols = @(f,M) cell2mat(cellfun(f, num2cell(M,1), 'UniformOutput', 0));
It takes a function f and applies it to every column of the matrix M.
So for example:
f = @(v) [0 1;1 0]*v + [0 0.1]';
apply_func_2_cols(f,[0 0 1 1;0 1 0 1])
ans =
0.00000 1.00000 0.00000 1.00000
0.10000 0.10000 1.10000 1.10000
With recent versions of MATLAB, you can use the table data structure to your advantage. There's even a rowfun operation, but I found it easier just to do this:
a = magic(6);
incrementRow = cell2mat(cellfun(@(x) x+1,table2cell(table(a)),'UniformOutput',0))
or here's an older one I had that doesn't require tables, for older Matlab versions.
dataBinner = cell2mat(arrayfun(@(x) Binner(a(x,:),2)',1:size(a,1),'UniformOutput',0)')
The accepted answer seems to be to convert to cells first and then use cellfun to operate over all of the cells. I do not know the specific application, but in general I would think using bsxfun to operate over the matrix would be more efficient. Basically bsxfun applies an operation element-by-element across two arrays. So if you wanted to multiply each item in an n x 1 vector by each item in an m x 1 vector to get an n x m array, you could use:
vec1 = [ stuff ]; % n x 1 vector
vec2 = [ stuff ]; % m x 1 vector
result = bsxfun(@times, vec1, vec2.');
This will give you matrix called result wherein the (i, j) entry will be the ith element of vec1 multiplied by the jth element of vec2.
You can use bsxfun for all sorts of built-in functions, and you can declare your own. The documentation has a list of many built-in functions, but basically you can name any function that accepts two arrays (vector or matrix) as arguments and get it to work.
I like splitapply, which allows a function to be applied to the columns of A using splitapply(fun,A,1:size(A,2)).
For example
A = magic(5);
B = splitapply(@(x) x+1, A, 1:size(A,2));
C = splitapply(@std, A, 1:size(A,2));
To apply the function to the rows, you could use
splitapply(fun, A', 1:size(A,1))';
(My source for this solution is here.)
Stumbled upon this question/answer while seeking how to compute the row sums of a matrix.
I would just like to add that MATLAB's sum function actually has support for summing along a given dimension of a standard matrix with two dimensions.
So to calculate the column sums do:
colsum = sum(M) % or sum(M, 1)
and for the row sums, simply do
rowsum = sum(M, 2)
My bet is that this is faster than both programming a for loop and converting to cells :)
All this can be found in the MATLAB help for sum.
If you know the length of your rows, you can do something like this:
a = rand(9,3);
b = rand(9,3);
arrayfun(@(x1,x2,y1,y2,z1,z2) line([x1,x2],[y1,y2],[z1,z2]), ...
    a(:,1),b(:,1),a(:,2),b(:,2),a(:,3),b(:,3))
arrayfun(#(x1,x2,y1,y2,z1,z2) line([x1,x2],[y1,y2],[z1,z2]) , a(:,1),b(:,1),a(:,2),b(:,2),a(:,3),b(:,3) )