Get binomial coefficients - matlab

In an attempt to vectorize a particular piece of Matlab code, I could not find a straightforward function to generate a list of the binomial coefficients. The best I could find was nchoosek, but for some inexplicable reason this function only accepts integers (not vectors of integers). My current solution looks like this:
mybinom = @(n) arrayfun(@nchoosek, n*ones(1,n), 1:n)
This generates the set of binomial coefficients for a given value of n. However, since the binomial coefficients are always symmetric, I know that I am doing twice as much work as necessary. I'm sure that I could create a solution that exploits the symmetry, but I'm sure that it would be at the expense of readability.
Is there a more elegant solution than this, perhaps using a Matlab function that I am not aware of? Note that I am not interested in using the symbolic toolbox.

If you want to minimize operations you can go along these lines:
n = 6;
k = 1:n;
result = [1 cumprod((n-k+1)./k)]
>> result
result =
     1     6    15    20    15     6     1
This requires very few operations per coefficient, because each coefficient is obtained by exploiting the previously computed one.
You can reduce the number of operations by approximately half if you take into account the symmetry:
m1 = floor(n/2);
m2 = ceil(n/2);
k = 1:m2;
result = [1 cumprod((n-k+1)./k)];
result(n+1:-1:m1+2) = result(1:m2);
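As a quick sanity check (a hypothetical verification, not part of the original answer), the halved computation can be compared against nchoosek directly:
expected = arrayfun(@(kk) nchoosek(n, kk), 0:n); % reference values
max(abs(result - expected)) % 0 for n = 6; tiny roundoff is possible for larger n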

What about a modified version of Luis Mendo's solution - but in logarithms:
n = 1e4;
m1 = floor(n/2);
m2 = ceil(n/2);
k = 1:m2;
% Attempt to compute real value
out0 = [1 cumprod((n-k+1)./k)];
out0(n+1:-1:m1+2) = out0(1:m2);
% In logarithms
out1 = [0 cumsum((log(n-k+1)) - log(k))];
out1(n+1:-1:m1+2) = out1(1:m2);
plot(log(out0) - out1, 'o-')
The advantage of working with logarithms is that you can set n = 1e4; and still obtain a good approximation of the real value (nchoosek(1e4, 5e3) returns Inf and this is not a good approximation at all!).
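As a rough illustration (hypothetical numbers, assuming the log-domain result above), quantities that overflow in linear scale stay perfectly usable in log scale:
n = 1e4;
k = 1:n;
logC = [0 cumsum(log(n-k+1) - log(k))]; % log of the binomial coefficients
logC(n/2 + 1)                % log(nchoosek(1e4, 5e3)), about 6.9e3
exp(logC(n/2+1) - logC(n/2)) % ratio of neighbouring coefficients: (n-5000+1)/5000 = 1.0002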
EDIT following horchler's comment
You can use the gammaln function to obtain the same result, but it's not faster. The two approximations seem to be quite different:
n = 1e7;
m1 = floor(n/2);
m2 = ceil(n/2);
k = 1:m2;
% In logarithms
tic
out1 = [0 cumsum((log(n-k+1)) - log(k))];
out1(n+1:-1:m1+2) = out1(1:m2);
toc
% Elapsed time is 0.912649 seconds.
tic
k = 0:m2;
out2 = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1);
out2(n+1:-1:m1+2) = out2(1:m2);
toc
% Elapsed time is 1.020188 seconds.
tmp = out2 - out1;
plot(tmp, '.')
prctile(tmp, [0 2.5 25 50 75 97.5 100])
% 1.0e-006 *
% -0.2217 -0.1462 -0.0373 0.0363 0.1225 0.2943 0.3846
Is adding three gammaln worse than adding n logarithms? Or vice versa?

This works for Octave only
You can use the bincoeff function.
Example: bincoeff(5, 0:5)
EDIT :
The only improvement I can think of goes like this. Maybe you already thought of this trivial solution and didn't like it.
% Calculate only the first half
mybinomhalf = @(n) arrayfun(@nchoosek, n*ones(1,n/2+1), 0:n/2)
% Pad your array symmetrically
mybinom = @(n) padarray(mybinomhalf(n), [0 n/2], 'symmetric', 'post')
% I couldn't test it and this line may not work
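As a toolbox-free variant (a hedged sketch, since padarray needs the Image Processing Toolbox and n/2 is not an integer for odd n), one could mirror the first half explicitly:
n = 7; % odd n to exercise the tricky case
half = arrayfun(@nchoosek, n*ones(1,floor(n/2)+1), 0:floor(n/2));
full = [half fliplr(half(1:end-1+mod(n,2)))]; % drop the middle element once when n is even
isequal(full, arrayfun(@nchoosek, n*ones(1,n+1), 0:n)) % returns 1 (true)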

Related

Performance improvement for multiple uses of polyval function

I have a simple performance question on using the polyval function in Matlab.
Currently, I have a vector x that can be quite long (>1000 scalars). I want to apply a different polynomial form to each element of x.
The polynomial forms are stored in a 2D array and applied in a loop as in the code below. The code is relatively fast, as polyval is optimized, but the loop can be lengthy, and performance is paramount since this is an objective function that may be evaluated thousands of times in a process.
Any idea on how to improve the performance?
Thanks
% ---------- Objective Function ------------------
function [obj] = obj(x, poly_objective)
    polyvalue = zeros(length(x),1);
    for index = 1:length(x)
        polyvalue(index) = polyval(poly_objective(index,:), x(index));
    end
    obj = -sum(polyvalue);
end
% -------------------------------------------------
You can linearize your for loop manually, here is an example:
p = [3,2,1;
     5,1,3];            % polynomial coefficients
x = [5,6].';            % the query points
d = size(p,2)-1:-1:0;   % the power factors
res = sum(x.^d.*p,2);   % evaluate all the polynomials without a for loop
with
res =
86
189
Also, if you want to evaluate each x value for each polynomial, you can use:
res = x.^d*p.'; %using only matrix multiplication
with
res =
         p1    p2
   x1    86   133
   x2   121   189
The quickest way is likely to evaluate the different polynomials directly, removing the loop (as shown by obchardon or Luis). However, here's a note on polyval performance...
If you type edit polyval in the command window, you can see the source for the polyval function. In particular there is the following conditional evaluation near the top:
nc = length(p);
if isscalar(x) && (nargin < 3) && nc>0 && isfinite(x) && all(isfinite(p(:)))
% Make it scream for scalar x. Polynomial evaluation can be
% implemented as a recursive digital filter.
y = filter(1,[1 -x],p);
y = y(nc);
return
end
I think the "Make it scream" comment is the developer(s) telling us this is a very quick route through the function! As an aside, it's also the best comment I've found in a MATLAB built-in.
So let's try to satisfy the conditions for this if statement...
✓ isscalar(x)
✓ nargin < 3
✓ length(p) > 0
✓ isfinite(x)
✓ all(isfinite(p(:)))
Brilliant, so this is always the evaluation you're using. You might find speed improvements in removing these 5 checks, and simply doing this instead of polyval. In terms of your variables, this looks like so:
y = filter(1,[1 -x(index)],poly_objective(index,:));
polyvalue(index) = y(size(poly_objective,2));
% Note: compute size(poly_objective,2) outside your loop
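Putting that together, a minimal sketch of the loop with the filter shortcut inlined (same variables as the question) might look like this:
nc = size(poly_objective, 2); % hoisted out of the loop
polyvalue = zeros(length(x), 1);
for index = 1:length(x)
    % Horner's scheme as a recursive digital filter, as in polyval's fast path
    y = filter(1, [1 -x(index)], poly_objective(index,:));
    polyvalue(index) = y(nc);
end
obj = -sum(polyvalue);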
I find your question a little confusing, but I think this does what you want:
polyvalue = sum(poly_objective .* x(:).^(numel(x)-1:-1:0), 2);
Note that the above uses implicit expansion. For MATLAB versions before R2016b, use bsxfun:
polyvalue = sum(poly_objective .* bsxfun(@power, x(:), (numel(x)-1:-1:0)), 2);
Example
Random data:
>> x = rand(1,4);
>> poly_objective = randi(9,4,4);
Your code:
>> polyvalue = zeros(length(x),1);
for index = 1:length(x)
    polyvalue(index) = polyval(poly_objective(index,:), x(index));
end
>> polyvalue
polyvalue =
13.545710504297881
16.286929525147158
13.289183623920710
5.777980886766799
My code:
>> polyvalue = sum(poly_objective .* x(:).^(numel(x)-1:-1:0), 2)
polyvalue =
13.545710504297881
16.286929525147158
13.289183623920710
5.777980886766799

Serious performance issue with iterating simulations

I recently stumbled upon a performance problem while implementing a simulation algorithm. I managed to find the bottleneck function (seemingly, it's the internal call to arrayfun that slows everything down):
function sim = simulate_frequency(the_f,k,n)
    r = rand(1,n);
    x = arrayfun(@(x) find(x <= the_f,1,'first'),r);
    sim = (histcounts(x,[1:k Inf]) ./ n).';
end
It is being used in other parts of code as follows:
h0 = zeros(1,sims);
for i = 1:sims
    p = simulate_frequency(the_f,k,n);
    h0(i) = max(abs(p - the_p));
end
Here are some possible values:
% Test Case 1
sims = 10000;
the_f = [0.3010; 0.4771; 0.6021; 0.6990; 0.7782; 0.8451; 0.9031; 0.9542; 1.0000];
k = 9;
n = 95;
% Test Case 2
sims = 10000;
the_f = [0.0413; 0.0791; 0.1139; 0.1461; 0.1760; 0.2041; 0.2304; 0.2552; 0.2787; 0.3010; 0.3222; 0.3424; 0.3617; 0.3802; 0.3979; 0.4149; 0.4313; 0.4471; 0.4623; 0.4771; 0.4913; 0.5051; 0.5185; 0.5314; 0.5440; 0.5563; 0.5682; 0.5797; 0.5910; 0.6020; 0.6127; 0.6232; 0.6334; 0.6434; 0.6532; 0.6627; 0.6720; 0.6812; 0.6901; 0.6989; 0.7075; 0.7160; 0.7242; 0.7323; 0.7403; 0.7481; 0.7558; 0.7634; 0.7708; 0.7781; 0.7853; 0.7923; 0.7993; 0.8061; 0.8129; 0.8195; 0.8260; 0.8325; 0.8388; 0.8450; 0.8512; 0.8573; 0.8633; 0.8692; 0.8750; 0.8808; 0.8864; 0.8920; 0.8976; 0.9030; 0.9084; 0.9138; 0.9190; 0.9242; 0.9294; 0.9344; 0.9395; 0.9444; 0.9493; 0.9542; 0.9590; 0.9637; 0.9684; 0.9731; 0.9777; 0.9822; 0.9867; 0.9912; 0.9956; 1.000];
k = 90;
n = 95;
The scalar sims must be in the range 1000 to 1000000. The vector of cumulated frequencies the_f never contains more than 100 elements. The scalar k represents the number of elements in the_f. Finally, the scalar n represents the number of elements in the empirical sample vector, and can be very large (up to 10000 elements, as far as I can tell).
Any clue about how to improve the computation time of this process?
This seems to be slightly faster for me in the second test case, not the first. The time differences might be larger for longer the_f and larger values of n.
function sim = simulate_frequency(the_f,k,n)
    r = rand(1,n);
    [row,col] = find(r <= the_f); % Implicit singleton expansion going on here!
    [~,ind] = unique(col,'first');
    x = row(ind);
    sim = (histcounts(x,[1:k Inf]) ./ n).';
end
I'm using implicit singleton expansion in r <= the_f, use bsxfun if you have an older version of MATLAB (but you know the drill).
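For completeness, the bsxfun form of that comparison would be:
[row,col] = find(bsxfun(@le, r, the_f));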
find then returns the row and column of every location where r is less than or equal to the_f. unique finds the indices of the first element of each column in that result.
Credit: Andrei Bobrov over on MATLAB Answers
Another option (derived from this other answer) is a bit shorter but also a bit more obscure IMO:
mask = r <= the_f;
[x,~] = find(mask & (cumsum(mask,1)==1));
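To see why the extra & mask term is needed, consider a tiny hypothetical example: cumsum(mask,1)==1 stays true after the first true entry until a second one appears, and & mask cuts those extras away:
mask = logical([1 0; 0 1; 1 1]);
cumsum(mask,1)==1          % column 1: true at rows 1 and 2 (count stays at 1)
mask & (cumsum(mask,1)==1) % true only at the first true entry of each column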
If I wanted performance, I would avoid arrayfun. Even this for loop is faster:
function sim = simulate_frequency(the_f,k,n)
    r = rand(1,n);
    for i = 1:numel(r)
        x(i) = find(r(i)<the_f,1,'first');
    end
    sim = (histcounts(x,[1:k Inf]) ./ n).';
end
Running 10000 sims with the first set of the sample data gives the following timing.
Your arrayfun function:
Elapsed time is 2.848206 seconds.
The for loop function:
Elapsed time is 0.938479 seconds.
Inspired by Cris Luengo's answer, I suggest the following:
function sim = simulate_frequency(the_f,k,n)
    r = rand(1,n);
    x = sum(r > the_f)+1;
    sim = (histcounts(x,[1:k Inf]) ./ n)';
end
Time:
Elapsed time is 0.264146 seconds.
You can use histcounts with r as its input:
r = rand(1,n);
sim = (histcounts(r,[-inf; the_f]) ./ n).';
If histc is used instead of histcounts, the whole simulation can be vectorized:
r = rand(n,sims);
p = histc(r, [-inf; the_f], 1);
p = [p(1:end-2,:); sum(p(end-1:end,:))]./n;
h0 = max(abs(p-the_p(:))); % h0 = max(abs(bsxfun(@minus,p,the_p(:)))); for pre-R2016b

Vectorization - Sum and Bessel function

Can anyone help vectorize this Matlab code? The specific difficulty is the sum and the Bessel functions with vector inputs.
Thank you!
N = 3;
rho_g = linspace(1e-3,1,N);
phi_g = linspace(0,2*pi,N);
n = 1:3;
tau = [1 2.*ones(1,length(n)-1)];
for ii = 1:length(rho_g)
    for jj = 1:length(phi_g)
        % Coordinates
        rho_o = rho_g(ii);
        phi_o = phi_g(jj);
        % Factors
        fc = cos(n.*(phi_o-phi_s));
        fs = sin(n.*(phi_o-phi_s));
        Ez_t(ii,jj) = sum(tau.*besselj(n,k(3)*rho_s).*besselh(n,2,k(3)*rho_o).*fc);
    end
end
You could try to vectorize this code, probably with some bsxfun gymnastics, but the result would be hard to understand, and it is questionable whether it would run any faster, since your code already uses vector math in the inner loop (even though your vectors only have length 3). Code like that tends to be very difficult to read, so you or a colleague will have no idea what it does when you look at it in two years' time.
Before wasting time on vectorization, it is much more important that you learn about loop invariant code motion, which is easy to apply to your code. Some observations:
you do not use fs, so remove that.
the term tau.*besselj(n,k(3)*rho_s) does not depend on any of your loop variables ii and jj, so it is constant. Calculate it once before your loop.
you should probably pre-allocate the matrix Ez_t.
the only terms that change during the loop are fc, which depends on jj, and besselh(n,2,k(3)*rho_o), which depends on ii. I guess that the latter costs much more time to calculate, so it is better not to calculate it N*N times in the inner loop, but only N times in the outer loop. If the calculation based on jj took more time, you could swap the for-loops over ii and jj, but that does not seem to be the case here.
The resulting code would look something like this (untested):
N = 3;
rho_g = linspace(1e-3,1,N);
phi_g = linspace(0,2*pi,N);
n = 1:3;
tau = [1 2.*ones(1,length(n)-1)];
% Constant part, does not depend on ii and jj, so calculate it only once!
temp1 = tau.*besselj(n,k(3)*rho_s);
Ez_t = nan(length(rho_g), length(phi_g)); % preallocate space
for ii = 1:length(rho_g)
    % Calculate stuff that depends on ii only
    rho_o = rho_g(ii);
    temp2 = besselh(n,2,k(3)*rho_o);
    for jj = 1:length(phi_g)
        phi_o = phi_g(jj);
        fc = cos(n.*(phi_o-phi_s));
        Ez_t(ii,jj) = sum(temp1.*temp2.*fc);
    end
end
Initialization -
N = 3;
rho_g = linspace(1e-3,1,N);
phi_g = linspace(0,2*pi,N);
n = 1:3;
tau = [1 2.*ones(1,length(n)-1)];
Nested-loops form (copied from your code and shown here for comparison only) -
for ii = 1:length(rho_g)
    for jj = 1:length(phi_g)
        % Coordinates
        rho_o = rho_g(ii);
        phi_o = phi_g(jj);
        % Factors
        fc = cos(n.*(phi_o-phi_s));
        fs = sin(n.*(phi_o-phi_s));
        Ez_t(ii,jj) = sum(tau.*besselj(n,k(3)*rho_s).*besselh(n,2,k(3)*rho_o).*fc);
    end
end
Vectorized solution -
%%// Term - 1
term1 = repmat(tau.*besselj(n,k(3)*rho_s),[N*N 1]);
%%// Term - 2
[n1,rho_g1] = meshgrid(n,rho_g);
term2_intm = besselh(n1,2,k(3)*rho_g1);
term2 = transpose(reshape(repmat(transpose(term2_intm),[N 1]),N,N*N));
%%// Term -3
angle1 = repmat(bsxfun(@times,bsxfun(@minus,phi_g,phi_s')',n),[N 1]);
fc = cos(angle1);
%%// Output
Ez_t = sum(term1.*term2.*fc,2);
Ez_t = transpose(reshape(Ez_t,N,N));
Points to note about this vectorization or code simplification:
fs doesn't change the output of the script, Ez_t, so it is removed for now.
The output is Ez_t, which requires three basic terms in the code: tau.*besselj(n,k(3)*rho_s), besselh(n,2,k(3)*rho_o), and fc. These are calculated separately for vectorization as term1, term2, and fc respectively.
All three terms are of size 1xN inside the loops. The two loops run N times each, for a total of NxN iterations, so each vectorized term must hold NxN times as much data as its in-loop counterpart.
That is the essence of the vectorization done here; the three expanded terms are term1, term2, and fc.
In order to give a self-contained answer, I'll copy the original initialization
N = 3;
rho_g = linspace(1e-3,1,N);
phi_g = linspace(0,2*pi,N);
n = 1:3;
tau = [1 2.*ones(1,length(n)-1)];
and generate some missing data (k(3), plus rho_s and phi_s with the same dimensions as n)
rho_s = rand(size(n));
phi_s = rand(size(n));
k(3) = rand(1);
then you can compute the same Ez_t with multidimensional arrays:
[RHO_G, PHI_G, N] = meshgrid(rho_g, phi_g, n);
[~, ~, TAU] = meshgrid(rho_g, phi_g, tau);
[~, ~, RHO_S] = meshgrid(rho_g, phi_g, rho_s);
[~, ~, PHI_S] = meshgrid(rho_g, phi_g, phi_s);
FC = cos(N.*(PHI_G - PHI_S));
FS = sin(N.*(PHI_G - PHI_S)); % not used
EZ_T = sum(TAU.*besselj(N, k(3)*RHO_S).*besselh(N, 2, k(3)*RHO_G).*FC, 3).';
You can check afterwards that both matrices are the same
norm(Ez_t - EZ_T)

How to obtain complexity cosine similarity in Matlab?

I have implemented cosine similarity in Matlab like this. In fact, I have a two-dimensional 50-by-50 matrix. To obtain the cosine similarity, I compare the items row by row.
for j = 1:50
    x = dat(j,:);
    for i = j+1:50
        y = dat(i,:);
        c = dot(x,y);
        sim = c/(norm(x,2)*norm(y,2));
    end
end
Is this correct? And what is the complexity (in big-O terms) of this approach?
Just a note on an efficient implementation of the same thing using vectorized and matrix-wise operations (which are optimized in MATLAB). This can have huge time savings for large matrices:
dat = randn(50, 50);
OP (double-for) implementation:
sim = zeros(size(dat));
nRow = size(dat,1);
for j = 1:nRow
    x = dat(j, :);
    for i = j+1:nRow
        y = dat(i, :);
        c = dot(x, y);
        sim(j, i) = c/(norm(x,2)*norm(y,2));
    end
end
Vectorized implementation:
normDat = sqrt(sum(dat.^2, 2)); % L2 norm of each row
datNorm = bsxfun(@rdivide, dat, normDat); % normalize each row
dotProd = datNorm*datNorm'; % dot-product vectorized (redundant!)
sim2 = triu(dotProd, 1); % keep unique upper triangular part
Comparisons for 1000 x 1000 matrix: (MATLAB 2013a, x64, Intel Core i7 960 @ 3.20GHz)
Elapsed time is 34.103095 seconds.
Elapsed time is 0.075208 seconds.
sum(sum(sim-sim2))
ans =
-1.224314766369880e-14
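On R2016b and newer, implicit expansion makes the bsxfun call unnecessary; a minimal sketch of the same computation:
datNorm = dat ./ sqrt(sum(dat.^2, 2)); % normalize each row (implicit expansion)
sim2 = triu(datNorm*datNorm.', 1);     % keep unique upper triangular part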
Better to end the outer loop at 49. Maybe you should also add an index to sim?
for j = 1:49
    x = dat(j,:);
    for i = j+1:50
        y = dat(i,:);
        c = dot(x,y);
        sim(j) = c/(norm(x,2)*norm(y,2));
    end
end
The complexity should be roughly O(n^2), shouldn't it?
Maybe you should have a look at correlation functions... I don't understand exactly what you want to compute, but it looks like you want to do something similar. There are built-in correlation functions in Matlab.

Multiply a 3D matrix with a 2D matrix

Suppose I have an AxBxC matrix X and a BxD matrix Y.
Is there a non-loop method by which I can multiply each of the C AxB matrices with Y?
As a personal preference, I like my code to be as succinct and readable as possible.
Here's what I would have done, though it doesn't meet your 'no-loops' requirement:
for m = 1:C
    Z(:,:,m) = X(:,:,m)*Y;
end
This results in an A x D x C matrix Z.
And of course, you can always pre-allocate Z to speed things up by using Z = zeros(A,D,C);.
You can do this in one line using the functions NUM2CELL to break the matrix X into a cell array and CELLFUN to operate across the cells:
Z = cellfun(@(x) x*Y,num2cell(X,[1 2]),'UniformOutput',false);
The result Z is a 1-by-C cell array where each cell contains an A-by-D matrix. If you want Z to be an A-by-D-by-C matrix, you can use the CAT function:
Z = cat(3,Z{:});
NOTE: My old solution used MAT2CELL instead of NUM2CELL, which wasn't as succinct:
[A,B,C] = size(X);
Z = cellfun(@(x) x*Y,mat2cell(X,A,B,ones(1,C)),'UniformOutput',false);
Here's a one-line solution (two if you want to split into 3rd dimension):
A = 2;
B = 3;
C = 4;
D = 5;
X = rand(A,B,C);
Y = rand(B,D);
% calculate result in one big matrix
Z = reshape(reshape(permute(X, [2 1 3]), [A B*C]), [B A*C])' * Y;
% split into third dimension
Z = permute(reshape(Z',[D A C]),[2 1 3]);
Hence now: Z(:,:,i) contains the result of X(:,:,i) * Y
Explanation:
The above may look confusing, but the idea is simple.
First, I take the slices along the third dimension of X and concatenate them vertically along the first dim:
XX = cat(1, X(:,:,1), X(:,:,2), ..., X(:,:,C))
... the difficulty was that C is a variable, hence you can't generalize that expression using cat or vertcat. Next we multiply this by Y:
ZZ = XX * Y;
Finally I split it back into the third dimension:
Z(:,:,1) = ZZ(1:2, :);
Z(:,:,2) = ZZ(3:4, :);
Z(:,:,3) = ZZ(5:6, :);
Z(:,:,4) = ZZ(7:8, :);
So you can see it only requires one matrix multiplication, but you have to reshape the matrix before and after.
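As a quick sanity check (using the A, B, C, D, X, Y defined above), the result matches the straightforward loop up to floating-point roundoff:
Zloop = zeros(A, D, C);
for m = 1:C
    Zloop(:,:,m) = X(:,:,m)*Y;
end
max(abs(Z(:) - Zloop(:))) % 0, or within floating-point roundoff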
I'm approaching the exact same issue, with an eye for the most efficient method. There are roughly three approaches that I see around, short of using outside libraries (e.g., mtimesx):
Loop through slices of the 3D matrix
repmat-and-permute wizardry
cellfun multiplication
I recently compared all three methods to see which was quickest. My intuition was that (2) would be the winner. Here's the code:
% generate data
A = 20;
B = 30;
C = 40;
D = 50;
X = rand(A,B,C);
Y = rand(B,D);

% ------ Approach 1: Loop (via @Zaid)
tic
Z1 = zeros(A,D,C);
for m = 1:C
    Z1(:,:,m) = X(:,:,m)*Y;
end
toc

% ------ Approach 2: Reshape+Permute (via @Amro)
tic
Z2 = reshape(reshape(permute(X, [2 1 3]), [A B*C]), [B A*C])' * Y;
Z2 = permute(reshape(Z2',[D A C]),[2 1 3]);
toc

% ------ Approach 3: cellfun (via @gnovice)
tic
Z3 = cellfun(@(x) x*Y,num2cell(X,[1 2]),'UniformOutput',false);
Z3 = cat(3,Z3{:});
toc
All three approaches produced the same output (phew!), but, surprisingly, the loop was the fastest:
Elapsed time is 0.000418 seconds.
Elapsed time is 0.000887 seconds.
Elapsed time is 0.001841 seconds.
Note that the times can vary quite a lot from one trial to another, and sometimes (2) comes out the slowest. These differences become more dramatic with larger data. But with much bigger data, (3) beats (2). The loop method is still best.
% pretty big data...
A = 200;
B = 300;
C = 400;
D = 500;
Elapsed time is 0.373831 seconds.
Elapsed time is 0.638041 seconds.
Elapsed time is 0.724581 seconds.
% even bigger....
A = 200;
B = 200;
C = 400;
D = 5000;
Elapsed time is 4.314076 seconds.
Elapsed time is 11.553289 seconds.
Elapsed time is 5.233725 seconds.
But the loop method can be slower than (2), if the looped dimension is much larger than the others.
A = 2;
B = 3;
C = 400000;
D = 5;
Elapsed time is 0.780933 seconds.
Elapsed time is 0.073189 seconds.
Elapsed time is 2.590697 seconds.
So (2) wins by a big factor, in this (maybe extreme) case. There may not be an approach that is optimal in all cases, but the loop is still pretty good, and best in many cases. It is also best in terms of readability. Loop away!
Nope. There are several ways, but it always comes out in a loop, direct or indirect.
Just to please my curiosity, why would you want that anyway?
To answer the question, and for readability, please see:
ndmult, by ajuanpi (Juan Pablo Carbajal), 2013, GNU GPL
Input: 2 arrays, plus dim (the pair of dimensions to multiply along).
Example:
nT = 100;
t = 2*pi*linspace (0,1,nT)';
# 2 experiments measuring 3 signals at nT timestamps
signals = zeros(nT,3,2);
signals(:,:,1) = [sin(2*t) cos(2*t) sin(4*t).^2];
signals(:,:,2) = [sin(2*t+pi/4) cos(2*t+pi/4) sin(4*t+pi/6).^2];
sT(:,:,1) = signals(:,:,1)';
sT(:,:,2) = signals(:,:,2)';
G = ndmult (signals,sT,[1 2]);
Source
Original source. I added inline comments.
function M = ndmult (A,B,dim)
  dA = dim(1);
  dB = dim(2);
  # reshape A into 2d
  sA = size (A);
  nA = length (sA);
  perA = [1:(dA-1) (dA+1):(nA-1) nA dA](1:nA);
  Ap = permute (A, perA);
  Ap = reshape (Ap, prod (sA(perA(1:end-1))), sA(perA(end)));
  # reshape B into 2d
  sB = size (B);
  nB = length (sB);
  perB = [dB 1:(dB-1) (dB+1):(nB-1) nB](1:nB);
  Bp = permute (B, perB);
  Bp = reshape (Bp, sB(perB(1)), prod (sB(perB(2:end))));
  # multiply
  M = Ap * Bp;
  # reshape back to original format
  s = [sA(perA(1:end-1)) sB(perB(2:end))];
  M = squeeze (reshape (M, s));
endfunction
I highly recommend you use the MMX toolbox for MATLAB. It can multiply n-dimensional matrices as fast as possible.
The advantages of MMX are:
It is easy to use.
It multiplies n-dimensional matrices (actually, it multiplies arrays of 2-D matrices).
It performs other matrix operations (transpose, quadratic multiply, Cholesky decomposition, and more).
It uses compiled C code and multi-threaded computation for speed.
For this problem, you just need to write this command:
C=mmx('mul',X,Y);
Here is a benchmark of all the possible methods; for more detail refer to this question.
1.6571 # FOR-loop
4.3110 # ARRAYFUN
3.3731 # NUM2CELL/FOR-loop/CELL2MAT
2.9820 # NUM2CELL/CELLFUN/CELL2MAT
0.0244 # Loop Unrolling
0.0221 # MMX toolbox <===================
I would like to share my answer to the problems of:
1) making the tensor product of two tensors (of any valence);
2) making the contraction of two tensors along any dimension.
Here are my subroutines for the first and second tasks:
1) tensor product:
function [C] = tensor(A,B)
    C = squeeze( reshape( repmat(A(:), 1, numel(B)).*B(:).' , [size(A),size(B)] ) );
end
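For instance (a hypothetical check, not from the original answer; note the .* above relies on implicit expansion, R2016b+), the tensor product of a 2-vector and a 3-vector gives a 2-by-3 matrix:
a = [1; 2];
b = [10 20 30];
C = tensor(a, b)
% C =
%     10    20    30
%     20    40    60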
2) contraction:
Here A and B are the tensors to be contracted along dimensions i and j respectively. The lengths of these dimensions should be equal, of course. There's no check for this (it would obscure the code), but apart from that it works well.
function [C] = tensorcontraction(A, B, i, j)
    sa = size(A);
    La = length(sa);
    ia = 1:La;
    ia(i) = [];
    ia = [ia i];
    sb = size(B);
    Lb = length(sb);
    ib = 1:Lb;
    ib(j) = [];
    ib = [j ib];
    % making the i-th dimension the last in A
    A1 = permute(A, ia);
    % making the j-th dimension the first in B
    B1 = permute(B, ib);
    % making both A and B 2D-matrices to make use of the
    % matrix multiplication along the second dimension of A
    % and the first dimension of B
    A2 = reshape(A1, [],sa(i));
    B2 = reshape(B1, sb(j),[]);
    % here's the implicit implication that sa(i) == sb(j),
    % otherwise - crash
    C2 = A2*B2;
    % back to the original shape with the exception
    % of dimensions along which we've just contracted
    sa(i) = [];
    sb(j) = [];
    C = squeeze( reshape( C2, [sa,sb] ) );
end
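As a usage sketch (hypothetical, assuming the function above), contracting dimension 2 of a 3-D array against dimension 1 of a matrix reproduces the looped multiplication from the question, up to a permutation of dimensions:
X = rand(2,3,4); Y = rand(3,5);
C = tensorcontraction(X, Y, 2, 1); % 2-by-4-by-5
Z = permute(C, [1 3 2]);           % 2-by-5-by-4, so Z(:,:,m) == X(:,:,m)*Y
Zref = zeros(2,5,4);
for m = 1:4, Zref(:,:,m) = X(:,:,m)*Y; end
max(abs(Z(:) - Zref(:))) % 0, or within floating-point roundoff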
Any critiques?
I would think recursion, but that's the only other non-loop method you can do.
You could "unroll" the loop, i.e., write out sequentially all the multiplications that would occur in the loop.
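For a fixed, known C, unrolling might look like this (a sketch assuming C = 3, with X of size A-by-B-by-3 and Y of size B-by-D):
Z = cat(3, X(:,:,1)*Y, X(:,:,2)*Y, X(:,:,3)*Y);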