Vectorization: matrix array multiplication element wise one by one - matlab

I have a matrix:
R = [0 -1;1 0];
array = 1:1:10;
Also x0 = [2;1]
How can I obtain the following array in the most efficient way, without a loop?
array2 = [expm(1*R) expm(2*R) expm(3*R) .... expm(10*R)];
Then I want to obtain array3, of dimension 2 by 10, such that:
array3 = [expm(1*R)*x0 expm(2*R)*x0 expm(3*R)*x0 .... expm(10*R)*x0];

From wikipedia:
If a matrix is diagonal, its exponential can be obtained by exponentiating each entry on the main diagonal; likewise, the exponential of a block-diagonal matrix is the block-diagonal matrix of the exponentials of its blocks.
Given that a block-diagonal matrix can be created from {1*R, 2*R, ...}, its exponential can be computed in one call, multiplied by a stacked copy of x0, and reshaped to a [2 x n] result.
However, its performance may be worse than a for loop.
R = [0 -1;1 0];
array = 1:1:10;
x0 = [2;1]
n = numel(array);
result = reshape(expm(kron(spdiags(array.',0,n,n),R))*repmat(x0,n,1),2,[]);
For arrays of small size (fewer than about 70 elements), a full matrix is more efficient:
result = reshape(expm(kron(diag(array),R))*repmat(x0,n,1),2,[]);
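A quick sanity check (not part of the original answer) that the kron-based result matches a plain loop, using the data from the question:
R = [0 -1;1 0];
array = 1:10;
x0 = [2;1];
n = numel(array);
result = reshape(expm(kron(diag(array),R))*repmat(x0,n,1),2,[]);
loopResult = zeros(2,n);
for k = 1:n
    loopResult(:,k) = expm(array(k)*R)*x0;   % the straightforward loop
end
max(abs(result(:) - loopResult(:)))          % should be at roundoff level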

Well, I see that the matrix R that you have is 2x2. If it is always 2x2, then you can use the following function (from Wikipedia) to calculate the exponential:
function output = expm2d(A)
% Assuming t = 1 from Evaluation by Laurent series (https://en.wikipedia.org/wiki/Matrix_exponential#Evaluation_by_Laurent_series)
s = trace(A) / 2;
q = sqrt(-det(A - s*eye(size(A))));
output = exp(s) * ((cosh(q) - s * sinh(q) / q) * eye(size(A)) + (sinh(q) * A / q));
end
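A quick way to check expm2d against the built-in expm (not part of the original answer), using the R from the question; note the output may come back as a complex array with a negligible imaginary part:
R = [0 -1;1 0];
norm(expm2d(3*R) - expm(3*R))   % should be close to zero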
Using the excellent comparison function provided by thewaywewalk, I got the following results:
When using expm:
>> bench
ans =
0.0181 %// rahnema
0.1075 %// thewaywewalk arrayfun
0.1139 %// thewaywewalk accumarray
When using expm2d:
>> bench
ans =
0.0048 %// rahnema
0.0161 %// thewaywewalk arrayfun
0.0222 %// thewaywewalk accumarray
As you can see, using the function for 2-D matrices leads to a several-fold (roughly 4-7x here) decrease in the runtime. Of course, this cannot be used when R is not 2x2.
Edit:
When using expm2d for A = 1:100:
>> bench
ans =
0.1379 %// rahnema
0.1415 %// thewaywewalk arrayfun
0.1756 %// thewaywewalk accumarray

I still don't know if I got your question right. Here are two solutions which are not fully vectorized, but fairly fast:
R = [0 -1;1 0];
A = 1:1:10;
x0 = [2;1];
%// option 1
temp = arrayfun(@(x) (expm(R*x)*x0).', A, 'uni', 0);
array3 = vertcat( temp{:} )
%// option 2
temp = accumarray( (1:numel(A)).', A(:), [], @(x) {(expm(R*x)*x0).'})
array3 = vertcat( temp{:} )
Benchmark
I haven't considered Leander's Answer as it doesn't calculate array3:
function [t] = bench()
R = [0 -1;1 0];
A = 1:1:10;
x0 = [2;1];
% functions to compare
fcns = {
@() compare1(A,R,x0);
@() compare2(A,R,x0);
@() compare3(A,R,x0);
};
% timeit
t = zeros(3,1);
for ii = 1:100;
t = t + cellfun(@timeit, fcns);
end
end
function array3 = compare1(A,R,x0) %rahnema1
n = numel(A);
array3 = reshape(expm(kron(diag(A),R))*repmat(x0,n,1),2,[]);
end
function array3 = compare2(A,R,x0) %thewaywewalk 1
temp = arrayfun(@(x) (expm(R*x)*x0).', A, 'uni', 0);
array3 = vertcat( temp{:} );
end
function array3 = compare3(A,R,x0) %thewaywewalk 2
temp = accumarray( (1:numel(A)).', A(:), [], @(x) {(expm(R*x)*x0).'});
array3 = vertcat( temp{:} );
end
Results:
for A = 1:1:10;
0.1006 %// rahnema
0.2831 %// thewaywewalk arrayfun
0.3103 %// thewaywewalk accumarray
As kron gets really slow for large arrays, the benchmark results change for A = 1:1:100;:
4.0068 %// rahnema
1.8045 %// thewaywewalk arrayfun
2.4257 %// thewaywewalk accumarray

Related

how to get an incremental power matrix in matlab

I wanted to compute the following matrix in Matlab:
g = [I; A; A^2; ... ; A^N]
I used the following program in Matlab:
A=[2 3;4 1];
s=A;
for n=1:1:50
s(n)=A.^n;
end
g=[eye(1,1),s];
I am getting the following error:
In an assignment A(I) = B, the number of elements in B and I must be the same.
Error in s_x_calcu_v1 (line 5)
s(n)=A.^n;
The problem is that you are trying to assign a matrix to a single element. In MATLAB, calling s(n) means you get the nth element of s (by linear indexing), regardless of the dimensions of s. You can use a three-dimensional array instead:
N = 50;
A=[2 3;4 1];
[nx,ny] = size(A);
s(nx,ny,N) = 0; %makes s a nx x ny x N matrix
for n=1:1:N
s(:,:,n)=A.^n; %Colon to select all elements of that dimension
end
g=cat(3, eye(size(A)) ,s); %Add the I matrix of same size as A
Or a vectorized version
s = bsxfun(@power, A(:), 1:N);
s = reshape(s,2,2,N);
g = cat(3, eye(size(A)) ,s);
And a third solution using cumprod
s = repmat(A(:), [1 N]);
s = cumprod(s,2);
s = reshape(s,2,2,N);
g = cat(3, eye(size(A)) ,s);
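A small consistency check (my addition) that the loop, bsxfun and cumprod constructions above agree; for these values the cumprod version can differ from the others only by floating-point roundoff:
A = [2 3;4 1];
N = 50;
s_loop = zeros(2,2,N);
for n = 1:N
    s_loop(:,:,n) = A.^n;
end
s_vec = reshape(bsxfun(@power, A(:), 1:N), 2, 2, N);
s_cum = reshape(cumprod(repmat(A(:), [1 N]), 2), 2, 2, N);
isequal(s_loop, s_vec)                              % exact match expected
max(abs(s_cum(:) - s_vec(:)) ./ abs(s_vec(:)))      % relative roundoff only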
Your s array is a 2-by-2 array; you cannot index it like that to store the result of your computation at each step of the loop.
The simplest fix is probably to define s as a cell array:
% --- Definitions
A = [2 3;4 1];
N = 50;
% --- Preparation
s = cell(N,1);
% --- Computation
for n=1:N
s{n} = A.^n;
end
When you loop from 1 to N computing A.^n each time, you are doing LOTS of redundant computations! Note that
A.^n = (A.^(n-1)).*A; %// element-wise power
A^n = (A^(n-1))*A; %// matrix power
Therefore,
A = [2 3;4 1];
N = 50;
s = cell(N+1,1);
s{1} = eye(size(A,1));
for ii=1:N
s{ii+1} = s{ii}.*A; %// no powers, just product!
end
g = vertcat( s{:} );
BTW, the same holds if you want to compute matrix power (instead of element-wise powers), all you need is changing to s{ii+1} = s{ii}*A;
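For reference, a complete sketch of that matrix-power variant (the same code as above with the single change applied):
A = [2 3;4 1];
N = 50;
s = cell(N+1,1);
s{1} = eye(size(A,1));
for ii = 1:N
    s{ii+1} = s{ii}*A; %// matrix product: s{ii+1} = A^ii
end
g = vertcat( s{:} );   %// stacks I, A, A^2, ..., A^N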

Obtain 3-D matrix from multiplication of one 1-D matrix and one 2-D matrix [duplicate]

As always trying to learn more from you, I was hoping I could receive some help with the following code.
I need to accomplish the following:
1) I have a vector:
x = [1 2 3 4 5 6 7 8 9 10 11 12]
2) and a matrix:
A =[11 14 1
5 8 18
10 8 19
13 20 16]
I need to be able to multiply each value from x with every value of A; this means:
new_matrix = [1* A
2* A
3* A
...
12* A]
This will give me a new_matrix of size (12*m x n), assuming A is (m x n); in this case, (12*4 x 3).
How can I do this using bsxfun in MATLAB? And would this method be faster than a for loop?
Regarding my for loop, I need some help here as well... I am not able to store each "new_matrix" as the loop runs :(
for i=x
new_matrix = A.*x(i)
end
Thanks in advance!!
EDIT: After the solutions where given
First solution
clear all
clc
x=1:0.1:50;
A = rand(1000,1000);
tic
val = bsxfun(@times,A,permute(x,[3 1 2]));
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[]);
toc
Output:
Elapsed time is 7.597939 seconds.
Second solution
clear all
clc
x=1:0.1:50;
A = rand(1000,1000);
tic
Ps = kron(x.',A);
toc
Output:
Elapsed time is 48.445417 seconds.
Send x to the third dimension, so that singleton expansion would come into effect when bsxfun is used for multiplication with A, extending the product result to the third dimension. Then, perform the bsxfun multiplication -
val = bsxfun(@times,A,permute(x,[3 1 2]))
Now, val is a 3D matrix and the desired output is expected to be a 2D matrix concatenated along the columns through the third dimension. This is achieved below -
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[])
Hope that made sense! Spread the bsxfun word around! woo!! :)
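As a side note (not in the original answer): on MATLAB R2016b and later, implicit expansion lets you drop the bsxfun call entirely, if that release is available to you:
val = A .* permute(x,[3 1 2]);
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[]);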
The kron function does exactly that:
kron(x.',A)
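A tiny worked example (my addition) to show the block structure kron produces:
x = [1 2 3];
A = [1 2; 3 4];
kron(x.', A)
%// ans =
%//      1     2
%//      3     4
%//      2     4
%//      6     8
%//      3     6
%//      9    12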
Here is my benchmark of the methods mentioned so far, along with a few additions of my own:
function [t,v] = testMatMult()
% data
%{
x = [1 2 3 4 5 6 7 8 9 10 11 12];
A = [11 14 1; 5 8 18; 10 8 19; 13 20 16];
%}
x = 1:50;
A = randi(100, [1000,1000]);
% functions to test
fcns = {
@() func1_repmat(A,x)
@() func2_bsxfun_3rd_dim(A,x)
@() func2_forloop_3rd_dim(A,x)
@() func3_kron(A,x)
@() func4_forloop_matrix(A,x)
@() func5_forloop_cell(A,x)
@() func6_arrayfun(A,x)
};
% timeit
t = cellfun(@timeit, fcns, 'UniformOutput',true);
% check results
v = cellfun(@feval, fcns, 'UniformOutput',false);
isequal(v{:})
%for i=2:numel(v), assert(norm(v{1}-v{2}) < 1e-9), end
end
% Amro
function B = func1_repmat(A,x)
B = repmat(x, size(A,1), 1);
B = bsxfun(@times, B(:), repmat(A,numel(x),1));
end
% Divakar
function B = func2_bsxfun_3rd_dim(A,x)
B = bsxfun(@times, A, permute(x, [3 1 2]));
B = reshape(permute(B, [1 3 2]), [], size(A,2));
end
% Vissenbot
function B = func2_forloop_3rd_dim(A,x)
B = zeros([size(A) numel(x)], 'like',A);
for i=1:numel(x)
B(:,:,i) = x(i) .* A;
end
B = reshape(permute(B, [1 3 2]), [], size(A,2));
end
% Luis Mendo
function B = func3_kron(A,x)
B = kron(x(:), A);
end
% SergioHaram & TheMinion
function B = func4_forloop_matrix(A,x)
[m,n] = size(A);
p = numel(x);
B = zeros(m*p,n, 'like',A);
for i=1:numel(x)
B((i-1)*m+1:i*m,:) = x(i) .* A;
end
end
% Amro
function B = func5_forloop_cell(A,x)
B = cell(numel(x),1);
for i=1:numel(x)
B{i} = x(i) .* A;
end
B = cell2mat(B);
%B = vertcat(B{:});
end
% Amro
function B = func6_arrayfun(A,x)
B = cell2mat(arrayfun(@(xx) xx.*A, x(:), 'UniformOutput',false));
end
The results on my machine:
>> t
t =
0.1650 %# repmat (Amro)
0.2915 %# bsxfun in the 3rd dimension (Divakar)
0.4200 %# for-loop in the 3rd dim (Vissenbot)
0.1284 %# kron (Luis Mendo)
0.2997 %# for-loop with indexing (SergioHaram & TheMinion)
0.5160 %# for-loop with cell array (Amro)
0.4854 %# arrayfun (Amro)
(Those timings can slightly change between different runs, but this should give us an idea how the methods compare)
Note that some of these methods are going to cause out-of-memory errors for larger inputs (for example my solution based on repmat can easily run out of memory). Others will get significantly slower for larger sizes but won't error due to exhausted memory (the kron solution for instance).
I think that the bsxfun method func2_bsxfun_3rd_dim or the straightforward for-loop func4_forloop_matrix (thanks to MATLAB JIT) are the best solutions in this case.
Of course you can change the above benchmark parameters (size of x and A) and draw your own conclusions :)
Just to add an alternative, you can also use cellfun to achieve what you want. Here's an example (slightly modified from yours):
x = randi(2, 5, 3)-1;
a = randi(3,3);
%// bsxfun 3D (As implemented in the accepted solution)
val = bsxfun(@and, a, permute(x', [3 1 2]));
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[]);
%// cellfun (My solution)
val2 = cellfun(@(z) bsxfun(@and, a, z), num2cell(x, 2), 'UniformOutput', false);
out2 = cell2mat(val2); % or use cat(3, val2{:}) to get a 3D matrix equivalent to val and then permute/reshape like for out
%// compare
disp(nnz(out ~= out2));
Both give the same exact result.
For more infos and tricks using cellfun, see: http://matlabgeeks.com/tips-tutorials/computation-using-cellfun/
And also this: https://stackoverflow.com/a/1746422/1121352
If your vector x is of length 12 and your matrix of size 4x3, I don't think that using one method or the other would change much in terms of time. If you are working with larger matrices and vectors, that might become an issue.
So first of all, we want to multiply a vector with a matrix. With a for loop, that would look something like this:
s = size(A);
new_matrix(s(1),s(2),numel(x)) = 0; %This is for pre-allocating; with a big vector or matrix, this saves a lot of time.
for i = 1:numel(x)
new_matrix(:,:,i) = A.*x(i);
end
This will give you a 3D matrix, with each slice along the 3rd dimension being the result of one multiplication. If this is not what you are looking for, I'll be adding another solution which might be more time efficient with bigger matrices and vectors.
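If the stacked (numel(x)*m x n) layout from the question is what you need, the 3-D result above can be rearranged with the same permute/reshape idea used in the accepted answer (my addition):
stacked = reshape(permute(new_matrix,[1 3 2]), [], size(A,2));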

How can I (efficiently) compute a moving average of a vector?

I've got a vector and I want to calculate the moving average of it (using a window of width 5).
For instance, if the vector in question is [1,2,3,4,5,6,7,8], then
the first entry of the resulting vector should be the sum of all entries in [1,2,3,4,5] (i.e. 15);
the second entry of the resulting vector should be the sum of all entries in [2,3,4,5,6] (i.e. 20);
etc.
In the end, the resulting vector should be [15,20,25,30]. How can I do that?
The conv function is right up your alley:
>> x = 1:8;
>> y = conv(x, ones(1,5), 'valid')
y =
15 20 25 30
Benchmark
Three answers, three different methods... Here is a quick benchmark (different input sizes, fixed window width of 5) using timeit; feel free to poke holes in it (in the comments) if you think it needs to be refined.
conv emerges as the fastest approach; it's about twice as fast as coin's approach (using filter), and about four times as fast as Luis Mendo's approach (using cumsum).
Here is another benchmark (fixed input size of 1e4, different window widths). Here, Luis Mendo's cumsum approach emerges as the clear winner, because its complexity is primarily governed by the length of the input and is insensitive to the width of the window.
Conclusion
To summarize, you should
use the conv approach if your window is relatively small,
use the cumsum approach if your window is relatively large.
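As a side note (not benchmarked above, and assuming a recent enough release): MATLAB R2016a and later ship a built-in movsum that handles the window sums directly:
x = [1 2 3 4 5 6 7 8];
y = movsum(x, [0 4], 'Endpoints', 'discard')   % width-5 windows, full windows only
%// y = [15 20 25 30]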
Code (for benchmarks)
function benchmark
clear all
w = 5; % moving average window width
u = ones(1, w);
n = logspace(2,6,60); % vector of input sizes for benchmark
tf = zeros(size(n)); % preallocation of time vectors before the loop
tg = tf;
th = tf;
for k = 1 : numel(n)
x = rand(1, round(n(k))); % generate random row vector
% Luis Mendo's approach (cumsum)
f = @() luisMendo(w, x);
tf(k) = timeit(f);
% coin's approach (filter)
g = @() coin(w, u, x);
tg(k) = timeit(g);
% Jubobs's approach (conv)
h = @() jubobs(u, x);
th(k) = timeit(h);
end
figure
hold on
plot(n, tf, 'bo')
plot(n, tg, 'ro')
plot(n, th, 'mo')
hold off
xlabel('input size')
ylabel('time (s)')
legend('cumsum', 'filter', 'conv')
end
function y = luisMendo(w,x)
cs = cumsum(x);
y(1,numel(x)-w+1) = 0; %// hackish way to preallocate result
y(1) = cs(w);
y(2:end) = cs(w+1:end) - cs(1:end-w);
end
function y = coin(w,u,x)
y = filter(u, 1, x);
y = y(w:end);
end
function y = jubobs(u,x)
y = conv(x, u, 'valid');
end
function benchmark2
clear all
w = round(logspace(1,3,31)); % moving average window widths
n = 1e4; % fixed input size for benchmark
tf = zeros(size(w)); % preallocation of time vectors before the loop
tg = tf;
th = tf;
for k = 1 : numel(w)
u = ones(1, w(k));
x = rand(1, n); % generate random row vector
% Luis Mendo's approach (cumsum)
f = @() luisMendo(w(k), x);
tf(k) = timeit(f);
% coin's approach (filter)
g = @() coin(w(k), u, x);
tg(k) = timeit(g);
% Jubobs's approach (conv)
h = @() jubobs(u, x);
th(k) = timeit(h);
end
figure
hold on
plot(w, tf, 'bo')
plot(w, tg, 'ro')
plot(w, th, 'mo')
hold off
xlabel('window size')
ylabel('time (s)')
legend('cumsum', 'filter', 'conv')
end
function y = luisMendo(w,x)
cs = cumsum(x);
y(1,numel(x)-w+1) = 0; %// hackish way to preallocate result
y(1) = cs(w);
y(2:end) = cs(w+1:end) - cs(1:end-w);
end
function y = coin(w,u,x)
y = filter(u, 1, x);
y = y(w:end);
end
function y = jubobs(u,x)
y = conv(x, u, 'valid');
end
Another possibility is to use cumsum. This approach probably requires fewer operations than conv does:
x = 1:8
n = 5;
cs = cumsum(x);
result = cs(n:end) - [0 cs(1:end-n)];
To save a little time, you can replace the last line by
%// clear result
result(1,numel(x)-n+1) = 0; %// hackish way to preallocate result
result(1) = cs(n);
result(2:end) = cs(n+1:end) - cs(1:end-n);
If you want to preserve the size of your input vector, I suggest using filter
>> x = 1:8;
>> y = filter(ones(1,5), 1, x)
y =
1 3 6 10 15 20 25 30
>> y = y(5:end)
y =
15 20 25 30

Matlab - Multiplying a matrix with every matrix of a 3d matrix

I have two matlab questions that seem closely related.
I want to find the most efficient way (no loop?) to multiply a (A x A) matrix with every single matrix of a 3d matrix (A x A x N). Also, I would like to take the trace of each of those products.
http://en.wikipedia.org/wiki/Matrix_multiplication#Frobenius_product
This is the Frobenius inner product. In the crappy code I have below, I'm using its secondary definition, which is more efficient.
I want to multiply each element of a vector (N x 1) with its "corresponding" matrix of a 3d matrix (A x A x N).
function Y_returned = problem_1(X_matrix, weight_matrix)
% X_matrix is the randn(50, 50, 2000) matrix
% weight_matrix is the randn(50, 50) matrix
[~, ~, number_of_matries] = size(X_matrix);
Y_returned = zeros(number_of_matries, 1);
for i = 1:number_of_matries
% Y_returned(i) = trace(X_matrix(:,:,i) * weight_matrix');
temp1 = X_matrix(:,:,i)';
temp2 = weight_matrix';
Y_returned(i) = temp1(:)' * temp2(:);
end
end
function output = problem_2(vector, matrix)
% matrix is the randn(50, 50, 2000) matrix
% vector is the randn(2000, 1) vector
[n1, n2, number_of_matries] = size(matrix);
output = zeros(n1, n2, number_of_matries);
for i = 1:number_of_matries
output(:, :, i) = vector(i) .* matrix(:, :, i);
end
output = sum(output, 3);
end
I assume you mean element-wise multiplication:
Use bsxfun:
A = 10;
N = 4;
mat1 = randn(A,A);
mat2 = randn(A,A,N);
result = bsxfun(@times, mat1, mat2);
Use bsxfun with permute to align dimensions:
A = 10;
N = 4;
vec1 = rand(N,1);
mat2 = randn(A,A,N);
result = bsxfun(@times, permute(vec1,[2 3 1]), mat2);
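Both problems can also be done fully without loops using reshape (my addition, not from the answers above). For problem_1 this relies on trace(X*W') being the sum of the element-wise products of X and W, and for problem_2 the weighted sum over the third dimension is just a matrix-vector product:
%// problem_1: Y(i) = trace(X_matrix(:,:,i) * weight_matrix')
N = size(X_matrix, 3);
Y = reshape(X_matrix, [], N).' * weight_matrix(:);
%// problem_2: returns the summed result directly (the question sums over the 3rd dimension at the end)
[n1, n2, N] = size(matrix);
out = reshape(reshape(matrix, [], N) * vector, n1, n2);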

pairwise evaluation without using a loop

I have an N x 1 array A and want to get the result matrix whose elements are evaluations of a function f (such as max) on the pairs A(i), A(j) (i, j = 1,...,N). The result matrix will look like [ f(A(i), A(j)) ]. Does anyone have suggestions for achieving this without a loop? It would also be better to avoid bsxfun, since bsxfun is not implemented in some programs. Thanks.
Use ndgrid and arrayfun:
[ii jj ] = ndgrid(1:N, 1:N); %// generate all combinations of i and j
result = arrayfun(@(n) f(A(ii(n)), A(jj(n))), 1:N^2);
result = reshape(result, length(A)*[1 1]); %// reshape into a matrix
Example:
N = 3;
A = [4 5 2];
f = @(x,y) max(x,y);
>>[ii jj ] = ndgrid(1:N, 1:N);
result = arrayfun(@(n) f(A(ii(n)), A(jj(n))), 1:N^2);
result = reshape(result, length(A)*[1 1])
result =
4 5 4
5 5 5
4 5 2
If you want no loops and no bsxfun, you are left with repmat:
ra = repmat( A, [1 numel(A)] );
res = f( ra, ra' ); % assuming f can be vectorized over matrices
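For example, with the f = max from the question (my addition), the repmat approach reduces to:
A = [4; 5; 2];                  % N x 1 version of the example data
ra = repmat( A, [1 numel(A)] );
res = max(ra, ra')              % matches the arrayfun result above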