Interpolate matrices for different times in MATLAB

I have computed variables stored in a matrix for a specific time vector.
Now I want to interpolate between those whole matrices for a new time vector to get the matrices for the desired new time vector.
I've come up with the following solution, but it seems clunky and computationally demanding:
clear all;
a(:,:,1) = [1 1 1; 2 2 2; 3 3 3]; % Matrix 1
a(:,:,2) = [4 4 4; 6 6 6; 8 8 8]; % Matrix 2
t1 = [1 2];     % Old time vector
t2 = [1 1.5 2]; % New time vector
% Interpolation for each matrix element
for r = 1:size(a,1)
    for c = 1:size(a,2)
        tab = squeeze(a(r,c,:));
        tabInterp(r,c,:) = interp1(t1, tab, t2);
    end
end
The result (the slice for t = 1.5, i.e. tabInterp(:,:,2)) is, as it should be:
[2.5000 2.5000 2.5000
4.0000 4.0000 4.0000
5.5000 5.5000 5.5000]
Any thoughts?

You can do the linear interpolation manually, and all at once...
m = ( t2 - t1(1) ) / ( t1(2) - t1(1) );
% Linear interpolation using the standard 'y = m*x + c' linear structure
tabInterp = reshape(m,1,1,[]) .* (a(:,:,2)-a(:,:,1)) + a(:,:,1);
This will work for any size t2, as long as t1 has 2 elements.
If you have a t1 with more than 2 elements, you can create the scaling vector m using interp1. This is relatively efficient because you're only using interp1 for your time vector, not the matrix:
m = interp1( t1, (t1-min(t1))/(max(t1)-min(t1)), t2, 'linear', 'extrap' );
This uses implicit expansion with the .* operation, which requires R2016b or newer. If you have an older MATLAB version then use bsxfun for the same functionality.
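For reference, a minimal pre-R2016b sketch of the same computation with bsxfun standing in for implicit expansion:
% bsxfun equivalent of the implicit-expansion line above (pre-R2016b)
tabInterp = bsxfun(@plus, ...
    bsxfun(@times, reshape(m,1,1,[]), a(:,:,2) - a(:,:,1)), ...
    a(:,:,1));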

I don't really see a problem with a loop-based approach, but if you're looking for a loopless method you can do the following.
[rows, cols, ~] = size(a);
aReshape = reshape(a, rows*cols, []).';
tabInterp = reshape(interp1(t1, aReshape, t2).', rows, cols, []);
Looking at the source code for interp1 it appears a for loop is being used anyway so I doubt this will result in any performance gain.
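As a quick sanity check (not part of the original answer), the slice of the loopless result at t = 1.5 can be compared against the matrix quoted in the question:
% Sanity check: the t = 1.5 slice should match the expected matrix
expected = [2.5 2.5 2.5; 4 4 4; 5.5 5.5 5.5];
disp(max(max(abs(tabInterp(:,:,2) - expected)))) % should be (numerically) zero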


Computing Camera Matrix Using MATLAB

I'm currently trying to compute the camera matrix P given a set of world points (X) with their corresponding image points (x). However, when testing the result, multiplying the world points by the 3 x 4 camera matrix P does not give me the correct corresponding image points: only the first column of P*X matches x (up to scale); the other columns won't return the approximate image points.
Code:
X = [1 2 3; 4 5 6; 7 8 9; 1 1 1];
x = [3 2 1; 6 5 4; 1 1 1];
[mX, nX] = size(X);
[mx, nx] = size(x);
for i = 0:(nX-1)
    XX{i+1} = transpose(X(1+i : 4+i));
end
for i = 0:(nx-1)
    xx{i+1} = transpose(x(i+1 : 3+i));
end
%TODO - normalization
A = [];
%construct matrix
for i = 1:nX
    A = [A; zeros(1,4) -1*(xx{i}(3)*transpose(XX{i})) xx{i}(2)*transpose(XX{i})];
    A = [A; xx{i}(3)*transpose(XX{i}) zeros(1,4) -1*xx{i}(1)*transpose(XX{i})];
end
%using svd to solve for non zero solution
[u s v] = svd(A);
p = v(:, size(v,2));
p = reshape(p, 4,3)';
Output for the first column (works as expected):
>> p*XX{1}
ans =
0.0461
0.0922
0.0154
>> ans/0.0154
ans =
2.9921
5.9841
0.9974
>> xx{1}
ans =
3
6
1
Output for the second column (doesn't work):
>> p*XX{2}
ans =
0.5202
0.0867
0.1734
>> ans/0.1734
ans =
2.9999
0.5000
1.0000
>> xx{2}
ans =
6
1
2
By the way, I was told that I need to normalize the world points and image points before I compute the camera matrix. I have not done this step and have no idea how to. If this is causing the issue, please explain what can be done. Thank you in advance.
This is because you aren't indexing into the matrix properly. You are using linear indexing, but to access each column independently your for loop must step through the elements in groups: groups of 4 elements for your 3D points and groups of 3 elements for your 2D points.
As such, you simply need to do this for your for loops:
for i = 0:(nX-1)
    XX{i+1} = transpose(X(4*i + 1 : 4*(i + 1)));
end
for i = 0:(nx-1)
    xx{i+1} = transpose(x(3*i + 1 : 3*(i + 1)));
end
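Equivalently, though this is not part of the original answer, you can skip the linear-index arithmetic entirely and grab whole columns, which is less error-prone:
for i = 1:nX
    XX{i} = X(:, i); % i-th homogeneous 3D point, 4x1
end
for i = 1:nx
    xx{i} = x(:, i); % i-th homogeneous 2D point, 3x1
end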
After this, the code should work without a problem. To verify, we can loop through each 3D point and determine its 2D equivalent, since you're using cells:
out = zeros(3, numel(XX)); % Declare output matrix (3 rows, one column per point)
for ii = 1 : numel(XX)                   % For each 3D point...
    out(:,ii) = p * XX{ii};              % Transform the point
    out(:,ii) = out(:,ii) / out(end,ii); % Normalize
end
We thus get:
>> out
out =
3.0000 2.0000 1.0000
6.0000 5.0000 4.0000
1.0000 1.0000 1.0000
Compare with your x:
>> x
x =
3 2 1
6 5 4
1 1 1
Suggestion - Use vectorization
If I can suggest something, please do not use cell arrays here. You can create the matrix of equations for solving using vectorization. Specifically, you can create the matrix A directly without any for loops:
N = size(X, 2); % number of point correspondences
A = [zeros(N, 4) -X.' bsxfun(@times, x(2,:).', X.');
     X.' zeros(N, 4) bsxfun(@times, -x(1,:).', X.')];
If you have MATLAB R2016b or newer, you can do this with implicit expansion (broadcasting):
A = [zeros(N, 4) -X.' x(2,:).' .* X.';
     X.' zeros(N, 4) -x(1,:).' .* X.'];
Note that you will see the rows are shuffled in comparison to your original matrix A because of the vectorization. Because we are solving for the null space of the matrix A, shuffling the rows has no effect. Therefore, your code can be simplified to:
X = [1 2 3; 4 5 6; 7 8 9; 1 1 1];
x = [3 2 1; 6 5 4; 1 1 1];
N = size(X, 2); % number of point correspondences
A = [zeros(N, 4) -X.' bsxfun(@times, x(2,:).', X.');
     X.' zeros(N, 4) bsxfun(@times, -x(1,:).', X.')];
% Use this for MATLAB R2016b and up
% A = [zeros(N, 4) -X.' x(2,:).' .* X.';
%      X.' zeros(N, 4) -x(1,:).' .* X.'];
[u, s, v] = svd(A);
p = v(:, end);
p = reshape(p, 4, 3).';
To finally compute the output matrix, you can just use simple matrix multiplication. Using cells forces you to use a for loop, and it's much faster to do this with matrix multiplication:
out = p * X;
You can then take the last row of the results and divide each of the other rows by this row.
out = bsxfun(@rdivide, out, out(end,:));
Again with MATLAB R2016b and up, you can just do it as so:
out = out ./ out(end,:);

Extracting and storing non-zero entries in MATLAB

Could anyone help me build and correct my code, which aims to save only the non-zero elements of an arbitrary square matrix along with their indices? Basically I need to write a script that does the same thing as sparse in MATLAB.
% Consider a 3x3 matrix
A = [0 0 9; -1 8 0; 0 -5 0];
n = 3; % size of matrix
% initialise following arrays:
RI = zeros(n,1); % row index
CI = zeros(n,1); % column index
V  = zeros(n,1); % value in the matrix
for k = 1:n % row 1 to n
    for j = 1:n % column 1 to n
        if A(k,j) ~= 0
            RI(k) = k;
            CI(j) = j;
            V(k,j) = A(k,j);
        end
    end
end
You could use the find function to find all the non-zero elements.
So,
[RI, CI, V] = find(A);
% 2 1 -1
% 2 2 8
% 3 2 -5
% 1 3 9
EDIT :
I realize from your comments that your goal was to learn coding in Matlab and you might be wondering why your code didn't work as expected. So let me try to explain the issue along with an example code that is similar to yours.
% Given:
A=[ 0 0 9 ;-1 8 0;0 -5 0 ];
Firstly, instead of manually specifying the size as n = 3, I'd recommend using the built-in size function.
sz = size(A);
% note that this contains 2 elements:
% [number of rows, number of columns]
Next, to initialize the arrays RI, CI and V we would like to know their sizes. Since we do not know the number of non-zero elements to start with, we have two options: (1) choose a number that is guaranteed to be greater than or equal to the number of non-zero elements, for example prod(sz) (why is that true?), or (2) do not initialize them at all and let MATLAB allocate memory dynamically as required. I'd follow the second option in the code below.
% we'll keep a count of non-zero elements as we find them
numNZ = 0; % this will increment every time a non-zero element is found
for iCol = 1:sz(2)      % column 1 to end
    for iRow = 1:sz(1)  % row 1 to end
        if A(iRow,iCol) ~= 0
            numNZ = numNZ + 1;
            RI(numNZ) = iRow;
            CI(numNZ) = iCol;
            V(numNZ)  = A(iRow,iCol);
        end
    end
end
disp([RI, CI, V])
% 2 1 -1
% 2 2 8
% 3 2 -5
% 1 3 9
Makes sense?
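For completeness, here is a sketch of option (1) from above: preallocate to the worst case prod(sz), then trim the unused tail afterwards.
% Option (1): preallocate for the worst case, then trim (sketch)
RI = zeros(prod(sz), 1);
CI = zeros(prod(sz), 1);
V  = zeros(prod(sz), 1);
numNZ = 0;
for iCol = 1:sz(2)
    for iRow = 1:sz(1)
        if A(iRow,iCol) ~= 0
            numNZ = numNZ + 1;
            RI(numNZ) = iRow;
            CI(numNZ) = iCol;
            V(numNZ)  = A(iRow,iCol);
        end
    end
end
RI = RI(1:numNZ); CI = CI(1:numNZ); V = V(1:numNZ); % drop unused slots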
So I think we've established that the point of this is to learn an unfamiliar programming language. The simplest solution is to use sparse itself but that gives you no insight into programming. Nor does find, which can be used similarly.
Now, we could go the same route you've started: procedural for and if over each row and each column. It could be almost any programming language, apart from a few quirks of punctuation. But what you'll find, even if you do correct the mistakes (like the fact that n should be the number of non-zero entries, not the number of rows), is that this is a very slow way of doing numerical work in Matlab.
Here's another (still inefficient, but less so) way which will hopefully provide some insight into the "vectorized" way of doing things, which is one of the things that makes Matlab as powerful as it is:
function [RI, CI, V] = mysparse(A) % first: use functions!
    [nRows, nCols] = size(A);
    % Leave the semicolon off the next line so you can see for yourself what it does.
    % ndgrid is very similar to meshgrid, which you'll see more often (it's heavily used
    % in MATLAB graphics), but ndgrid is "simpler" in that it's more in tune with the
    % fundamental conventions of MATLAB (rows, then columns).
    [allRowIndices, allColIndices] = ndgrid(1:nRows, 1:nCols)
    isNonZero = A ~= 0; % a "logical array", which is a very powerful thing: it can be used as a subscript to select elements from another array, in one shot...
    RI = allRowIndices(isNonZero); % like this
    CI = allColIndices(isNonZero); % or this
    V = A(isNonZero);              % or even this
    RI = RI(:); % force column vectors explicitly, because the lines above return the values
    CI = CI(:); % as a single long column under some circumstances but not others
    V = V(:);
end
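A quick usage check with the matrix from the question; the output matches what find returns above:
A = [0 0 9; -1 8 0; 0 -5 0];
[RI, CI, V] = mysparse(A);
disp([RI, CI, V])
% 2 1 -1
% 2 2 8
% 3 2 -5
% 1 3 9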
I will go with an N x 3 matrix, where N is the number of non-zero elements in the matrix.
% Define a matrix A as follows:
A = randi([0 1],[4 4])
for i = 1:16
    if A(i) ~= 0
        A(i) = rand;
    end
end
[row,col] = find(A);
elms = A(A~=0); % MATLAB always works in column-major order and is consistent,
% so no need to use sub2ind to access elements given by find
newSparse_A = [row col elms];
Output:
newSparse_A =
1.0000 1.0000 0.9027
2.0000 1.0000 0.9448
3.0000 1.0000 0.4909
1.0000 2.0000 0.4893
2.0000 2.0000 0.3377
4.0000 2.0000 0.9001
>> sparse(A)
ans =
(1,1) 0.9027
(2,1) 0.9448
(3,1) 0.4909
(1,2) 0.4893
(2,2) 0.3377
(4,2) 0.9001

Measure similarity between 1-dimensional vectors

EDITED QUESTION
I have n signals of equal length.
X_signal
Y_signal
...
Z_signal
I calculate the minima of these signals and store their locations (in time) in the vectors
X = [x1 x2 x3 x4 ... x100]
Y = [y1 y2 y3 y4 ... y150]
...
Z = [z1 z2 z3 z4 ... z110]
You can think of X, Y, ..., Z as time series that can have different lengths.
I assume that the original signals are similar if they have their minima almost at the same locations.
I would like to know what would be a smart approach to measure this kind of similarity keeping in mind that some minima in X,Y,Z can be just noise.
For example, if X = [1 5 8 12 15 20] and Y = [1.5 5.5 7.5 10 12 15.5 20.2] they should be similar, since almost all the points have nearly the same value except for Y(4) = 10.
If you have time, code or pseudo-code in MATLAB is appreciated; otherwise a suggestion, a link, etc. is also fine.
Thanks
ORIGINAL QUESTION
I have n vectors of different length.
X = [x1 x2 x3 x4 ... x100]
Y = [y1 y2 y3 y4 ... y150]
...
Z = [z1 z2 z3 z4 ... z110]
Vectors (X Y ... Z) represent minima values of the energy of the corresponding signals (X_energy, Y_energy, etc).
To recap: starting from the signals X_signal, Y_signal, ..., Z_signal, I compute the energy in windows of 20 samples and calculate the minima of the resulting energy signals.
Assume that 2 or more vectors are similar if they have almost equal values (i.e. X and Y are similar if x1 ≈ y1, x2 ≈ y2, etc.). In other words, I assume that the original signals are similar if they have minimum energy at the same (or almost the same) time instants. I would like to know what would be a smart approach to measure this kind of similarity.
PS.
It is almost impossible that two vectors are equal so I would like to have just an idea of how close their "points" are.
X and Y could also be similar if they are shifted (i.e. x1 ≈ y3, x2 ≈ y4, etc.)
It is always the case that the values are in ascending order (x1<x2<...<x100)
If you have time, code or pseudo-code in MATLAB is appreciated; otherwise a suggestion, a link, etc. is also fine.
Thanks
One possible approach (particularly if you do not have the Statistics and/or Signal Processing Toolbox) is to generate a correlation matrix for all of your vectors with the MATLAB function corrcoef.
Since your vectors are different sizes, you would have to either (1) zero-pad the smaller vectors so they are the same size as the largest, or (2) take an aligned sample of values, no larger than the smallest vector, out of each of them before computing the correlation.
It depends on your application which procedure is more suitable. Since your values are in ascending order, zero padding would likely be inappropriate.
Then you would need to create a matrix M with the rows corresponding to the elements, and the columns corresponding to each (zero padded or sampled) vector.
You could do that with the Matlab function horzcat:
M=horzcat(V1,V2,...Vn)
where V1, V2, ..Vn are each column vectors of the same size.
Finally you could get a correlation matrix for all of your vectors with corrcoef:
Cmat=corrcoef(M)
The MATLAB documentation for corrcoef will help you understand how to interpret the results statistically.
Note that this approach would not take into account any correlation between lagged versions of your vectors.
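For concreteness, a minimal sketch of the truncate-and-correlate option, assuming X, Y and Z are the minima-location vectors from the question:
% Truncate to the shortest vector, then correlate (sketch)
vecs = {X(:), Y(:), Z(:)};      % column vectors of minima locations
L = min(cellfun(@numel, vecs)); % length of the shortest vector
M = cell2mat(cellfun(@(v) v(1:L), vecs, 'UniformOutput', false)); % L-by-n matrix
Cmat = corrcoef(M);             % pairwise correlation between the vectors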
Edited answer
Now that it is clear that the X vector holds the time positions of all minima of signal X, the Y vector those of signal Y, etc., here is some updated code.
In fact the idea is still the same: we build a linearly sampled time vector spanning all time positions of the minima in all signals (with some chosen time sampling precision), then we build new signals that are 1.0 everywhere except at the minima time locations (set to 0.0), and finally we use the same correlation code as before.
NB Speed and memory optimized version is now available here
function [RMax] = MinimaCorrelation(c, ts)
%[
    % Some default resolution and time locations of minima positions
    if (nargin < 2), ts = 0.1; end
    if (nargin < 1), c = { [1 3 8 7 3 4 12]; [3 8 7 3]; [4 12]; [5 3 8 -3 12]; [1 3 8 7 3 4 12]; }; end
    % Number of channels
    n = length(c);
    % Build linearly sampled time vector covering all time locations
    minTime = min(cellfun(@min, c));
    maxTime = max(cellfun(@max, c));
    timeVector = minTime:ts:maxTime;
    timeVector(end+1) = timeVector(end) + ts; % just to really include min and max if step is not ok
    % Build new signals being '1' everywhere except at minima locations (set to '0')
    s = ones(n, length(timeVector));
    for ni = 1:n
        for mv = c{ni}
            [~, ind] = min(abs(timeVector - mv));
            s(ni, ind) = 0;
        end
    end
    % Correlation (copied 3 times to avoid biased effect on sides ==> circular shifting is ok this way)
    s = [s, s, s].';
    RMax = max(xcorr(s, 'coeff'), [], 1);
    % Put in R(i,j) format
    RMax = reshape(RMax, [n n]);
%]
end
With default data, one obtains:
1.0000 0.9899 0.9866 0.9829 1.0000
0.9899 1.0000 0.9833 0.9865 0.9899
0.9866 0.9833 1.0000 0.9832 0.9866
0.9829 0.9865 0.9832 1.0000 0.9829
1.0000 0.9899 0.9866 0.9829 1.0000
Careful, this is a brute-force solution (time and memory consumption grow quickly with the number of signals and the time resolution). Now that the question is clearer, maybe someone will find a smarter answer.
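A hypothetical call with the question's minima-location vectors might look like this (the 0.1 time resolution is just an example):
% Assuming X, Y, Z hold the minima time locations from the question
R = MinimaCorrelation({X, Y, Z}, 0.1);
% R(i,j) is the similarity between the i-th and j-th signals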
Original answer
Here is coarse code for an approach using the maximum of the cross-correlation, via the xcorr routine (in the Signal Processing Toolbox):
function [RMax] = xcorrmax(c)
%[
    % Default signals for test
    if (nargin < 1)
        c = cell(0,0);
        c{end+1} = [1 3 8 7 3 4 12];
        c{end+1} = [3 8 7 3];
        c{end+1} = [4 12];
        c{end+1} = [5 3 8 -3 12];
        c{end+1} = [1 3 8 7 3 4 12];
    end
    % Number of channels
    n = length(c);
    % Padding to have vectors all of the same length
    % See also `padarray` for circular/symmetric padding (I don't have the Image Processing Toolbox)
    maxlength = max(cellfun(@length, c));
    c = cellfun(@(x)myquickpad(x, maxlength), c, 'UniformOutput', false);
    c = cell2mat(c.').';
    % Compute cross correlation (multichannel case) and keep max value
    % NB1: May also use xcov if signal mean is not important
    % NB2: Normalization at lag = 0
    RMax = max(xcorr(c, 'coeff'), [], 1);
    % Put in R(i,j) format
    RMax = reshape(RMax, [n n]);
%]
end

function [a] = myquickpad(a, maxlength)
%[
    if (length(a) < maxlength)
        a(maxlength) = 0;
    end
%]
end
For the following signals:
(1) [1 3 8 7 3 4 12]
(2) [3 8 7 3]
(3) [4 12]
(4) [5 3 8 -3 12]
(5) [1 3 8 7 3 4 12]
It returns the following correlation matrix R(i,j) between ith and jth signals:
1.0000 0.6698 0.7402 0.8016 1.0000
0.6698 1.0000 0.8012 0.4853 0.6698
0.7402 0.8012 1.0000 0.6587 0.7402
0.8016 0.4853 0.6587 1.0000 0.8016
1.0000 0.6698 0.7402 0.8016 1.0000
Some remarks:
It looks coherent; for instance signals (1) and (5) are identical and their correlation is 1.0.
Because of the normalization used, it considers (1) closer to (3) than to (2), so this should be reviewed against your needs (see, for instance, the corrcoef-style normalization shown by @paisanco).
You can use xcov instead of xcorr if shifts in signal amplitude are not important.
Again, this is a coarse approach, not speed/memory optimized at all, not accounting for the fact that the values are sorted, and it may not be fully in line with what you'd really like to have.

Obtain 3-D matrix from multiplication of one 1-D matrix and one 2-D matrix [duplicate]

As always trying to learn more from you, I was hoping I could receive some help with the following code.
I need to accomplish the following:
1) I have a vector:
x = [1 2 3 4 5 6 7 8 9 10 11 12]
2) and a matrix:
A =[11 14 1
5 8 18
10 8 19
13 20 16]
I need to be able to multiply each value of x with every value of A; this means:
new_matrix = [1* A
2* A
3* A
...
12* A]
This will give me a new_matrix of size (12*m x n), assuming A is (m x n); in this case (12*4 x 3).
How can I do this using bsxfun in MATLAB? And would this method be faster than a for-loop?
Regarding my for-loop, I need some help here as well... I am not able to store each "new_matrix" as the loop runs :(
for i = x
    new_matrix = A.*x(i)
end
Thanks in advance!!
EDIT: After the solutions were given
First solution
clear all
clc
x=1:0.1:50;
A = rand(1000,1000);
tic
val = bsxfun(@times,A,permute(x,[3 1 2]));
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[]);
toc
Output:
Elapsed time is 7.597939 seconds.
Second solution
clear all
clc
x=1:0.1:50;
A = rand(1000,1000);
tic
Ps = kron(x.',A);
toc
Output:
Elapsed time is 48.445417 seconds.
Send x to the third dimension, so that singleton expansion would come into effect when bsxfun is used for multiplication with A, extending the product result to the third dimension. Then, perform the bsxfun multiplication -
val = bsxfun(@times,A,permute(x,[3 1 2]))
Now, val is a 3D matrix and the desired output is expected to be a 2D matrix concatenated along the columns through the third dimension. This is achieved below -
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[])
Hope that made sense! Spread the bsxfun word around! woo!! :)
The kron function does exactly that:
kron(x.',A)
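As a quick check (not in the original answer), kron(x.',A) stacks the scaled copies x(i)*A vertically, which is exactly the requested layout:
x = [1 2 3 4 5 6 7 8 9 10 11 12];
A = [11 14 1; 5 8 18; 10 8 19; 13 20 16];
B = kron(x.', A);          % size is (12*4)-by-3
isequal(B(1:4,:), 1*A)     % first block equals 1*A
isequal(B(5:8,:), 2*A)     % second block equals 2*A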
Here is my benchmark of the methods mentioned so far, along with a few additions of my own:
function [t,v] = testMatMult()
    % data
    %{
    x = [1 2 3 4 5 6 7 8 9 10 11 12];
    A = [11 14 1; 5 8 18; 10 8 19; 13 20 16];
    %}
    x = 1:50;
    A = randi(100, [1000,1000]);

    % functions to test
    fcns = {
        @() func1_repmat(A,x)
        @() func2_bsxfun_3rd_dim(A,x)
        @() func2_forloop_3rd_dim(A,x)
        @() func3_kron(A,x)
        @() func4_forloop_matrix(A,x)
        @() func5_forloop_cell(A,x)
        @() func6_arrayfun(A,x)
    };

    % timeit
    t = cellfun(@timeit, fcns, 'UniformOutput',true);

    % check results
    v = cellfun(@feval, fcns, 'UniformOutput',false);
    isequal(v{:})
    %for i=2:numel(v), assert(norm(v{1}-v{2}) < 1e-9), end
end
% Amro
function B = func1_repmat(A,x)
    B = repmat(x, size(A,1), 1);
    B = bsxfun(@times, B(:), repmat(A,numel(x),1));
end

% Divakar
function B = func2_bsxfun_3rd_dim(A,x)
    B = bsxfun(@times, A, permute(x, [3 1 2]));
    B = reshape(permute(B, [1 3 2]), [], size(A,2));
end

% Vissenbot
function B = func2_forloop_3rd_dim(A,x)
    B = zeros([size(A) numel(x)], 'like',A);
    for i=1:numel(x)
        B(:,:,i) = x(i) .* A;
    end
    B = reshape(permute(B, [1 3 2]), [], size(A,2));
end

% Luis Mendo
function B = func3_kron(A,x)
    B = kron(x(:), A);
end

% SergioHaram & TheMinion
function B = func4_forloop_matrix(A,x)
    [m,n] = size(A);
    p = numel(x);
    B = zeros(m*p,n, 'like',A);
    for i=1:numel(x)
        B((i-1)*m+1:i*m,:) = x(i) .* A;
    end
end

% Amro
function B = func5_forloop_cell(A,x)
    B = cell(numel(x),1);
    for i=1:numel(x)
        B{i} = x(i) .* A;
    end
    B = cell2mat(B);
    %B = vertcat(B{:});
end

% Amro
function B = func6_arrayfun(A,x)
    B = cell2mat(arrayfun(@(xx) xx.*A, x(:), 'UniformOutput',false));
end
end
The results on my machine:
>> t
t =
0.1650 %# repmat (Amro)
0.2915 %# bsxfun in the 3rd dimension (Divakar)
0.4200 %# for-loop in the 3rd dim (Vissenbot)
0.1284 %# kron (Luis Mendo)
0.2997 %# for-loop with indexing (SergioHaram & TheMinion)
0.5160 %# for-loop with cell array (Amro)
0.4854 %# arrayfun (Amro)
(Those timings can slightly change between different runs, but this should give us an idea how the methods compare)
Note that some of these methods are going to cause out-of-memory errors for larger inputs (for example my solution based on repmat can easily run out of memory). Others will get significantly slower for larger sizes but won't error due to exhausted memory (the kron solution for instance).
I think that the bsxfun method func2_bsxfun_3rd_dim or the straightforward for-loop func4_forloop_matrix (thanks to MATLAB JIT) are the best solutions in this case.
Of course you can change the above benchmark parameters (size of x and A) and draw your own conclusions :)
Just to add an alternative, you can maybe use cellfun to achieve what you want. Here's an example (slightly modified from yours):
x = randi(2, 5, 3)-1;
a = randi(3,3);
%// bsxfun 3D (As implemented in the accepted solution)
val = bsxfun(@and, a, permute(x', [3 1 2])); %//'
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[]);
%// cellfun (My solution)
val2 = cellfun(@(z) bsxfun(@and, a, z), num2cell(x, 2), 'UniformOutput', false);
out2 = cell2mat(val2); % or use cat(3, val2{:}) to get a 3D matrix equivalent to val and then permute/reshape like for out
%// compare
disp(nnz(out ~= out2));
Both give the same exact result.
For more infos and tricks using cellfun, see: http://matlabgeeks.com/tips-tutorials/computation-using-cellfun/
And also this: https://stackoverflow.com/a/1746422/1121352
If your vector x has length 12 and your matrix is of size 4x3, I don't think that using one or the other would change much in terms of time. If you are working with larger matrices and vectors, that might become an issue.
So first of all, we want to multiply a vector with a matrix. With the for-loop method, that would give something like this:
s = size(A);
new_matrix = zeros(s(1), s(2), numel(x)); % pre-allocating; with a big vector or matrix this helps a lot time-wise
for i = 1:numel(x)
    new_matrix(:,:,i) = A.*x(i);
end
This will give you a 3D array, where each slice along the third dimension is one result of your multiplication. If this is not what you are looking for, I'll add another solution which might be more time-efficient with bigger matrices and vectors.
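If you do want the stacked (12*m x n) layout from the question rather than the 3D array, a possible follow-up (not part of the original answer) is the same permute/reshape step used in the accepted bsxfun solution:
% Collapse the 3D result into the (numel(x)*m)-by-n stacked form
new_matrix_2d = reshape(permute(new_matrix, [1 3 2]), [], size(A,2));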

Getting the N-dimensional product of vectors

I am trying to write code to get the 'N-dimensional product' of vectors. So for example, if I have 2 vectors of length L, x & y, then the '2-dimensional product' is simply the regular vector product, R=x*y', so that each entry of R, R(i,j) is the product of the i'th element of x and the j'th element of y, aka R(i,j)=x(i)*y(j).
The problem is how to elegantly generalize this in MATLAB for arbitrary dimensions. That is, if I had 3 vectors, x, y, z, I want the 3-dimensional array R such that R(i,j,k)=x(i)*y(j)*z(k).
Same thing for 4 vectors, x1,x2,x3,x4: R(i1,i2,i3,i4)=x1(i1)*x2(i2)*x3(i3)*x4(i4), etc...
Also, I do NOT know the number of dimensions beforehand. The code must be able to handle an arbitrary number of input vectors, and the number of input vectors corresponds to the dimensionality of the final answer.
Is there any easy matlab trick to do this and avoid going through each element of R specifically?
Thanks!
I think by "regular vector product" you mean outer product.
In any case, you can use the ndgrid function. I like this more than using bsxfun as it's a little more straightforward.
% make some vectors
w = 1:10;
x = w+1;
y = x+1;
z = y+1;

vecs = {w,x,y,z};
nvecs = length(vecs);

[grids{1:nvecs}] = ndgrid(vecs{:});
R = grids{1};
for i = 2:nvecs
    R = R .* grids{i};
end

% Check results
for i = 1:10
    for j = 1:10
        for k = 1:10
            for l = 1:10
                V(i,j,k,l) = R(i,j,k,l) == w(i)*x(j)*y(k)*z(l);
            end
        end
    end
end
all(V(:))
ans = 1
The built-in function bsxfun is a fast utility that should be able to help. It is designed to apply a two-input function element-wise to two inputs with mismatching dimensions: singleton dimensions are expanded, and non-singleton dimensions need to match. (It sounds confusing, but once grokked it is useful in many ways.)
As I understand your problem, you can reshape each vector so that it lies along the dimension it should vary across, and then use nested bsxfun calls to perform the multiplication.
Example code follows:
%Some inputs, N-by-1 vectors
x = [1; 3; 9];
y = [1; 2; 4];
z = [1; 5];
% The computation you describe, using nested BSXFUN calls
bsxfun(@times, bsxfun(@times, ...  % Nested BSXFUN calls, 1 per dimension
    x, ...                         % First argument, in dimension 1
    permute(y, 2:-1:1)), ...       % Second argument, permuted to dimension 2
    permute(z, 3:-1:1))            % Third argument, permuted to dimension 3
%Result
% ans(:,:,1) =
% 1 2 4
% 3 6 12
% 9 18 36
% ans(:,:,2) =
% 5 10 20
% 15 30 60
% 45 90 180
To handle an arbitrary number of dimensions, this can be expanded using a recursive or loop construct. The loop would look something like this:
allInputs = {[1; 3; 9], [1; 2; 4], [1; 5]};
accumulatedResult = allInputs{1};
for ix = 2:length(allInputs)
    accumulatedResult = bsxfun(@times, ...
        accumulatedResult, ...
        permute(allInputs{ix}, ix:-1:1));
end