Matlab ordfilt2 or alternatives for weighted local max

I would like to compute the weighted maxima of a vector in Matlab. By weighted maxima I mean the following:
Given a vector of 2*N+1 weights W = {w[-N], w[-N+1], ..., w[0], ..., w[N]} and an input sequence A, the weighted maxima is a vector M where m[i] = max(w[-N]*a[i-N], w[-N+1]*a[i-N+1], ..., w[N]*a[i+N]).
So for example given a vector A= [1, 4, 12, 2, 4] and weights W=[0.5, 1, 0.5], the weighted maxima would be M=[2, 6, 12, 6, 4].
This can be done using ordfilt2, but ordfilt2 applies its weights additively rather than multiplicatively.
I am actually working on 4D matrices, but any 1D solution would work, as the 4D weight matrix is separable.
My current solution is to generate shifted copies of the input array A, weight each copy according to its shift, and take the element-wise maximum over all of them. The shifting is performed with circshift and is the bottleneck in the process; generating the shifted matrices "manually" through indexing turned out to be even slower.
Can you suggest any more efficient solution?
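For reference, here is a minimal sketch of the circshift baseline described above (the variable names are mine, not from the original post; it assumes circular wrap-around at the borders, which happens to reproduce the example output):
A = [1 4 12 2 4];
W = [0.5 1 0.5];
N = floor(numel(W)/2);
M = -inf(size(A));
for k = -N:N
    shifted = circshift(A, [0 -k]);   % align a(i+k) with position i (wraps at the edges)
    M = max(M, W(k+N+1) * shifted);
end
M % returns [2 6 12 6 4] for this example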
EDIT: For a positive A, M=exp(ordfilt2(log(A), length(W), ones(size(W)), log(W))) does the job, but still takes longer than the circshift solution above. I am still looking for more efficient solutions.

>> B = padarray(A, [0 floor(numel(W)/2)], 0); % Pad A with zeros
>> B = bsxfun(@times, B(bsxfun(@plus, 1:numel(B)-numel(W)+1, (0:numel(W)-1)')), W(:)); % Apply the weights
>> M = max(B) % Compute the local maxima
M =
2 6 12 6 4
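One caveat (my note, not part of the original answer): padding with zeros assumes the weighted products are nonnegative; for signed data, pad with -Inf instead so the border maxima stay correct:
>> B = padarray(A, [0 floor(numel(W)/2)], -Inf); % Pad with -Inf for signed data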

Related

Select values with a matrix of indices in MATLAB?

In MATLAB, I am looking for an efficient (and/or vectorized) way of filling a matrix by selecting from multiple matrices given a "selector matrix." For instance, given three source matrices
M1 = [0.1, 0.2; 0.3, 0.4]
M2 = [1, 2; 3, 4]
M3 = [10, 20; 30, 40]
and a matrix of indices
I = [1, 3; 1, 2]
I want to generate a new matrix M = [0.1, 20; 0.3, 4] by selecting entry (1,1) from M1, entry (1,2) from M3, and so on.
I can definitely do it in a nested loop, going through each entry and filling in the value, but I am sure there is a more efficient way.
What if M1, M2, M3 and M are all 3D matrices (RGB images)? Each entry of I tells us from which matrix we should take a 3-vector. Say, if I(1, 3) = 3, then we know entries indexed by (1, 3, :) of M should be M3(1, 3, :).
A way of doing this, without changing the way you store your variables, is to use masks. If you only have a few source matrices, this does the job while avoiding a loop over the entries. You won't be able to fully vectorize it without going through the cat function or using cell arrays.
M = zeros(size(M1));
Itmp = repmat(I==1,[1 1 size(M1,3)]); M(Itmp) = M1(Itmp);
Itmp = repmat(I==2,[1 1 size(M1,3)]); M(Itmp) = M2(Itmp);
Itmp = repmat(I==3,[1 1 size(M1,3)]); M(Itmp) = M3(Itmp);
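If the number of source matrices grows, the same masking idea can be driven by a loop over a cell array (a sketch of mine, not from the original answer):
srcs = {M1, M2, M3};               % collect the source matrices
M = zeros(size(M1));
for k = 1:numel(srcs)
    Itmp = repmat(I==k, [1 1 size(M1,3)]);  % mask of entries taken from source k
    M(Itmp) = srcs{k}(Itmp);
end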
I think the best way to approach this is to stack the matrices along an extra dimension (i.e., build one array whose slices are your individual matrices). Unfortunately MATLAB doesn't really support array-level indexing, so what ends up happening is that you convert your subscripts to linear indices through the sub2ind command. I believe you can use the code below.
M1 = [0.1, 0.2; 0.3, 0.4]
M2 = [1, 2; 3, 4]
M3 = [10, 20; 30, 40]
metamatrix=cat(3,M1,M2,M3)
%Create a 3-dimensional (or higher-dimensional) matrix by concatenating the
%lower-order matrices
I=[1,1,1;1,2,3;2,1,1;2,2,2]
M=reshape(metamatrix(sub2ind(size(metamatrix),I(:,1),I(:,2),I(:,3))),size(metamatrix(:,:,1)))
For a more complex (e.g., 3-dimensional RGB) case, you would have to extend the code to the higher dimensions.
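Note that the I used in this answer is a list of (row, column, slice) subscripts rather than the question's 2x2 selector matrix. The selector form can be converted first; a sketch reusing metamatrix from above (Isel is my name for the question's I):
Isel = [1, 3; 1, 2];                              % the question's selector matrix
[r, c] = ndgrid(1:size(Isel,1), 1:size(Isel,2));  % subscripts of every entry
I = [r(:), c(:), Isel(:)];                        % (row, col, slice) triplets
M = reshape(metamatrix(sub2ind(size(metamatrix), I(:,1), I(:,2), I(:,3))), size(Isel))
% returns [0.1 20; 0.3 4], as required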
One way of doing this could be to generate a 4D matrix with your images. It has the cost of increasing the amount of memory used, or at least changing your memory layout.
Mcat = cat(4, M1, M2, M3);
Then you can use the function sub2ind to get a vectorized Matrix creation.
% get the index for the basic Image matrix
I = repmat(I,[1 1 3]); % repeat the index for RGB images
Itmp = sub2ind(size(I),reshape(1:numel(I),size(I)));
% offset the indices so that each one reaches the I(x)-th slice along the 4th dim of Mcat
Itmp = Itmp + (I-1)*numel(I);
% get the matrix
M = Mcat(Itmp);
I haven't tested it properly, but it should work.
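A quick check of this approach against the question's 2-D example (my test, not the answerer's; the repmat step is skipped since there is no RGB dimension, and the identity sub2ind call is folded away):
M1 = [0.1, 0.2; 0.3, 0.4]; M2 = [1, 2; 3, 4]; M3 = [10, 20; 30, 40];
I = [1, 3; 1, 2];
Mcat = cat(4, M1, M2, M3);
Itmp = reshape(1:numel(I), size(I)) + (I-1)*numel(I);  % linear indices into Mcat
M = Mcat(Itmp)   % returns [0.1 20; 0.3 4], as required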

How to extract coefficients of variables in a matrix in Matlab

Suppose I have the matrix below:
syms x y z
M = [x+y-z; 2*x+3*y-5*z; -x-y+6*z];
I want to get a matrix consisting of the coefficients of the variables x, y, and z:
CM = [1,1,-1;2,3,-5;-1,-1,6];
If I multiply CM by [x;y;z], I expect to get M.
Edit
I have a system of ODE:
(d/dt)A = B
A and B are square matrices. I want to solve this set of equations, but I don't want to use MATLAB's ODE-solving commands.
If I turn the above set of equations into:
(d/dt)a = M*a
then I can solve it easily using the eigenvectors and eigenvalues of the matrix M. Here a is a column vector containing the variables, and M is the matrix of coefficients extracted from B.
Since you seem to be using the Symbolic Math Toolbox, you should diff symbolically, saving the derivative with respect to each variable:
syms x y z;
M=[x+y-z;2*x+3*y-5*z;-x-y+6*z];
Mdiff=[];
for k=symvar(M)
Mdiff=[Mdiff diff(M,k)];
end
Then you get
Mdiff =
[ 1, 1, -1]
[ 2, 3, -5]
[ -1, -1, 6]
If you want to order the columns in a non-lexicographical way, then you need to use a vector of your own instead of symvar.
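For a linear M like this one, the Jacobian gives the same coefficient matrix in a single call, with the columns in exactly the order you specify (an alternative of mine, not part of the original answer):
syms x y z;
M = [x+y-z; 2*x+3*y-5*z; -x-y+6*z];
CM = jacobian(M, [x y z])   % returns [1 1 -1; 2 3 -5; -1 -1 6]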
Update
Since you mentioned that this approach is slow, it might be faster to use coeffs to treat M as a polynomial of its variables:
syms x y z;
M=[x+y-z;2*x+3*y-5*z;-x-y+6*z];
Mdiff2=[];
varnames=symvar(M);
for k=1:length(M)
Mdiff2=[Mdiff2; coeffs(M(k),varnames(end:-1:1))];
end
Note that for some reason (which I don't understand) the output of coeffs is reversed compared to its input variable list; this is why we call it with an explicitly reversed version of symvar(M).
Output:
>> Mdiff2
Mdiff2 =
[ 1, 1, -1]
[ 2, 3, -5]
[ -1, -1, 6]
As @horchler pointed out, this second solution will not work if your symbolic vector has a varying number of variables in its components. Since speed only matters if you have to do this operation many times, with many configurations of the parameters in your M, I would suggest constructing M parametrically (so that the coefficients are also syms) if possible; then you only have to perform the first version once, and the rest is just substitution into the result.

Understanding behaviour of MATLAB's convn

I'm doing convolution of some tensors.
Here is small test in MATLAB:
ker= rand(3,4,2);
a= rand(5,7,2);
c=convn(a,ker,'valid');
c11=sum(sum(a(1:3,1:4,1).*ker(:,:,1)))+sum(sum(a(1:3,1:4,2).*ker(:,:,2)));
c(1,1)-c11 % not equal!
The third line performs an N-D convolution with convn, and I want to compare the result at the first row and first column of convn's output with the value computed manually. However, my manual computation does not match convn's result.
So what is behind MATLAB's convn? Is my understanding of tensor convolution wrong?
You almost have it correct. There are two things slightly wrong with your understanding:
You chose valid as the convolution flag. This means that the output returned from the convolution has its size so that when you are using the kernel to sweep over the matrix, it has to fit comfortably inside the matrix itself. Therefore, the first "valid" output that is returned is actually for the computation at location (2,2,1) of your matrix. This means that you can fit your kernel comfortably at this location, and this corresponds to position (1,1) of the output. To demonstrate, this is what a and ker look like for me using your above code:
>> a
a(:,:,1) =
0.9930 0.2325 0.0059 0.2932 0.1270 0.8717 0.3560
0.2365 0.3006 0.3657 0.6321 0.7772 0.7102 0.9298
0.3743 0.6344 0.5339 0.0262 0.0459 0.9585 0.1488
0.2140 0.2812 0.1620 0.8876 0.7110 0.4298 0.9400
0.1054 0.3623 0.5974 0.0161 0.9710 0.8729 0.8327
a(:,:,2) =
0.8461 0.0077 0.5400 0.2982 0.9483 0.9275 0.8572
0.1239 0.0848 0.5681 0.4186 0.5560 0.1984 0.0266
0.5965 0.2255 0.2255 0.4531 0.5006 0.0521 0.9201
0.0164 0.8751 0.5721 0.9324 0.0035 0.4068 0.6809
0.7212 0.3636 0.6610 0.5875 0.4809 0.3724 0.9042
>> ker
ker(:,:,1) =
0.5395 0.4849 0.0970 0.3418
0.6263 0.9883 0.4619 0.7989
0.0055 0.3752 0.9630 0.7988
ker(:,:,2) =
0.2082 0.4105 0.6508 0.2669
0.4434 0.1910 0.8655 0.5021
0.7156 0.9675 0.0252 0.0674
As you can see, at position (2,2,1) in the matrix a, ker can fit comfortably inside the matrix and if you recall from convolution, it is simply a sum of element-by-element products between the kernel and the subset of the matrix at position (2,2,1) that is the same size as your kernel (actually, you need to do something else to the kernel which I will reserve for my next point - see below). Therefore, the coefficient that you are calculating is actually the output at (2,2,1), not at (1,1,1). From the gist of it though, you already know this, but I wanted to put that out there in case you didn't know.
You are forgetting that for N-D convolution, you need to flip the mask in each dimension. If you remember from 1D convolution, the mask must be flipped horizontally. What I mean by flipped is that you simply place the elements in reverse order. An array of [1 2 3 4], for example, would become [4 3 2 1]. In 2D convolution, you must flip both horizontally and vertically. Therefore, you would take each row of your matrix and place each row in reverse order, much like the 1D case. Here, you would treat each row as a 1D signal and do the flipping. Once you accomplish this, you would take this flipped result, treat each column as a 1D signal, and do the flipping again.
Now, in your case for 3D, you must flip horizontally, vertically and temporally. This means that you would need to perform the 2D flipping for each slice of your matrix independently, you would then grab single columns in a 3D fashion and treat those as 1D signals. In MATLAB syntax, you would get ker(1,1,:), treat this as a 1D signal, then flip. You would repeat this for ker(1,2,:), ker(1,3,:) etc. until you are finished with the first slice. Bear in mind that we don't go to the second slice or any of the other slices and repeat what we just did. Because you are taking a 3D section of your matrix, you are inherently operating over all of the slices for each 3D column you extract. Therefore, only look at the first slice of your matrix, and so you need to do this to your kernel before computing the convolution:
ker_flipped = flipdim(flipdim(flipdim(ker, 1), 2), 3);
flipdim performs the flipping on a specified axis. In our case, we are doing it vertically, then taking the result and doing it horizontally, and then again doing it temporally. You would then use ker_flipped in your summation instead. Take note that it doesn't matter which order you do the flipping. flipdim operates on each dimension independently, and so as long as you remember to flip all dimensions, the output will be the same.
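(A side note of mine: in newer MATLAB releases the flip function replaces the deprecated flipdim; the result is identical:)
ker_flipped = flip(flip(flip(ker, 1), 2), 3);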
To demonstrate, here's what the output looks like with convn:
c =
4.1837 4.1843 5.1187 6.1535
4.5262 5.3253 5.5181 5.8375
5.1311 4.7648 5.3608 7.1241
Now, to determine what c(1,1) is by hand, you would need to do your calculation on the flipped kernel:
ker_flipped = flipdim(flipdim(flipdim(ker, 1), 2), 3);
c11 = sum(sum(a(1:3,1:4,1).*ker_flipped(:,:,1)))+sum(sum(a(1:3,1:4,2).*ker_flipped(:,:,2)));
The output of what we get is:
c11 =
4.1837
As you can see, this verifies what we get by hand with the calculation done in MATLAB using convn. If you want to compare more digits of precision, use format long and compare them both:
>> format long;
>> disp(c11)
4.183698205668000
>> disp(c(1,1))
4.183698205668001
As you can see, all of the digits are the same, except for the last one. That is attributed to numerical round-off. To be absolutely sure:
>> disp(abs(c11 - c(1,1)));
8.881784197001252e-16
... I think a difference on the order of 10^-16 is good enough for me to show that they're equal, right?
Yes, your understanding of convolution is wrong. Your formula for c11 is not convolution: you just multiplied matching indices and then summed. It's more of a dot-product operation (on tensors trimmed to the same size). I'll try to explain beginning with 1 dimension.
1-dimensional arrays
Entering conv([4 5 6], [2 3]) returns [8 22 27 18]. I find it easiest to think of this in terms of multiplication of polynomials:
(4+5x+6x^2)*(2+3x) = 8+22x+27x^2+18x^3
Use the entries of each array as coefficients of a polynomial, multiply the polynomials, collect like terms, and read off the result from coefficients. The powers of x are here to keep track of what gets multiplied and added. Note that the coefficient of x^n is found in the (n+1)th entry, because powers of x begin with 0 while the indices begin with 1.
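You can check this correspondence in MATLAB directly. Note that MATLAB's own polynomial convention (the one polyval uses) stores coefficients in descending powers, while this answer reads them in ascending powers; conv works for either reading, since reversing both inputs just reverses the output (a small sketch of mine, not part of the original answer):
p = [6 5 4];   % 6x^2 + 5x + 4 in the descending-power convention
q = [3 2];     % 3x + 2
conv(p, q)     % [18 27 22 8], i.e. 18x^3 + 27x^2 + 22x + 8
% compare: conv([4 5 6], [2 3]) = [8 22 27 18], the same coefficients reversed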
2-dimensional arrays
Entering conv2([2 3; 3 1], [4 5 6; 0 -1 1]) returns the matrix
8 22 27 18
12 17 22 9
0 -3 2 1
Again, this can be interpreted as multiplication of polynomials, but now we need two variables: say x and y. The coefficient of x^n y^m is found in (m+1, n+1) entry. The above output means that
(2+3x+3y+xy)*(4+5x+6x^2+0y-xy+x^2y) = 8+22x+27x^2+18x^3+12y+17xy+22x^2y+9x^3y-3xy^2+2x^2y^2+x^3y^2
3-dimensional arrays
Same story. You can think of the entries as coefficients of a polynomial in variables x,y,z. The polynomials get multiplied, and the coefficients of the product are the result of convolution.
'valid' parameter
This keeps only the central part of the convolution: those coefficients in which all terms of the second factor have participated. For this to be nonempty, the second array should have dimensions no greater than the first. (This is unlike the default setting, for which the order of the convolved arrays does not matter.) Example:
conv([4 5 6], [2 3], 'valid') returns [22 27] (compare to the 1-dimensional example above). This corresponds to the fact that in
(4+5x+6x^2)*(2+3x) = 8+22x+27x^2+18x^3
the terms 22x and 27x^2 got contributions from both 2 and 3x.

Eigen Values from Matlab

I'm trying to figure out Eigenvalues/Eigenvectors for large datasets in order to compute
the PCA. I can calculate the Eigenvalues and Eigenvectors for 2x2, 3x3, etc.
The problem is, I have a dataset of size 451x128, from which I compute the covariance matrix;
this gives me 128x128 values. It therefore looks like the following:
A = [ 1, 2, 3, ..., (128 columns)
      5, 4, 1, ..., (128 columns)
      ...
      (128 rows in total) ]
Computing the Eigenvalues and vectors for a 128x128 matrix seems really difficult and
would take a lot of computing power. However, if I instead treat each of the blocks in A as a 2-dimensional (3xN) matrix, I can compute its covariance matrix, which gives me a 3x3 matrix.
My question is this: Would this be a good or reasonable assumption for solving the eigenvalues and vectors? Something like this:
A is a 2-dimensional matrix containing 128x451 values;
for each of the blocks, compute the eigenvalues and eigenvectors of its covariance matrix,
like so:
Eig1 = eig(cov(A{1})) % blocks stored in a cell array
Eig2 = eig(cov(A{2}))
This would then give me 128 sets of eigenvalues (one for each of the blocks inside the 128x128 matrix).
If this is not correct, how does MATLAB handle such large dimensional data?
Have you tried svd()?
Do the singular value decomposition
[U,S,V] = svd(X)
U and V are orthogonal matrices, and the diagonal of S contains the singular values, whose squares are the eigenvalues of the covariance matrix. Sort U and V in descending order based on S.
As kkuilla mentions, you can use the SVD of the original matrix, as the SVD of a matrix is related to the Eigenvalues and Eigenvectors of the covariance matrix as I demonstrate in the following example:
A = [1 2 3; 6 5 4]; % A rectangular matrix
X = A*A'; % The covariance matrix of A
[V, D] = eig(X); % Get the eigenvectors and eigenvalues of the covariance matrix
[U,S,W] = svd(A); % Get the singular values of the original matrix
V is a matrix containing the eigenvectors, and D contains the eigenvalues. Now, the relationship:
S*S' ~ D
U ~ V
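A quick numerical check of these relationships (my addition, not in the original answer; eig does not guarantee an ordering, hence the sorts):
A = [1 2 3; 6 5 4];
X = A*A';
[V, D] = eig(X);
[U, S, W] = svd(A);
sort(diag(S).^2)   % squared singular values of A ...
sort(diag(D))      % ... equal the eigenvalues of A*A'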
As to your own assumption, I may be misreading it, but I think it is false. I can't see why the Eigenvalues of the blocks would relate to the Eigenvalues of the matrix as a whole; they wouldn't correspond to the same Eigenvectors, as the dimensionality of the Eigenvectors wouldn't match. I think your covariances would be different too, but then I'm not completely clear on how you are creating these blocks.
As to how Matlab does it, it does use some tricks. Perhaps the link below might be informative (though it might be a little old). I believe they use (or used) LAPACK and a QZ factorisation to obtain intermediate values.
https://au.mathworks.com/company/newsletters/articles/matlab-incorporates-lapack.html
Use the built-in function:
[Eigenvectors, Eigenvalues] = eig(Matrix)

Eigen vector in SVD

I'm going to compute the eigenvalues and eigenvectors of my matrix data for classification.
The rows represent the different classes and the columns represent the features.
So, for example if I have
X = [2 3 4
     3 2 4
     4 5 6
     8 9 0]
I have to use SVD instead of PCA because the matrix is not square.
What I have done is:
Compute the mean for each row. So I have
Mean=
M1
M2
M3
M4
Subtract the mean of each row from my matrix X. So I have
Subtracted =
[2-M1 3-M1 4-M1]
[3-M2 2-M2 4-M2]
[4-M3 5-M3 6-M3]
[8-M4 9-M4 0-M4]
Covariance matrix = (Subtracted*Subtracted')/(4-1)
[U,S,V] = svd(X)
Are all my steps right? Is it correct to compute the mean for each row (i.e., per class)?
If I want to project my data into the eigenspace (for dimensionality reduction), which matrix holds the eigenvectors I need (U or V)?
You can do PCA whether your matrix is square or not. In fact, your matrix is rarely square, because it has the form n*p where n is the number of observations and p is the number of features. Thus you can use MATLAB's princomp function:
[W, pc] = princomp(data);
where W is a weight matrix and pc is the principal component score. You can see your data projected into the principal component space by,
plot(pc(:,1),pc(:,2),'.');
which shows your data in the first and second principal component directions.
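Note that princomp belongs to older MATLAB releases; in newer ones the pca function (Statistics and Machine Learning Toolbox) supersedes it. A minimal equivalent sketch, with a hypothetical dataset:
data = randn(100, 5);            % hypothetical n-by-p data (not from the question)
[coeff, score] = pca(data);      % coeff: component loadings, score: projected data
plot(score(:,1), score(:,2), '.');
xlabel('PC 1'); ylabel('PC 2');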