Matlab: How to convert a matrix into a Toeplitz matrix

Consider a discrete dynamical system where x[0] = rand() denotes the initial condition of the system.
I have generated an m by n matrix by the following steps: generate m vectors with m different initial conditions, each with dimension N (N indicates the number of samples or elements). This matrix is called R. Using R, how do I create a Toeplitz matrix T?
Mathematically,
R = [ x_0[0],  ..., x_0[n-1];
      ...,     ..., ...;
      x_m[0],  ..., x_m[n-1] ]
and the Toeplitz matrix is
T = [ x[n-1], x[n-2], ..., x[0];
      x[0],   x[n-1], ..., x[1];
      :       :            :
      x[m-2], x[m-3], ..., x[m-1] ]
I tried working with toeplitz(R), but the dimensions change. The dimensions should not change, as seen mathematically.

According to the paper provided (Toeplitz structured chaotic sensing matrix for compressive sensing by Yu et al.) there are two Chaotic Sensing Matrices involved. Let's explore them separately.
The Chaotic Sensing Matrix (Section A)
It is clearly stated that to create such a matrix you have to build m independent signals (sequences) with m different initial conditions (in the range ]0;1[) and then concatenate those signals row-wise (that is, one signal = one row). Each of these signals must have length N. This is exactly your matrix R, which is correctly evaluated as it is. I'd like to suggest a code improvement, though: instead of building a column and then transposing the matrix, you can build the matrix row by row directly:
R=zeros(m,N);
R(:,1)=rand(m,1); %build the first column with m initial conditions
Please note: randn() draws values from a Gaussian (normal) distribution, so they might fall outside the range ]0;1[ stated in the paper (right below equation 9). rand(), by contrast, draws uniformly distributed values in that range.
After that, you can build every row separately with a for-loop. Note that the centering by 0.5 must happen after the whole sequence has been generated: subtracting it inside the recurrence would feed shifted values back into the logistic map and make the sequence diverge.
for i = 1:m
    for j = 2:N % skip the first column
        R(i,j) = 4*R(i,j-1)*(1-R(i,j-1)); % logistic map recurrence
    end
end
R = R - 0.5; % center all values around zero
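As a quick self-contained check (the sizes m and N here are arbitrary, chosen only for illustration):
m = 4; N = 100;                    % illustrative sizes, not from the paper
R = zeros(m,N);
R(:,1) = rand(m,1);                % m initial conditions in ]0;1[
for i = 1:m
    for j = 2:N
        R(i,j) = 4*R(i,j-1)*(1-R(i,j-1)); % logistic map
    end
end
R = R - 0.5;
all(R(:) >= -0.5 & R(:) <= 0.5)    % returns true: all entries lie in the shifted range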
The Toeplitz Chaotic Sensing Matrix (Section B)
It is clearly stated at the beginning of Section B that to build the Toeplitz matrix you should consider a single sequence x with a given, single initial condition. So let's build that sequence (again centering only after the recurrence has run):
x = zeros(1,N);
x(1) = rand();                  % single initial condition in ]0;1[
for j = 2:N                     % skip the first element
    x(j) = 4*x(j-1)*(1-x(j-1)); % logistic map recurrence
end
x = x - 0.5;                    % center the sequence around zero
Now, to build the matrix you can consider:
What does the first row look like? It is the sequence itself, but flipped (i.e. instead of going from 0 to n-1, it goes from n-1 to 0).
What does the first column look like? It is the last element of x concatenated with the elements in the range 0 to m-2.
Let's then build the first row (r) and the first column (c):
r=fliplr(x);
c=[x(end) x(1:m-1)];
Please note: in Matlab indices start from 1, not from 0 (so instead of going from 0 to m-2, we go from 1 to m-1). Also, end refers to the last element of a given array.
Now, looking at the help for the toeplitz() function, it is clearly stated that you can build a non-square Toeplitz matrix by specifying the first row and the first column. Therefore, finally, you can build the matrix as:
T=toeplitz(c,r);
This matrix will indeed have dimensions m-by-N, as reported in the paper.
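A quick sanity check of both claims (a sketch, reusing the c, r and T built above):
size(T)                                         % m-by-N, as in the paper
all(all(T(2:end,2:end) == T(1:end-1,1:end-1)))  % true: the diagonal-constant structure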
Even though the authors call both of them \Phi, they are actually two separate matrices.
They do not take the Toeplitz of the Beta-Like Matrix (a Toeplitz matrix is not a function or operator of some kind), nor do they transform the Beta-Like Matrix into a Toeplitz matrix.
You first have the Beta-Like Matrix (i.e. the Chaotic Sensing Matrix), and then the Toeplitz-structured Chaotic Sensing Matrix: that structure is the one typical of Toeplitz matrices, i.e. diagonal-constant (all elements along a given diagonal have the same value).


Computing the SVD of a rectangular matrix

I have a matrix M of size K x N, where K = 49152 is the dimension of the problem and N = 52 is the number of observations.
I have tried to use [U,S,V] = svd(M), but doing this I run out of memory.
I found other code which uses [U,S,V] = svd(cov(M)) and it works well. My questions are: what is the meaning of using cov(M) inside the SVD, and what is the meaning of the resulting [U,S,V]?
Finding the SVD of the covariance matrix is a method to perform Principal Components Analysis, or PCA for short. I won't get into the mathematical details here, but PCA performs what is known as dimensionality reduction. If you'd like a more formal treatment of the subject, you can read my post about it here: What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?. Simply put, dimensionality reduction projects the data stored in the matrix M onto a lower-dimensional surface with the least amount of projection error. In this matrix, we are assuming that each column is a feature or a dimension and each row is a data point.

I suspect the reason you run out of memory when applying the SVD to the actual data matrix M, rather than to the covariance matrix, is that you have a large number of data points and a small number of features. The covariance matrix finds the covariance between pairs of features. If M is an m x n matrix, where m is the total number of data points and n is the total number of features, cov(M) gives you an n x n matrix, so you are applying the SVD to something much smaller than M.
As for the meaning of U, S and V: for dimensionality reduction specifically, the columns of V are what are known as the principal components. V is ordered such that the first column is the axis of your data that describes the greatest amount of variability. As you move from the second column up to the nth column, you introduce more axes and the variability described decreases. By the nth column, you are describing your data in its entirety, without reducing any dimensions. The diagonal values of S denote what is called the variance explained, and they respect the same ordering as V: as you progress through the singular values, they tell you how much of the variability in your data is described by each corresponding principal component.
To perform the dimensionality reduction, you can either take U and multiply by S, or take your mean-subtracted data and multiply by V. In other words, supposing X is the matrix M where each column's mean has been computed and then subtracted from that column, the following relationship holds:
U*S = X*V
To actually perform the final dimensionality reduction, you take either US or XV and retain the first k columns where k is the total amount of dimensions you want to retain. The value of k depends on your application, but many people choose k to be the total number of principal components that explains a certain percentage of your variability in your data.
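As a concrete sketch of that relationship (the data and the value of k below are hypothetical, chosen only for illustration):
M = randn(1000, 52);               % hypothetical data: rows are points, columns are features
X = bsxfun(@minus, M, mean(M,1));  % mean-subtract each column
[U, S, V] = svd(X, 'econ');        % economy-size SVD of the centered data
k = 10;                            % number of dimensions to retain
Z1 = U(:,1:k) * S(1:k,1:k);        % reduced data via U*S
Z2 = X * V(:,1:k);                 % reduced data via X*V
norm(Z1 - Z2)                      % ~0, confirming U*S = X*V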
For more information about the link between SVD and PCA, please see this post on Cross Validated: https://stats.stackexchange.com/q/134282/86678
Instead of [U, S, V] = svd(M), which tries to build a matrix U that is 49152 by 49152 (= 18 GB 😱!), do svd(M, 'econ'). That returns the "economy-class" SVD, where U will be 49152 by 52, S is 52 by 52, and V is also 52 by 52.
cov(M) will remove each dimension's mean and evaluate the inner product, giving you a 52 by 52 covariance matrix. You can implement your own version of cov, called mycov, as
function C = mycov(M)
    M = bsxfun(@minus, M, mean(M, 1)); % subtract each dimension's mean over all observations
    C = (M' * M) / (size(M, 1) - 1);   % normalize by n-1 to match MATLAB's default cov
end
(You can verify this works by looking at mycov(randn(49152, 52)), which should be close to eye(52), since each element of that array is IID-Gaussian.)
There are a lot of magical linear-algebraic properties of and relationships between the SVD and EVD (i.e., singular value and eigenvalue decompositions): because the covariance matrix cov(M) is a Hermitian matrix, its left- and right-singular vectors are the same, and are in fact also cov(M)'s eigenvectors. Furthermore, cov(M)'s singular values are also its eigenvalues: so svd(cov(M)) is just an expensive way to get eig(cov(M)) 😂, up to ±1 and reordering.
As @rayryeng explains at length, usually people use svd(M, 'econ') because they want eig(cov(M)) without having to evaluate cov(M) at all: you never want to compute cov(M) explicitly, because it's numerically unstable. I recently wrote an answer that showed, in Python, how to compute eig(cov(M)) using svd(M2, 'econ'), where M2 is the zero-mean version of M, in the practical application of color-to-grayscale mapping; it might give you more context.
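To make that last point concrete, here is a hedged sketch of recovering eig(cov(M)) from the SVD of the centered data (variable names are illustrative):
M = randn(1000, 52);                     % hypothetical data
M2 = bsxfun(@minus, M, mean(M,1));       % zero-mean version of M
[~, S, ~] = svd(M2, 'econ');
eigvals = diag(S).^2 / (size(M,1) - 1);  % eigenvalues of cov(M), without ever forming it
norm(sort(eigvals) - sort(eig(cov(M))))  % ~0, up to floating-point error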

Difference in between Covariance and Correlation Matrix

In Matlab, I have created a matrix A with size (244x2014723)
and a matrix B with size (244x1)
I was able to calculate the correlation matrix using corr(A,B), which yielded a matrix of size 2014723x1. So every column of matrix A is correlated with matrix B, giving one row value in the 2014723x1 result.
My question is: when I ask for a covariance matrix using cov(A,B), I get an error saying A and B must be the same size. Why do I get this error? How is the method behind corr(A,B) any different from cov(A,B)?
The answer is pretty clear if you read the documentation:
cov:
If A and B are matrices of observations, cov(A,B) treats A and B as vectors and is equivalent to cov(A(:),B(:)). A and B must have equal size.
corr
corr(X,Y) returns a p1-by-p2 matrix containing the pairwise correlation coefficient between each pair of columns in the n-by-p1 and n-by-p2 matrices X and Y.
The difference between corr(X,Y) and the MATLABĀ® function corrcoef(X,Y) is that corrcoef(X,Y) returns a matrix of correlation coefficients for the two column vectors X and Y. If X and Y are not column vectors, corrcoef(X,Y) converts them to column vectors.
One way you could get the covariances of your vector with each column of your matrix is to use a loop. Another way (which might be inefficient depending on the size) is
C = cov([B,A])
and then look at the first row (or column) of C.
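With 2014723 columns, though, cov([B,A]) would try to build a roughly 2014724-by-2014724 matrix, so a loop-free sketch that computes only the covariances you need may be preferable (this uses cov's default n-1 normalization; variable names are illustrative):
Bc = B - mean(B);                      % center the vector
Ac = bsxfun(@minus, A, mean(A,1));     % center each column of A
covBA = (Ac' * Bc) / (size(A,1) - 1);  % 2014723x1 vector: covariance of B with each column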
See the documentation: in the "More About" section, the equation describing how cov(A,B) is computed makes it clear why the two inputs must be the same size. The summation runs over a single index that enumerates the elements of A and B.
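A two-line demonstration of that documented behavior (sizes are arbitrary):
A = randn(3,4); B = randn(3,4);
isequal(cov(A,B), cov(A(:),B(:)))   % true: both matrix inputs are flattened to vectors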

Sparse diagonal matrix solver

I want to solve, in MatLab, a linear system (corresponding to a PDE system of two equations written in a finite difference scheme). The action of the system matrix (corresponding to one of the diffusive terms of the PDE system) on the unknown field u reads, symbolically (n is the time step, j is the grid point, and the coefficient is D_11 in the u equation and D_21 in the v equation):
( -u_{j-1}^{n+1} + 2*u_j^{n+1} - u_{j+1}^{n+1} ) / dx^2
(the full matrix form was shown as an image in the original post.)
The above matrix is to be understood as A, where A*U^{n+1} = B is the system. U contains the u's and the v's (the second unknown field of the PDE system) interleaved: U = [u_1, v_1, u_2, v_2, ..., u_J, v_J].
So far I have been filling this matrix using spdiags and diag in the following expensive way:
E = zeros(2*J,1);
E(1:2:2*J) = 1;
E(2:2:2*J) = 0;
Dvec = zeros(2*J,1);
for i = 3:2:2*J-3
    Dvec(i) = D_11((i+1)/2);
end
for i = 4:2:2*J-2
    Dvec(i) = D_21(i/2);
end
A = diag(Dvec)*spdiags([-E,-E,2*E,2*E,-E,-E],[-3,-2,-1,0,1,2],2*J,2*J)/(dx^2);
and for the solution
[L,U] = lu(A);
y = L\B;
U(:) = U\y;
where B is the right hand side vector.
This is obviously unreasonably expensive, because it builds a dense 2J-by-2J matrix (via diag), performs a 2J-by-2J matrix multiplication, etc.
Then comes my question: is there a way to solve the system without passing MatLab a matrix, e.g., by passing the vector Dvec or alternatively directly D_11 and D_22?
This would spare me a lot of memory and processing time!
Matlab doesn't store sparse matrices as JxJ arrays but as lists of size O(J). See
http://au.mathworks.com/help/matlab/math/constructing-sparse-matrices.html
Since you are using the spdiags function to construct A, Matlab should already recognize A as sparse and you should indeed see such a list if you display A in console view.
For a banded matrix like yours, the L and U factors should already be sparse.
So you just need to ensure that the \ operator uses the appropriate sparse algorithm according to the rules in http://au.mathworks.com/help/matlab/ref/mldivide.html. It's not clear whether the vector B will already be considered sparse, but you could recast it as a diagonal matrix which should certainly be considered sparse.
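For example, the dense diag(Dvec) in the construction above is the real memory hog (and multiplying by it makes the product dense again); replacing it with a sparse diagonal keeps everything sparse end to end, after which backslash alone picks a suitable banded solver. A sketch, assuming D_11, D_21 (as vectors), dx, J and B are defined as in the question:
E = zeros(2*J,1);
E(1:2:2*J) = 1;                          % same alternating pattern as in the question
Dvec = zeros(2*J,1);
Dvec(3:2:2*J-3) = D_11(2:J-1);           % vectorized version of the two fill loops
Dvec(4:2:2*J-2) = D_21(2:J-1);
A = spdiags(Dvec,0,2*J,2*J) * spdiags([-E,-E,2*E,2*E,-E,-E],[-3,-2,-1,0,1,2],2*J,2*J) / dx^2;
U = A \ B;                               % sparse backslash: no explicit lu needed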

Remove duplicates in correlations in matlab

Please see the following issue:
P = rand(4,4);
for i = 1:size(P,2)
    for j = 1:size(P,2)
        [r,p] = corr(P(:,i), P(:,j))
    end
end
Clearly, the loop computes each correlation twice (i.e., both corr(P(:,1),P(:,4)) and corr(P(:,4),P(:,1))). Does anyone have a suggestion on how to avoid this? Perhaps without using a loop?
Thanks!
I have four suggestions for you, depending on what exactly you are doing to compute your matrices. I'm assuming the example you gave is a simplified version of what needs to be done.
First Method - Adjusting the inner loop index
One thing you can do is change your j loop index so that it only goes from 1 up to i. This way, you get a lower triangular matrix and just concentrate on the values within the lower triangular half of your matrix. The upper half would essentially be all set to zero. In other words:
for i = 1 : size(P,2)
    for j = 1 : i
        %// Your code here
    end
end
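For instance, filling lower-triangular matrices of coefficients and p-values (a sketch of what the loop body might look like; Rho and Pval are illustrative names):
n = size(P,2);
Rho = zeros(n); Pval = zeros(n);
for i = 1:n
    for j = 1:i
        [Rho(i,j), Pval(i,j)] = corr(P(:,i), P(:,j)); % each pair computed only once
    end
end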
Second Method - Leave it unchanged, but then use unique
You can go ahead and use the same matrix like you did before with the full two for loops, but you can then filter the duplicates by using unique. In other words, you can do this:
[Y,indices] = unique(P);
Y will give you a list of the unique values within the matrix P, and indices will give you the locations where these occur within P. Note that these are column-major (linear) indices, so if you want the row and column locations you can do:
[rows,cols] = ind2sub(size(P), indices);
Third Method - Use pdist and squareform
Since you're looking for a solution that requires no loops, take a look at the pdist function. Given an M x N matrix, pdist finds distances between each pair of rows, so the matrix is transposed first to make the columns of P the observations; squareform then arranges these distances into a matrix like the one you have seen above. In other words, do this:
dists = pdist(P.', 'correlation');
distMatrix = squareform(dists);
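Note that the 'correlation' distance is defined as one minus the correlation coefficient, so if you want the correlations themselves you can convert back:
corrMatrix = 1 - squareform(dists);  % recover the correlation coefficients
% the diagonal comes out as 1 automatically, since squareform puts zeros there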
Fourth Method - Use the corr method straight out of the box
You can just use corr in the following way:
[rho, pvals] = corr(P);
corr in this case will produce an m x m matrix containing the correlation coefficient between each pair of columns of the n x m matrix stored in P.
Hopefully one of these will work!
Does this work?
for i = 1:size(P,2)
    for j = 1:i
Since you are just correlating each column with the others, why not just use the following (straight from the documentation)?
[Rho,Pval] = corr(P);
I don't have the Statistics Toolbox, but according to http://www.mathworks.com/help/stats/corr.html,
corr(X) returns a p-by-p matrix containing the pairwise linear correlation coefficient between each pair of columns in the n-by-p matrix X.

Find largest subset of linearly independent vectors with Matlab

I need to create a matlab function that finds the largest subset of linearly independent vectors in a matrix A.
Initialize the output of the program to 0, which corresponds to the empty set (containing no column vectors). Scan the columns of A from left to right, one by one; if adding the current column vector to the set of linearly independent vectors found so far makes the new set linearly DEPENDENT, skip this vector; otherwise, add it to the solution set. Then move to the next column.
function [ out ] = maxindependent(A)
%MAXINDEPENDENT takes a matrix A and produces an array in which the columns
%are a subset of independent vectors with maximum size.
[r, c] = size(A);
out = 0;
A = A(:,rank(A))
for jj = 1:c
    M = [A A(:,jj)]
    if rank(M) ~= size(M,2)
        A = A
    elseif rank(M) == size(M,2)
        A = M
    end
end
out = A
if max(out) == 0
    0;
end
end
The number of linearly independent vectors in a matrix is equal to the rank of the matrix, and a particular subset of linearly independent vectors is not unique. Any 'largest subset' of linearly independent vectors will have size equal to the rank.
There is a function for this in MATLAB:
n = rank(A);
The algorithm you described is not necessary; you should just use the SVD. There is a concise way to do it here: how to get the maximally independent vectors given a set of vectors in MATLAB?
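For completeness, a standard way to actually extract such a subset (a sketch, not taken verbatim from the linked post) is QR with column pivoting, which orders the columns so that each pivot adds as much new information as possible:
[~, R, e] = qr(A, 0);                 % economy QR with column pivoting; e is a permutation vector
tol = max(size(A)) * eps(abs(R(1)));  % tolerance in the spirit of rank()
r = nnz(abs(diag(R)) > tol);          % numerical rank read off the R factor
out = A(:, sort(e(1:r)));             % a largest subset of linearly independent columns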