MTIMES is not fully supported for integer classes. At least one input must be scalar - matlab

I'm trying to implement a 1-dimensional DFT without using MATLAB built-in functions such as fft(). This is my code:
function [Xk] = dft1(xn)
N = length(xn);
n = 0:1:N-1;          % row vector for n
k = 0:1:N-1;          % row vector for k
WN = exp(-1j*2*pi/N); % twiddle factor (w)
nk = n'*k;            % creates an N by N matrix of nk values
WNnk = WN .^ nk;      % DFT matrix
Xk = WNnk*xn;
end
When I run the code after using the following commands:
I = imread('sample.jpg')
R = dft1(I)
I get this particular error:
Error using *
MTIMES is not fully supported for
integer classes. At least one input
must be scalar.
To compute elementwise TIMES, use
TIMES (.*) instead.
Can someone please help me figure out how to solve this problem?
Note: I am still at the very beginning level of learning MATLAB.
Thank you very much.

You just need to cast the data to double, then run your code again. Basically, what the error is saying is that you are trying to mix classes of data when applying a matrix multiplication between two variables. Specifically, the numerical vectors and matrices you define in dft1 are all of type double, yet your image is probably of type uint8 when you read it in through imread. This is why you're getting that integer error: uint8 is an integer class, and you are trying to perform matrix multiplication between this data type and a double data type. Bear in mind that you can mix data types, so long as one operand is a single number / scalar; this is also what the error is alluding to. Matrix multiplication of variables that are not floating point (double, single) is not supported in MATLAB, so you need to make sure that your image data and your DFT matrices are the same type before applying your algorithm.
As such, simply do:
I = imread('sample.jpg');
R = dft1(double(I));
Minor Note
This code is quite clever, and it (by default) applies the 1D DFT to all columns of your image. The output will be a matrix of the same size as I where each column is the 1D DFT result of each column from I.
This is something to think about, but should you want to apply this to all rows of your image, you would simply transpose I before it goes into dft1 so that the rows become columns and you can operate on these new "columns". Once you're done, you simply have to transpose the result back so that you'll get your output from dft1 shaped such that the results are applied on a per row basis. Therefore:
I = imread('sample.jpg');
R = dft1(double(I.')).';
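If you want a quick sanity check, dft1 should agree with the built-in fft (which also operates down each column of a matrix) to within roundoff. A minimal test:
x = rand(8, 3);               % small test input, one signal per column
err = norm(dft1(x) - fft(x)); % fft computes the same column-wise DFT
disp(err)                     % should be on the order of machine epsilon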
Hope this helps! Good luck!

Related

Calculating covariance in Matlab for large dataset and different mean

So I'm trying to implement an EM-Algorithm to train a Gaussian Class Conditional model for classifying data. I'm stuck in the M-step at the moment because I can't figure out how to calculate the covariance matrix.
The problem is I have a big data set, and using a for loop to go through each point would be way too slow. I also can't use the covariance function cov(), because I need to use a mean which I calculated using a different formula (a custom mu).
Is there a way to adjust cov() to use the mean I want? Or is there another way I could do this without for loops?
Edit: Forgot to explain what the data matrix is like. It's an n-by-3 matrix where each row is a data point.
It technically needs to work for the general n-by-m case, but n is usually really big (1000 or more) while m is relatively small.
You can calculate your covariance matrix manually. Let data be the matrix containing all your variables (for example, [x y]) and mu your custom mean; then proceed as follows:
n = size(data,1);
data_dem = data - (ones(n,1) * mu);
cov_mat = (data_dem.' * data_dem) ./ (n - 1);
Notice that I used Bessel's correction (n-1 instead of n) because the MATLAB cov function uses it, unless you specify the third argument as 1:
cov_mat = cov(x,y,1);
C = cov(___,w) specifies the normalization weight for any of the
previous syntaxes. When w = 0 (default), C is normalized by the number
of observations-1. When w = 1, it is normalized by the number of
observations.
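As a quick check with made-up data: if you plug the ordinary column mean in for mu, the manual result should agree with cov (which also normalizes by n - 1):
data = randn(1000, 3);                        % hypothetical n-by-m data, one observation per row
mu = mean(data, 1);                           % swap in your custom mean here
n = size(data, 1);
data_dem = data - (ones(n, 1) * mu);
cov_mat = (data_dem.' * data_dem) ./ (n - 1);
max(abs(cov_mat(:) - reshape(cov(data), [], 1))) % should be ~ machine epsilon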

How can I use an NxM matrix as my initial condition in `pdepe`

I solved a PDE using the MATLAB solver pdepe. The initial condition is the solution of an ODE, which I solved in a different m-file. Now I have the ODE solution in matrix form, of size NxM. How can I use that as my IC in pdepe? Is that even possible? When I use a for loop, pdepe takes only the last iteration as the initial condition. Any help is appreciated.
Per the pdepe documentation, the initial condition function for the solver has the syntax:
u = icFun(x);
where the initial value of the PDE at a specified value of x is returned in the column vector u.
So the only time an initial condition will be an N x M matrix is when the PDE is a system of N unknowns with M spatial mesh points.
Therefore, an N x M matrix could be used to populate the initial condition, but there would need to be some mapping that associates a given column with a specific value of x. For instance, in the main function that calls pdepe, there could be
% icData is the NxM matrix of data
% xMesh is a 1xM row vector that has the spatial value for each column of icData
icFun = @(x) icData(:,x==xMesh);
The only shortcoming of this approach is that the mesh of the initial condition, and therefore the pdepe solution, is constrained by the initial data. This can be overcome by using an interpolation scheme like:
% icData is the NxM matrix of data
% xMesh is a 1xM row vector that has the spatial value for each column of icData
icFun = @(x) interp1(xMesh,icData',x,'pchip')';
where the transposes are present to conform to the interpretation of the data by interp1.
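For context, the resulting handle then goes in the usual initial-condition slot of the pdepe call; here m is the symmetry parameter, and pdefun, bcfun, xMeshSolve, and tspan are placeholders for your own problem definition:
sol = pdepe(m, pdefun, icFun, bcfun, xMeshSolve, tspan);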
It is easier for you to use a 'method of lines' approach to define different conditions on each mesh node rather than using pdepe.
MOL is also more flexible in different situations, like 3D problems.
Just saying :))
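For what it's worth, here is a tiny method-of-lines sketch for the heat equation u_t = u_xx (a hypothetical example: second-order finite differences in space, zero boundary values via ghost nodes, time integration left to ode15s):
N = 50;
x = linspace(0, 1, N).';   % spatial mesh
dx = x(2) - x(1);
u0 = sin(pi*x);            % each mesh node gets its own initial value directly
rhs = @(t, u) ([u(2:end); 0] - 2*u + [0; u(1:end-1)]) / dx^2; % discretized u_xx
[t, u] = ode15s(rhs, [0 0.1], u0);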
My experience is that the function defining the initial conditions must return a column vector, i.e. Nx1 matrix if you have N equations. Even if your xmesh is an array of M numbers, the matrix corresponding to the initial condition is still Nx1. You can still return a spatially varying initial condition, and my solution was the following.
I defined an anonymous function, pdeic, which was passed as an argument to pdepe:
pdeic = @(x) pdeic2(x,p1,p2,p3);
And I also defined pdeic2, which always returns a 3x1 column vector, but depending on x, the value is different:
function u0 = pdeic2(x,extrap1,extrap2,extrap3)
if x == extrap3
    u0 = [extrap1;0;extrap2];
else
    u0 = [extrap1;0;0];
end
So going back to your original question, my guess would be that you have to pass the solution of your ODE to what is named 'pdeic2' in my example and, depending on x, return a column vector.

out of memory error when using diag function in matlab

I have an array of double values M where size(M)=15000.
I need to convert this array to a diagonal matrix with the command diag(M),
but I get the famous 'out of memory' error.
I run MATLAB with the -nojvm option to gain memory space,
and with the 3GB switch on Windows.
I also tried converting my array to double precision,
but the problem persists.
Any other ideas?
There are much better ways to do whatever you're probably trying to do than generating the full diagonal matrix (which will be extremely sparse).
Multiplying that matrix, which has 225 million elements, by other matrices will also take a very long time.
I suggest you restructure your algorithm to take advantage of the fact that:
diag(M)(a, b) = M(a)   if a == b
                0      if a != b
You'll save a huge amount of time and memory and whoever is paying you will be happier.
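For example, if the diagonal matrix only ever appears in a product, you can replace it with elementwise row scaling and never build the 15000-by-15000 matrix (a sketch; X stands in for whatever diag(M) would have multiplied):
M = rand(15000, 1);            % your 15000-element vector
X = rand(15000, 10);           % hypothetical right-hand operand
Y = bsxfun(@times, M(:), X);   % identical to diag(M) * X
% On R2016b or newer, implicit expansion allows simply: Y = M(:) .* X;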
This is what a diagonal matrix looks like: every entry except those along the diagonal (the ones where the row index equals the column index) is zero. Relating this to your provided values, diag(M) = A and M(n) = A(n,n).
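For a small concrete example:
M = [4 7 2];
A = diag(M)
% A =
%      4     0     0
%      0     7     0
%      0     0     2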
Use a sparse matrix:
M = spdiags( M, 0, numel(M), numel(M) );
For more info see matlab doc on spdiags and on sparse matrices in general.
If you have an n-by-n square matrix, M, you can directly extract the diagonal elements into a row vector via
n = size(M,1); % Or length(M), but this is more general
D = M(1:n+1:end); % 1-by-n vector containing diagonal elements of M
If you have an older version of Matlab, the above may even be faster than using diag (if I recall, diag wasn't always a compiled function). Then, if you need to save memory and only need the diagonal of M and can get rid of the rest, you can do this:
M(:) = 0; % Zero out M
M(1:n+1:end) = D; % Insert diagonal elements back into M
clear D; % Clear D from memory
This should not allocate much more than about (n^2+n)*8 = n*(n+1)*8 bytes at any one time for double-precision values (some extra will be needed for indexing operations); for n = 15000 that still comes to roughly 1.8 GB for the full matrix alone. There are other ways to do the above that might save a bit more if you need a (full, non-sparse) n-by-n diagonal matrix, but there's no way to get around the fact that you'll need n^2*8 bytes at a minimum just to store the matrix of doubles.
However, you're still likely to run into problems. I'd investigate sparse datatypes as @user2379182 suggests. Or rework your algorithms. Or better yet, look into obtaining 64-bit Matlab and/or a 64-bit OS!

Matlab - how to compute PCA on a huge data set [duplicate]

Possible Duplicate:
MATLAB is running out of memory but it should not be
I want to perform PCA analysis on a huge data set of points. To be more specific, I have size(dataPoints) = [329150 132], where 329150 is the number of data points and 132 is the number of features.
I want to extract the eigenvectors and their corresponding eigenvalues so that I can perform PCA reconstruction.
However, when I am using the princomp function (i.e. [eigenVectors projectedData eigenValues] = princomp(dataPoints);), I obtain the following error:
>> [eigenVectors projectedData eigenValues] = princomp(dataPoints);
Error using svd
Out of memory. Type HELP MEMORY for your options.
Error in princomp (line 86)
[U,sigma,coeff] = svd(x0,econFlag); % put in 1/sqrt(n-1) later
However, if I am using a smaller data set, I have no problem.
How can I perform PCA on my whole dataset in Matlab? Have someone encountered this problem?
Edit:
I have modified the princomp function and tried to use svds instead of svd, but I am obtaining pretty much the same error. I have included the error below:
Error using horzcat
Out of memory. Type HELP MEMORY for your options.
Error in svds (line 65)
B = [sparse(m,m) A; A' sparse(n,n)];
Error in princomp (line 86)
[U,sigma,coeff] = svds(x0,econFlag); % put in 1/sqrt(n-1) later
Solution based on Eigen Decomposition
You can first compute PCA on X'X as @david said. Specifically, see the script below:
sz = [329150 132];
X = rand(sz);
[V D] = eig(X.' * X);
Actually, V holds the right singular vectors, and it holds the principal vectors if you put your data vectors in rows. The eigenvalues, D, are the variances along each direction. The singular values, which are the standard deviations, are computed as the square root of the variances:
S = sqrt(D);
Then, the left singular vectors, U, are computed using the formula X = USV'. Note that U refers to the principal components if your data vectors are in columns.
U = X*V*S^(-1);
Let us reconstruct the original data matrix and see the L2 reconstruction error:
X2 = U*S*V';
L2ReconstructionError = norm(X(:)-X2(:))
It is almost zero:
L2ReconstructionError =
6.5143e-012
If your data vectors are in columns and you want to convert your data into eigenspace coefficients, you should do U.'*X.
This code snippet takes around 3 seconds on my moderate 64-bit desktop.
Solution based on Randomized PCA
Alternatively, you can use a faster approximate method based on randomized PCA. Please see my answer on Cross Validated. You can directly compute fsvd and get U and V instead of using eig.
You may employ randomized PCA if the data size is too big. But I think the previous way is sufficient for the size you gave.
My guess is that you have a huge data set. You don't need all of the svd coefficients. In this case, use svds instead of svd:
Taken directly from Matlab help:
s = svds(A,k) computes the k largest singular values and associated singular vectors of matrix A.
From your question, I understand that you don't call svd directly. But you might as well take a look at princomp (It is editable!) and alter the line that calls it.
You probably needed to calculate an n-by-n matrix somewhere in your computation, which is to say:
329150 * 329150 * 8 bytes ~ 866 GB
of space, which explains why you're getting a memory error. There seems to be an efficient way to calculate PCA using princomp(X, 'econ'), which I suggest you try.
More on this on Stack Overflow and MathWorks.
Manually compute X'X (132x132) and run svd on it. Or find a NIPALS script.
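A minimal sketch of that suggestion, assuming observations in rows as in the question (variable names are illustrative):
Xc = bsxfun(@minus, dataPoints, mean(dataPoints, 1)); % center the data
C = (Xc.' * Xc) / (size(Xc, 1) - 1); % 132x132 covariance, fits easily in memory
[U, S, V] = svd(C);                  % for symmetric C, columns of U are the principal directions
score = Xc * U;                      % projected data, 329150x132; diag(S) holds the variances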

How to normalize data around the column average in MATLAB?

I am trying to take a matrix and normalize the values in each cell around the average for that column. By normalize I mean subtract the mean value for that column from the value in each cell, i.e. subtract the mean for Column1 from the values in Column1, ..., subtract the mean for ColumnN from the values in ColumnN. I am looking for a script in MATLAB. Thanks!
You could use the function mean to get the mean of each column, then the function bsxfun to subtract that from each column:
M = bsxfun(@minus, M, mean(M, 1));
Additionally, starting in version R2016b, you can take advantage of the fact that MATLAB will perform implicit expansion of operands to the correct size for the arithmetic operation. This means you can simply do this:
M = M-mean(M, 1);
Try the mean function for starters. Passing a matrix to it will average each column and return a row vector.
Next, you need to subtract off the mean. To do that, the matrices must be the same size, so use repmat on your mean row vector.
a=rand(10);
abar=mean(a);
abar=repmat(abar,size(a,1),1);
anorm=a-abar;
or the one-liner:
anorm=a-repmat(mean(a),size(a,1),1);
% Assuming your matrix is in A
m = mean(A);
A_norm = A - repmat(m,size(A,1),1)
As has been pointed out, you'll want the mean function, which when called without any additional arguments gives the mean of each column in the input. A slight complication then comes up because you can't simply subtract the mean -- its dimensions are different from the original matrix.
So try this:
a = magic(4)
b = a - repmat(mean(a),[size(a,1) 1]) % subtract columnwise mean from elements in a
repmat replicates the mean to match the data dimensions.