I am trying to perform PCA on the MNIST data set. I have the following code so far.
...load data into MATLAB
% Centre data matrix
imagesMean = mean(images);
imagesShifted = images - imagesMean;
% Compute covariance matrix of mean shifted images
covariance = cov(imagesShifted);
Trying to do this gives me the following response:
Out of memory. Type "help memory" for your options.
Error in cov (line 155) c = (xc' * xc) ./ denom;
Error in PCA (line 27) covariance = cov(imagesShifted);
imagesShifted is a 784x60000 double matrix.
I am using a MacBook Pro 2015 with 16GB RAM and a 2.8 GHz processor and a dedicated graphics card.
I looked at the help for the memory command, but the information only seems relevant to Windows machines. I also looked at the MathWorks page on resolving out-of-memory issues, but wasn't sure how to proceed based on that information.
How can I get around this issue?
For large data sets, I suggest you use MATLAB's princomp function with the 'econ' flag enabled.
https://es.mathworks.com/help/stats/princomp.html
Or use the pca function with the 'Economy' option, or by specifying the 'NumComponents' you want.
https://es.mathworks.com/help/stats/pca.html
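As a hedged sketch (variable names taken from the question): pca and cov treat rows as observations, so the 784x60000 matrix should be transposed to 60000x784 first; otherwise cov tries to build a 60000x60000 matrix, which is what exhausts memory here. The number of components below is an arbitrary illustration.

% Sketch assuming `images` is the 784x60000 pixel-by-image matrix from the
% question; pca centres the data itself, so the manual mean shift is optional.
X = images';                                          % 60000 observations x 784 pixels
[coeff, score, latent] = pca(X, 'NumComponents', 50); % keep the first 50 PCs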
I am looking for functions to perform segmentation of noisy medical images (grayscale) with GMM (Gaussian Mixture Models).
I have found in MATLAB:
gm = gmdistribution(mu,sigma)
idx = cluster(gm,X)
given X, my grayscale image.
How would you define mu and sigma? What size should they be? And how would you initialize them?
I have tried the following (given an image of size (576x720)):
mu = rand(5,size(X,2));
sigma = ones(720,720);
gm = gmdistribution(mu,sigma);
idx = cluster(gm,X);
but I get an error:
Error using wdensity (line 29)
Ill-conditioned covariance created.
Error in gmdistribution/cluster (line 59)
log_lh=wdensity(X,obj.mu, obj.Sigma, obj.PComponents, obj.SharedCov, CovType);
I have a basic idea of how GMM works, i.e. soft clustering, but I'd like help from someone more advanced to understand what I'm doing here.
Wrong function. You are looking for fitgmdist(X,k), where you input your estimate of the number of objects to be segmented as k. The program will then attempt to estimate mu and sigma using the EM algorithm.
That "Ill-conditioned covariance created" warning is typical; you are going to see it a lot if your data is noisy. I suggest regularization by tweaking the 'RegularizationValue' parameter, possibly setting constraints on the covariance structures, and/or filtering the noisy image. I've always had good results with the BM3D filter (for 2D images) and the BM4D filter (for 3D images). A rough sketch of this is given below.
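As a rough sketch of that suggestion (the image variable, the number of components k, and the regularization value are illustrative assumptions, not values from the question):

% Hedged sketch: fit a GMM to pixel intensities with fitgmdist and use the
% component assignments as a segmentation.
X = double(img(:));                               % one intensity feature per pixel
k = 3;                                            % guessed number of tissue classes
gm = fitgmdist(X, k, 'RegularizationValue', 1e-4, ...
               'Options', statset('MaxIter', 500));
idx = cluster(gm, X);                             % component assignment per pixel
segmented = reshape(idx, size(img));              % back to image dimensions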
I'm happy to help if you have any specific questions, but you are also going to have to do your homework on this. Image processing is hard; there are a bunch of moving parts you need to handle before even the basic stuff starts to work reliably.
I am implementing the (dual) simplex algorithm in Matlab/Octave.
My algorithm works just fine for a small test problem, but as soon as I try a bigger problem such as afiro.mps (from http://www.netlib.org/lp/data/) I get the warning "matrix singular to machine precision, rcond=0" and Octave throws an error (or does not terminate). The exact commands are:
x = zeros(n,1);
x(B) = A(:,B) \ b; % Compute primal variables
y = A(:,B)' \ c(B); % Compute dual variables
The problem is in standard form
min c*x
s.t. Ax=b
x>=0
A is an m-by-n matrix and B is the index vector of the inactive constraints (basic variables).
As I am doing a two-phase simplex, I choose 1:size(A,1) as the initial basis for the dual simplex.
The problem is read from an MPS file via a self-coded reader. I expect the MPS file is read correctly, as the glpk function solves the problem correctly when given A, b, c as input parameters.
Is there some way to avoid the warning or do I have an error in my coding?
The basis matrices of linear programming problems usually have very bad condition numbers. This is one of the difficulties when implementing a stable simplex algorithm.
You should have a look at this paper that explains this phenomenon:
On the Factorization of Simplex Basis Matrices
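One hedged workaround for the warning, assuming the column selection itself is correct, is to check the basis condition before solving and fall back to a pseudoinverse. This is illustrative only and not part of the paper above:

% Sketch: guard the two basis solves against a nearly singular basis.
AB = A(:, B);
if rcond(full(AB)) < 1e-12
    warning('Basis nearly singular; falling back to pinv (results may be unreliable).');
    x(B) = pinv(full(AB))  * b;
    y    = pinv(full(AB')) * c(B);
else
    x(B) = AB  \ b;          % primal variables
    y    = AB' \ c(B);       % dual variables
end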
I am performing logistic regression in MATLAB with L2 regularization on text data. My program works well for small datasets. For larger sets, it keeps running infinitely.
I have seen the potentially duplicate question (matlab fminunc not quitting (running indefinitely)). In that question, the cost for initial theta was NaN and there was an error printed in the console. For my implementation, I am getting a real valued cost and there is no error even with verbose parameters being passed to fminunc(). Hence I believe this question might not be a duplicate.
I need help in scaling it to larger sets. The size of the training data I am currently working on is roughly 10k*12k (10k text files cumulatively containing 12k words). Thus, I have m=10k training examples and n=12k features.
My cost function is defined as follows:
function [J gradient] = costFunction(X, y, lambda, theta)
[m n] = size(X);
g = inline('1.0 ./ (1.0 + exp(-z))');
h = g(X*theta);
J =(1/m)*sum(-y.*log(h) - (1-y).*log(1-h))+ (lambda/(2*m))*norm(theta(2:end))^2;
gradient(1) = (1/m)*sum((h-y) .* X(:,1));
for i = 2:n
gradient(i) = (1/m)*sum((h-y) .* X(:,i)) - (lambda/m)*theta(i);
end
end
I am performing optimization using MATLAB's fminunc() function. The parameters I pass to fminunc() are:
options = optimset('LargeScale', 'on', 'GradObj', 'on', 'MaxIter', MAX_ITR);
theta0 = zeros(n, 1);
[optTheta, functionVal, exitFlag] = fminunc(@(t) costFunction(X, y, lambda, t), theta0, options);
I am running this code on a machine with these specifications:
Macbook Pro i7 2.8GHz / 8GB RAM / MATLAB R2011b
The cost function seems to behave correctly. For initial theta, I get acceptable values of J and gradient.
K>> theta0 = zeros(n, 1);
K>> [j g] = costFunction(X, y, lambda, theta0);
K>> j
j =
0.6931
K>> max(g)
ans =
0.4082
K>> min(g)
ans =
-2.7021e-05
The program takes incredibly long to run. I started profiling keeping MAX_ITR = 1 for fminunc(). With a single iteration, the program did not complete execution even after a couple of hours had elapsed. My questions are:
Am I doing something wrong mathematically?
Should I use any other optimizer instead of fminunc()? With LargeScale=on, fminunc() uses trust-region algorithms.
Is this problem cluster-scale and should not be run on a single machine?
Any other general tips will be appreciated. Thanks!
This helped solve the problem: I was able to get this working by setting the LargeScale flag to 'off' in fminunc(). From what I gather, LargeScale = 'on' uses trust region algorithms, while keeping it 'off' uses quasi-newton methods. Using quasi-newton methods and passing the gradient worked a lot faster for this particular problem and gave very nice results.
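For reference, a minimal sketch of the settings described above (names taken from the question; 'Display','iter' is only there to watch progress):

options = optimset('LargeScale', 'off', 'GradObj', 'on', ...
                   'Display', 'iter', 'MaxIter', MAX_ITR);
theta0 = zeros(size(X, 2), 1);
[optTheta, functionVal, exitFlag] = ...
    fminunc(@(t) costFunction(X, y, lambda, t), theta0, options);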
Here is my advice:
First, set the MATLAB option that shows iteration output during the run ('Display','iter'), or just print the cost inside your cost function so you can monitor the iteration count and error.
And second, which is very important:
Your problem is ill-posed, or rather underdetermined. You have a 12k-dimensional feature space but provide only 10k examples, which means that for an unconstrained optimization the answer is -Inf. To give a quick example of why, your problem is like:
Minimize x+y+z given that x+y-z = 2. The feature space has dimension 3, but the spanned vector space is only 1-D. I suggest using PCA or CCA to reduce the dimensionality of the text files, retaining up to 99% of their variation. This will probably give you a feature space of roughly 100-200 dimensions; a sketch follows.
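A hedged sketch of that dimensionality-reduction step (assumes X fits in memory as a dense matrix and a release with pca, R2012b or later; princomp works similarly on older releases):

[coeff, score, latent] = pca(full(X));                 % principal components of the term matrix
keep = find(cumsum(latent) / sum(latent) >= 0.99, 1);  % keep ~99% of the variance
Xred = score(:, 1:keep);                               % reduced m x keep feature matrix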
PS: Just to point out that the problem is very far from cluster-scale, which usually means 1M+ data points, and that fminunc is not at all overkill. LIBSVM has nothing to do with it, because fminunc is a general-purpose optimizer while LIBSVM is a classifier. To be clear, LIBSVM uses something similar to fminunc internally, just with a different objective function.
Here's what I suspect to be the issue, based on my experience with this type of problem. You're using a dense representation for X instead of a sparse one. You're also seeing the typical effect in text classification that the number of terms increases roughly linearly with the number of samples. Effectively, the cost of the matrix multiplication X*theta goes up quadratically with the number of samples.
By contrast, a good sparse matrix representation only iterates over the non-zero elements to do a matrix multiplication, which tends to be roughly constant per document if documents are of roughly constant length, giving a linear instead of quadratic slowdown in the number of samples.
I'm not a Matlab guru, but I know it has a sparse matrix package, so try to use that.
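For illustration, a sketch of the sparse route (docIdx, termIdx, and counts are assumed to come from your own text preprocessing, not from the question):

% Build X as a sparse m x n term matrix; products like X*theta then only
% touch the nonzero entries.
X = sparse(docIdx, termIdx, counts, m, n);
h = 1 ./ (1 + exp(-(X * theta)));        % sigmoid of the sparse product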
Possible Duplicate:
MATLAB is running out of memory but it should not be
I want to perform PCA analysis on a huge data set of points. To be more specific, I have size(dataPoints) = [329150 132], where 329150 is the number of data points and 132 is the number of features.
I want to extract the eigenvectors and their corresponding eigenvalues so that I can perform PCA reconstruction.
However, when I am using the princomp function (i.e. [eigenVectors projectedData eigenValues] = princomp(dataPoints);), I obtain the following error:
>> [eigenVectors projectedData eigenValues] = princomp(pointsData);
Error using svd
Out of memory. Type HELP MEMORY for your options.
Error in princomp (line 86)
[U,sigma,coeff] = svd(x0,econFlag); % put in 1/sqrt(n-1) later
However, if I am using a smaller data set, I have no problem.
How can I perform PCA on my whole dataset in MATLAB? Has anyone encountered this problem?
Edit:
I have modified the princomp function and tried to use svds instead of svd, but I am getting pretty much the same error. I have included the error below:
Error using horzcat
Out of memory. Type HELP MEMORY for your options.
Error in svds (line 65)
B = [sparse(m,m) A; A' sparse(n,n)];
Error in princomp (line 86)
[U,sigma,coeff] = svds(x0,econFlag); % put in 1/sqrt(n-1) later
Solution based on Eigen Decomposition
You can first compute PCA on X'X, as @david said. Specifically, see the script below:
sz = [329150 132];
X = rand(sz);
[V D] = eig(X.' * X);
Actually, V holds the right singular vectors, and these are the principal vectors if you put your data vectors in rows. The eigenvalues in D are the variances along each direction. The singular values, which are the standard deviations, are computed as the square root of the variances:
S = sqrt(D);
Then, the left singular vectors, U, are computed using the formula X = USV'. Note that U refers to the principal components if your data vectors are in columns.
U = X*V*S^(-1);
Let us reconstruct the original data matrix and see the L2 reconstruction error:
X2 = U*S*V';
L2ReconstructionError = norm(X(:)-X2(:))
It is almost zero:
L2ReconstructionError =
6.5143e-012
If your data vectors are in columns and you want to convert your data into eigenspace coefficients, you should do U.'*X.
This code snippet takes around 3 seconds on my moderate 64-bit desktop.
Solution based on Randomized PCA
Alternatively, you can use a faster approximate method which is based on randomized PCA. Please see my answer in Cross Validated. You can directly compute fsvd and get U and V instead of using eig.
You may employ randomized PCA if the data size is too big. But, I think the previous way is sufficient for the size you gave.
My guess is that you have a huge data set. You don't need all of the svd coefficients. In this case, use svds instead of svd:
Taken directly from Matlab help:
s = svds(A,k) computes the k largest singular values and associated singular vectors of matrix A.
From your question, I understand that you don't call svd directly. But you might as well take a look at princomp (It is editable!) and alter the line that calls it.
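A hedged sketch of that route, computing only the top k components directly on the centred data (the value of k is an assumption):

k  = 20;                                               % number of components to keep
X0 = bsxfun(@minus, dataPoints, mean(dataPoints, 1));  % centre the data
[U, S, V] = svds(X0, k);                               % k largest singular triplets
score  = U * S;                                        % projected data (n x k)
latent = diag(S).^2 / (size(X0, 1) - 1);               % variance along each PC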
Your computation probably requires building an n-by-n matrix somewhere, that is to say:
329150 * 329150 * 8 bytes ~ 866 GB
of space, which explains why you're getting a memory error. There seems to be an efficient way to calculate PCA using princomp(X, 'econ'), which I suggest you try.
More on this on Stack Overflow and MathWorks.
Manually compute X'X (132x132) and run svd on it. Or find a NIPALS script.
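A minimal sketch of this suggestion, working only with the small 132x132 matrix (the variable name is taken from the question):

X0 = bsxfun(@minus, dataPoints, mean(dataPoints, 1));  % centre the data
C  = (X0' * X0) / (size(X0, 1) - 1);                   % 132x132 covariance
[V, D] = eig(C);
[latent, order] = sort(diag(D), 'descend');            % variances, largest first
coeff = V(:, order);                                   % principal component directions
score = X0 * coeff;                                    % projected data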
I'm trying to apply PCA to my data, which has been standardized, using princomp(x).
The data is <16 x 1036800 double>. This runs out of memory, which is to be expected, except that this is a new computer: it has 24GB of RAM for data mining, and MATLAB even lists the 24GB as available on a memory check.
Is MATLAB actually running out of memory while performing the PCA, or is MATLAB not using the RAM to its full potential? Any information or ideas would be helpful. (I may need to increase the virtual memory, but I assumed the 24GB would have sufficed.)
For a data matrix of size n-by-p, PRINCOMP will return a coefficient matrix of size p-by-p where each column is a principal component expressed using the original dimensions, so in your case you will create an output matrix of size:
1036800*1036800*8 bytes ~ 7.8 TB
Consider using PRINCOMP(X,'econ') to return only the PCs with significant variance
Alternatively, consider performing PCA by SVD: in your case n << p, and the covariance matrix is impossible to compute. Therefore, instead of decomposing the p-by-p matrix X'X, it is sufficient to decompose the smaller n-by-n matrix XX'. Refer to this paper for reference.
EDIT:
Here's my implementation, the outputs of this function match those of PRINCOMP (the first three anyway):
function [PC,Y,varPC] = pca_by_svd(X)
% PCA_BY_SVD
% X data matrix of size n-by-p where n<<p
% PC columns are first n principal components
% Y data projected on those PCs
% varPC variance along the PCs
%
X0 = bsxfun(@minus, X, mean(X,1)); % shift data to zero-mean
[U,S,PC] = svd(X0,'econ'); % SVD decomposition
Y = X0*PC; % project X on PC
varPC = diag(S'*S)' / (size(X,1)-1); % variance explained
end
I just tried it on my 4GB machine, and it ran just fine:
» x = rand(16,1036800);
» [PC, Y, varPC] = pca_by_svd(x);
» whos
Name Size Bytes Class Attributes
PC 1036800x16 132710400 double
Y 16x16 2048 double
varPC 1x16 128 double
x 16x1036800 132710400 double
Update:
The princomp function was deprecated in favor of the pca function, introduced in R2012b, which includes many more options.
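For newer releases, a roughly equivalent call would be the following (hedged: the first three outputs of pca correspond to those of pca_by_svd/princomp above):

[PC, Y, varPC] = pca(x, 'Economy', true);   % economy-size PCA on the 16x1036800 data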
MATLAB has hard-coded limitations on matrix sizes. See this link. If you think you're not exceeding those limits, then you probably have a bug in your code and actually are.
MathWorks engineer Stuart McGarrity recorded a nice webinar surveying diagnosis techniques and common solutions. If your data is indeed within the allowed limits, the issue might be memory fragmentation, which is easily solvable.