Is it possible to project multidimensional data onto a 2D map using LDA? It seems that the tools MATLAB provides do not include such a function...
Thanks for the reply. My data has 6 classes, so does that mean I can only reduce it to 5 dimensions? Or can it be done in a similar way to PCA, taking the eigenvectors with the top 2 eigenvalues and using those 2 for the projection? PCA does not quite work for my problem because it is unsupervised, so I am wondering if LDA might help.
Strictly speaking, LDA isn't really meant for dimensionality reduction, especially in the case where all your data belongs to one class. It's meant to come up with a single linear projection that is the most discriminative between two classes. Thus, there's no natural way to do this using LDA.
If your data all belongs to the same class, then you might be more interested in PCA (Principal Component Analysis), which gives you the most important directions for the data, ranked in order of importance. Other methods exist as well, such as ISOMAP (as mentioned by EMS in the comments) or self-organizing maps.
As a side note, LDA can help you reduce dimensionality if you know that you have multi-class data: with k classes it can project down to at most k-1 dimensions, but you didn't mention that this is the case.
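For the multi-class case, here is a minimal sketch of how you might compute such a projection by hand in MATLAB. It assumes X is an n-by-d data matrix and y a vector of class labels (both names are placeholders), and it keeps the top 2 discriminant directions so the result can be plotted in 2D:

```matlab
% Multi-class LDA projection to 2D -- a rough sketch, not production code.
% Assumes: X is n-by-d (rows = samples), y is n-by-1 class labels.
classes = unique(y);
d  = size(X, 2);
mu = mean(X, 1);
Sw = zeros(d);  Sb = zeros(d);
for i = 1:numel(classes)
    Xi  = X(y == classes(i), :);
    mui = mean(Xi, 1);
    Xc  = Xi - repmat(mui, size(Xi, 1), 1);
    Sw  = Sw + Xc' * Xc;                                % within-class scatter
    Sb  = Sb + size(Xi, 1) * (mui - mu)' * (mui - mu);  % between-class scatter
end
[V, D] = eig(Sb, Sw);                  % generalized eigenproblem Sb*v = lambda*Sw*v
[~, order] = sort(diag(D), 'descend');
W = V(:, order(1:2));                  % top 2 discriminant directions (at most k-1 are useful)
Z = X * W;                             % 2D coordinates
scatter(Z(:, 1), Z(:, 2), 20, grp2idx(y), 'filled');    % quick visual check
```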
EDIT: Credit goes to @EMS for helping to clarify this answer.
Related
I have run LDA with MATLAB using the fitcdiscr function and predict.
I have a feeling there may be some bugs in my code, however, and as a sanity check I would like to identify which features are being most heavily weighted in the classification.
Can this be done?
There is a Coeffs field in your fitted object containing all the relevant information: http://uk.mathworks.com/help/stats/classificationdiscriminant-class.html
In particular, if you fit a linear LDA there will be a Linear field, which is the linear operator used for projection. However, bear in mind that the coefficient values of linear models are not feature importances; there is much more to consider. A weight can be large because the feature has small values, or because the values have a highly skewed distribution. If you need a feature selection technique, use feature selection methods (like L1-regularized models); otherwise you might easily draw wrong conclusions from your data.
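As a quick illustration (a sketch only; X and y here are placeholder names for your predictor matrix and class labels), you can pull those coefficients out like this:

```matlab
% Fit a linear discriminant and inspect the boundary coefficients.
Mdl = fitcdiscr(X, y, 'DiscrimType', 'linear');
C = Mdl.Coeffs(1, 2);      % boundary between class 1 and class 2
disp(C.Linear)             % linear part of the decision boundary
disp(C.Const)              % constant offset
% Note: a large entry in C.Linear does not by itself mean that feature is
% important -- rescaling a feature rescales its coefficient.
```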
I want to reduce a larger matrix, e.g. 2000*768, to some lower dimensions, e.g. 200*768 or 400*400 (not fixed), using principal component analysis (PCA) in MATLAB. I want to do this for feature dimension reduction. How can I do it easily? And please suggest some tutorials to understand PCA better.
Thanks in advance.
PCA is a really useful tool for dimensionality reduction, but it should be used when you understand exactly what it is doing and what you are getting out of it. For a good intro, click here - it is a decent explanation that is not too hard to follow. There is also this article, a quick DIY walkthrough that may help you better understand what is going on.
Once you know what you are getting, PCA is easy in MATLAB: just call pca(X) to perform it on a data set X.
What you get out is very much dependent on what you put in (e.g. things like normalisation are very important for the input data), and there are extra parameters worth knowing about to set up your principal component analysis. See MATLAB's guide here.
What you are looking for in dimensionality reduction is to best represent the data with as few components as possible. Using the explained output of [coeff,score,latent,tsquared,explained] = pca(X), you get a vector telling you how much of the variance is explained by each principal component, which gives you a good indication of whether dimensionality reduction can be done.
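For instance, a minimal sketch (assuming X is your 2000-by-768 matrix with rows as observations) that keeps enough components to explain 95% of the variance might look like this:

```matlab
[coeff, score, latent, tsquared, explained] = pca(X);  % pca centres X by default
k = find(cumsum(explained) >= 95, 1);   % smallest k explaining >= 95% of the variance
Xreduced = score(:, 1:k);               % 2000-by-k reduced representation
% Approximate reconstruction back in the original 768-dimensional space:
Xapprox = score(:, 1:k) * coeff(:, 1:k)' + repmat(mean(X, 1), size(X, 1), 1);
```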
I would like to perform classification on a small data set 65x9 using some of the Machine Learning Classification Methods (SVM, Decision Trees or any other).
So, before starting with the classification I would like to do attribute analysis with PCA in MATLAB or Weka (MATLAB preferred). I would like to find out which attributes contribute most to the performance of the classifier, so I can maybe drop some attributes and/or include more in the future. Is there any example of PCA for this in MATLAB or Weka?
Thanks
PCA is an unsupervised feature extraction method.
If your question is about selecting attributes to feed into PCA, I don't know what your purpose is, but it is unnecessary to do that to improve classification performance. Just use all the attributes; PCA will give you transformed attributes (components) ranked in decreasing order of importance for every instance.
If your question is about selecting attributes after PCA, you can choose a threshold (for example 0.95) and count how many attributes are enough to reach that threshold, starting from the first attribute and moving towards the last. You can use the eigenvalues of the covariance matrix to compute the cumulative proportion and reach the threshold in PCA.
After running PCA, we know that the first component is the best one, the second is the next best, and so on.
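As a rough sketch in MATLAB (X stands for your 65-by-9 attribute matrix; the 0.95 threshold is just the example above):

```matlab
[coeff, score, latent] = pca(X);        % latent = eigenvalues of cov(X), in descending order
ratio = cumsum(latent) / sum(latent);   % cumulative proportion of variance
nKeep = find(ratio >= 0.95, 1);         % number of components needed to reach the threshold
Xnew  = score(:, 1:nKeep);              % transformed attributes to feed the classifier
```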
I am trying to differentiate two populations. Each population is an NxM matrix in which N is fixed between the two and M is variable in length (N = column-specific attributes of each run, M = run number). I have looked at PCA and K-means for differentiating the two, but I am curious about the best practice.
To my knowledge, in K-means, there is no initial 'calibration' in which the clusters are chosen such that known bimodal populations can be differentiated. It simply minimizes the distance and assigns the data to an arbitrary number of populations. I would like to tell the clustering algorithm that I want the best fit in which the two populations are separated. I can then use the fit I get from the initial clustering on future datasets. Any help, example code, or reading material would be appreciated.
-R
K-means and PCA are typically used in unsupervised learning problems, i.e. problems where you have a single batch of data and want to find some easier way to describe it. In principle, you could run K-means (with K=2) on your data, and then evaluate the degree to which your two classes of data match up with the data clusters found by this algorithm (note: you may want multiple starts).
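A minimal sketch of that (dataA and dataB are placeholder names for your two populations; rows are taken to be observations, so transpose first if your runs are columns):

```matlab
% Stack both populations: rows = runs/observations, columns = the N attributes.
data = [dataA; dataB];
rng(1);                                              % reproducible random starts
[idx, centres] = kmeans(data, 2, 'Replicates', 20);  % 20 random restarts, best run kept
% Compare idx against the known population labels to see how well the
% unsupervised clusters line up with your two populations.
```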
It sounds like you have a supervised learning problem: you have a training data set which has already been partitioned into two classes. In this case k-nearest neighbors (as mentioned by @amas) is probably the approach most like k-means; however, Support Vector Machines can also be an attractive approach.
I frequently refer to The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics) by Trevor Hastie, Robert Tibshirani, and Jerome Friedman.
It really depends on the data. But just so you know, K-means can get stuck in local minima, so if you want to use it, try running it from different random starting points. PCA might also be useful; however, as with other spectral methods, you have much less control over the clustering procedure. I recommend that you cluster the data using k-means with multiple random starting points and see how it works; then you can predict the new samples with K-NN (I don't know if it is useful for your case).
Check Lazy learners and K-NN for prediction.
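If you go that route, here is a hedged sketch of the prediction step with K-NN using MATLAB's fitcknn (trainData, trainLabels, and newData are placeholder names for your labelled training runs and the new runs to classify):

```matlab
% Learn from the labelled (or k-means-labelled) training runs ...
knnMdl = fitcknn(trainData, trainLabels, 'NumNeighbors', 5);
% ... then assign each new run to one of the two populations.
predictedLabels = predict(knnMdl, newData);
```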
I am studying Support Vector Machines (SVM) by reading a lot of material. However, it seems that most of it focuses on how to classify the input 2D data by mapping it using several kernels such as linear, polynomial, RBF / Gaussian, etc.
My first question is, can SVM handle high-dimensional (n-D) input data?
According to what I found, the answer is YES!
If my understanding is correct, n-D input data will be:
1. constructed in a Hilbert (feature) hyperspace, then
2. simplified by some approach (such as PCA?) to combine it / project it back onto a 2D plane, so that
3. the kernel methods can map it into an appropriate shape such that a line or curve can separate it into distinct groups.
It seems most of the guides / tutorials focus on step (3). But some toolboxes I've checked cannot plot if the input data has more than 2 dimensions. How can such data be projected to 2D afterwards?
If there is no projection of data, how can they classify it?
My second question is: is my understanding correct?
My first question is, can SVM handle high-dimensional (n-D) input data?
Yes. I have dealt with data where n > 2500 when using LIBSVM software: http://www.csie.ntu.edu.tw/~cjlin/libsvm/. I used linear and RBF kernels.
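If you prefer to stay inside MATLAB rather than LIBSVM, here is a minimal sketch with fitcsvm (X is an n-by-d matrix where d can be in the thousands, y the class labels; both are placeholder names):

```matlab
% Linear kernel -- often a good first choice when the dimension d is large.
linMdl = fitcsvm(X, y, 'KernelFunction', 'linear', 'Standardize', true);
% RBF (Gaussian) kernel with an automatically chosen kernel scale.
rbfMdl = fitcsvm(X, y, 'KernelFunction', 'rbf', 'KernelScale', 'auto', 'Standardize', true);
% No 2D projection is needed anywhere: the kernel works directly on the n-D data.
yhat = predict(rbfMdl, X);
```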
My second question is, is my understanding correct?
I'm not entirely sure what you mean here, so I'll try to comment on what you said most recently. I believe your intuition is generally correct. The data is "constructed" in some n-dimensional space, and a hyperplane of dimension n-1 is used to classify the data into two groups. However, by using kernel methods, it's possible to generate this information using linear methods without consuming all the memory of your computer.
I'm not sure if you've seen this already, but if you haven't, you may be interested in some of the information in this paper: http://pyml.sourceforge.net/doc/howto.pdf. I've copied and pasted a part of the text that may appeal to your thoughts:
A kernel method is an algorithm that depends on the data only through dot-products. When this is the case, the dot product can be replaced by a kernel function which computes a dot product in some possibly high dimensional feature space. This has two advantages: First, the ability to generate non-linear decision boundaries using methods designed for linear classifiers. Second, the use of kernel functions allows the user to apply a classifier to data that have no obvious fixed-dimensional vector space representation. The prime example of such data in bioinformatics are sequence, either DNA or protein, and protein structure.
It would also help if you could explain which "guides" you are referring to. I don't think I've ever had to project data onto a 2-D plane before, and it doesn't make sense to do so anyway for data with a ridiculous number of dimensions (or "features", as they are called in LIBSVM). Using selected kernel methods should be enough to classify such data.