Multi-label clustering - cluster-analysis

I have a question regarding a task that I am trying to solve. My data are characterisation data,
meaning that I have a label (PASS/FAIL) for every single datapoint.
So my data matrix has n rows and m columns, and the target variables form another n-by-m matrix
of binary values (0s and 1s).
My task is to apply clustering and partition all these datapoints into two clusters, one for PASS
datapoints and the other for FAIL datapoints. I wasn't able to find an algorithm that can solve
this type of 'multi-label' problem with clustering.
I tried to implement algorithms like k-means, but while tuning the number of clusters to initialise
I get k=6, which doesn't really make sense. Outliers have already been dropped from the data, and
the data are normalised as well.
I have a large number of features in my data matrix (e.g. >3000), and I tried to apply
dimensionality reduction methods like PCA to at least drop the least relevant features.
But I am not sure whether this is applicable in my case, given that the target variables
form a binary matrix.
Is there a specific algorithm that can solve this type of problem, and if so, what
pre-processing should I be doing before applying it?
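For reference, here is a minimal sketch of the baseline described above (PCA followed by k-means with k=2, then comparing the clusters against each PASS/FAIL label column), assuming a scikit-learn workflow; the arrays X and Y are hypothetical placeholders for the real data:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# X: (n, m) normalised feature matrix; Y: (n, m) binary PASS/FAIL labels (hypothetical)
rng = np.random.default_rng(0)
X = rng.random((500, 3000))
Y = (rng.random((500, 3000)) > 0.5).astype(int)

X_red = PCA(n_components=50).fit_transform(X)     # reduce the >3000 features first
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_red)

# The targets are multi-label, so compare the two clusters against each label column separately
scores = [adjusted_rand_score(Y[:, j], clusters) for j in range(Y.shape[1])]
print(np.mean(scores))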

Related

How to compute whether representational similarity matrix values are significant

I am new to RSA analysis of fMRI images. I used SPM 12 for preprocessing and first-level analysis of my fMRI images, and used the RSA-toolbox to compute RDMs (representational dissimilarity matrices) for my conditions in a specific region of the brain. Now I have the RDM matrix for every single subject, as well as the overall RDM across all subjects. However, the RSA-toolbox doesn't report any p-value or significance test for the values in the RDM. How can I compute or determine which values in the RDM matrix are significant and which are not? I used Pearson's r to compute the RDMs. In particular, I would like an explanation of the mathematics that can be used to test the significance of these values.
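One common approach (among several) is to test each RDM cell across subjects: under the null hypothesis of unrelated patterns the Pearson correlation is zero, so one can Fisher-z-transform the per-subject correlations and run a one-sample test per cell, correcting for multiple comparisons. A minimal, hedged sketch with hypothetical data:

import numpy as np
from scipy import stats

# rdms: hypothetical array of shape (n_subjects, n_cond, n_cond) holding 1 - Pearson r
n_subjects, n_cond = 20, 6
rdms = np.random.rand(n_subjects, n_cond, n_cond)   # placeholder data

iu = np.triu_indices(n_cond, k=1)            # only the upper triangle is informative
r = 1.0 - rdms[:, iu[0], iu[1]]              # back to correlations, shape (n_subjects, n_pairs)
z = np.arctanh(np.clip(r, -0.999, 0.999))    # Fisher z-transform stabilises the variance

# One-sample t-test per cell: is the correlation reliably different from 0 across subjects?
t, p = stats.ttest_1samp(z, popmean=0.0, axis=0)

# Correct for multiple comparisons across cells (Bonferroni here; FDR is also common)
p_corrected = np.minimum(p * p.size, 1.0)
print(p_corrected)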

Gaussian Mixture Model (GMM) giving only one cluster

I have a dataset that has 70 columns and 4.4 million rows. I want to perform clustering on it. I did TF-IDF first, then I used clustering with k-means, bisecting k-means and a Gaussian Mixture Model (GMM). While the other techniques give me the specified number of clusters, GMM gives only one cluster. For example, in the code below, I want 20 clusters but it returns only 1 cluster. Is this happening because I have many columns, or is it merely caused by the nature of the data?
from pyspark.ml.clustering import GaussianMixture

gmm = GaussianMixture(k=20, tol=0.000001, maxIter=10000, seed=1)
model = gmm.fit(rescaledData)                                    # rescaledData holds the TF-IDF features
df1 = model.transform(rescaledData).select(['label', 'prediction'])
df1.groupBy('prediction').count().show()                         # this returns 1 row, i.e. a single cluster
In my opinion, the main reason behind the poor clustering performance of the PySpark GMM is that its implementation uses a diagonal covariance matrix, which does not take into account the covariance between the different features in the dataset.
Check its implementation here: https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/clustering/GaussianMixture.scala
where they clearly mention using a diagonal covariance matrix because of the curse of dimensionality.
@note This algorithm is limited in its number of features since it requires storing a covariance matrix which has size quadratic in the number of features. Even when the number of features does not exceed this limit, this algorithm may perform poorly on high-dimensional data. This is due to high-dimensional data (a) making it difficult to cluster at all (based on statistical/theoretical arguments) and (b) numerical issues with Gaussian distributions.
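To check whether the covariance structure is really the culprit, one could try a GMM that supports a full covariance matrix on a dimensionality-reduced sample of the data. A hedged sketch using scikit-learn rather than PySpark; the TF-IDF matrix here is a random placeholder:

import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.mixture import GaussianMixture

# Placeholder for the real (n_rows, n_columns) TF-IDF matrix
tfidf = sparse_random(10000, 70, density=0.3, format='csr', random_state=1)

# Reduce dimensionality first, then fit a GMM with a full covariance matrix
reduced = TruncatedSVD(n_components=20, random_state=1).fit_transform(tfidf)
gmm = GaussianMixture(n_components=20, covariance_type='full', random_state=1)
labels = gmm.fit_predict(reduced)
print(np.unique(labels).size)   # number of clusters that actually received points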

Is pointwise multiple linear regression possible in Matlab

I am attempting to run a pointwise multiple linear regression in Matlab, i.e., to obtain a regression coefficient for each point in my dataset.
I have three independent variables and one dependent variable. Each variable is a column vector with ~1.6 million records. Each data point represents a geographic location; my point in doing all this is to try and see the effects of the predictor variables on the response variable on a pixel-per-pixel basis.
I have already successfully run fitlm, regress, and mldivide; these functions get me the three regression coefficients for my data. However, I want to run a multiple regression through all my points independently, so that ultimately I will get three columns of regression coefficients of 1.6 million records each.
My data contains some NaN. These rows cannot be ignored; the final column vector must be the same size as the original vectors since the data point's location is related to real-world coordinates.
I've looked into the code for bsxfun but don't believe it can help me. I also tried using dot notation but that didn't work. My thinking now is to create a for loop and use mldivide one row at a time. However, when I tried using 'regress' on scalars (mocking one row of data), I got the error "X is rank deficient to within machine precision." I didn't get this error when I used mldivide.
Is doing a pointwise multiple linear regression even possible? It seems to me that my sample size is way too small. Any feedback on the feasibility of this, and whether a for loop is a good direction to pursue, would be greatly appreciated.
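For what it's worth, a small numpy sketch (with hypothetical data) of why a single row cannot determine three coefficients, which is where the "rank deficient" warning comes from, and what a per-point fit needs instead:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 3))   # five hypothetical observations of the three predictors
y = rng.random(5)        # the matching responses

# With a single row there is one equation and three unknowns, so the system is
# underdetermined; this is exactly the rank deficiency that regress warns about.
b_single, *_ = np.linalg.lstsq(X[:1], y[:1], rcond=None)   # minimum-norm solution, not unique

# A meaningful per-point fit needs several observations per point,
# e.g. a time series or a small spatial window around each pixel:
b_window, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b_single)
print(b_window)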

Principal Component Analysis in practice

I understand the concept of PCA, and what it's doing, but trying to apply the concept to my application is proving difficult.
I have a 1 by X matrix of a physiological signal (it's not EMG, but very similar, so think of it as EMG if it helps) which contains various noise and artefacts. What I've noticed about the noise is that some of it is very large, and I would assume that after PCA it would show up as the largest principal component; hence my idea of using PCA for some dimensionality reduction.
My problem is that with a 1 by X matrix there is no covariance matrix, only the variance, so the eigenvectors, and with them all of PCA, fall through.
I know I need to rearrange my data into a matrix of more than one dimension, but this is where I need some suggestions. Do I split my data into windows of equal length to create a higher-dimensional matrix to which I can apply PCA? Do I perform several trials of the same action so that I have lots of data sets (this would be impractical for my application)?
Any suggestions or examples would be helpful. I'm using MATLAB to perform this task.
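If it helps, here is a minimal sketch of the windowing idea, written in Python/numpy rather than MATLAB; the signal and the window length are hypothetical placeholders:

import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(10000)    # placeholder for the 1-by-X physiological signal
win = 100                              # hypothetical window length

# Reshape the 1-D signal into (n_windows, win) so that each window becomes one observation
n_win = len(signal) // win
segments = signal[:n_win * win].reshape(n_win, win)

# PCA via the covariance matrix of the windowed data
segments = segments - segments.mean(axis=0)
cov = np.cov(segments, rowvar=False)           # (win, win) covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]                 # principal components as columns

# Project onto a few leading components (or drop the largest one if it is the noise)
scores = segments @ components[:, :5]
print(scores.shape)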

Clustering: a training dataset of variable data dimensions

I have a dataset of n data points, where each data point is represented by a set of extracted features. Generally, clustering algorithms require that all input data have the same dimensionality (the same number of features); that is, the input X is an n*d matrix of n data points, each of which has d features.
In my case, I've previously extracted some features from my data, but the number of extracted features differs from one data point to another (I mean, I have a dataset X whose data points do not all have the same number of features).
Is there any way to adapt them so that they can be clustered using common clustering algorithms that require data of the same dimensionality?
Thanks
Sounds like the problem you have is that it's a 'sparse' data set. There are generally two options.
Reduce the dimensionality of the input data set using multi-dimensional scaling techniques, for example sparse SVD (e.g. the Lanczos algorithm) or sparse PCA. Then apply traditional clustering to the dense, lower-dimensional outputs (see the sketch below).
Directly apply a sparse clustering algorithm, such as sparse k-means. Note that you can probably find a PDF of this paper if you look hard enough online (try scholar.google.com).
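A hedged sketch of the first option, assuming a scikit-learn workflow and a hypothetical sparse feature matrix with explicit zeros for missing features:

from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD    # sparse-friendly SVD
from sklearn.cluster import KMeans

# Hypothetical sparse n-by-d matrix: missing features are stored as zeros
X_sparse = sparse_random(1000, 5000, density=0.01, format='csr', random_state=0)

# Reduce to a dense, lower-dimensional representation, then cluster that
X_dense = TruncatedSVD(n_components=50, random_state=0).fit_transform(X_sparse)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X_dense)
print(labels[:20])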
[Updated after problem clarification]
In the problem, a handwritten word is analyzed visually for connected components (lines). For each component, a fixed number of multi-dimensional features is extracted. We need to cluster the words, each of which may have one or more connected components.
Suggested solution:
Classify the connected components first, into 1000(*) unique component classifications. Then classify the words against the classified components they contain (the sparse problem described above).
(*) Note: the exact number of component classifications you choose doesn't really matter, as long as it's high enough, since the MDS analysis will reduce them to the essential 'orthogonal' classifications.
There are also clustering algorithms, such as DBSCAN, that in fact do not care about the shape of your data. All this algorithm needs is a distance function. So if you can specify a distance function for your features, then you can use DBSCAN (or OPTICS, which is an extension of DBSCAN that doesn't need the epsilon parameter).
So the key question here is how you want to compare your features. This doesn't have much to do with clustering, and is highly domain-dependent. If your features are e.g. word occurrences, cosine distance is a good choice (using 0s for non-present features). But if you e.g. have a set of SIFT keypoints extracted from a picture, there is no obvious way to relate the different features to each other efficiently, as there is no order to the features (so one cannot simply compare the first keypoint with the first keypoint, etc.). A possible approach here is to derive another, uniform, set of features. Typically, bag-of-words features are used in such a situation. For images, these are also known as visual words. Essentially, you first cluster the sub-features to obtain a limited vocabulary. Then you can assign each of the original objects a "text" composed of these "words" and use a distance function such as cosine distance on them (see the sketch below).
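To make the distance-function point concrete, a minimal sketch using DBSCAN with a precomputed distance matrix; the bag-of-words counts are a random placeholder, and in practice the distance would come from whatever comparison makes sense for your domain:

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_distances

# Hypothetical bag-of-words counts: rows are objects, columns are vocabulary "words"
rng = np.random.default_rng(0)
counts = rng.integers(0, 5, size=(200, 300)).astype(float)

D = cosine_distances(counts)                     # any domain-specific distance function works here
labels = DBSCAN(eps=0.4, min_samples=5, metric='precomputed').fit_predict(D)
print(np.unique(labels))                         # -1 marks noise points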
I see two options here:
Restrict yourself to those features for which all your data points have a value.
See if you can generate sensible default values for missing features (a minimal sketch follows below).
However, if possible, you should probably resample all your data-points, so that they all have values for all features.
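A small sketch of the second option (default values for missing features), assuming the variable-length feature sets have already been aligned into one matrix with NaN wherever a feature was not extracted; names and values are hypothetical:

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.cluster import KMeans

# Hypothetical aligned matrix: rows are data points, NaN where a feature is missing
X = np.array([[1.0, 2.0, np.nan],
              [0.5, np.nan, 3.0],
              [1.2, 2.1, 2.9],
              [np.nan, 1.9, 3.1]])

X_filled = SimpleImputer(strategy='mean').fit_transform(X)   # per-feature mean as the default value
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_filled)
print(labels)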