UMAP validation: problem calculating trustworthiness_vector - cluster-analysis

I have a dataset with over 200,000 samples and 256 features. I used UMAP with n_components = 8, 16, 32, and 64 to reduce the data dimension from 256 to 8, 16, 32, and 64, respectively. I do not have labels, so I want to validate the embeddings with UMAP's validation utilities. However, I get the error "0xC00000FD" (the Windows stack-overflow status code) when I run umap.validation.trustworthiness_vector(source=df_raw_data.to_numpy(), embedding=df_embedding.to_numpy(), max_k=K) with K = 30, and a segmentation fault on WSL. How can I handle this situation?
I have tried reducing n_components to 2, but the problem still occurs.
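One workaround worth trying (a sketch, not a confirmed fix): trustworthiness_vector compares neighborhoods between the full source data and the embedding, which is very expensive at 200,000 rows and is a plausible cause of the stack overflow / segfault. Scoring a random subsample often avoids the crash. This assumes the umap.validation module from umap-learn with the signature used in the question; the arrays below are synthetic stand-ins for df_raw_data / df_embedding:

import numpy as np
from umap import validation  # umap-learn's validation module

# Synthetic stand-ins for df_raw_data / df_embedding from the question.
rng = np.random.default_rng(0)
source = rng.normal(size=(200_000, 256)).astype(np.float32)
embedding = rng.normal(size=(200_000, 8)).astype(np.float32)

# Score a random subsample instead of all 200,000 rows; the full-size
# run is the likely cause of the crash.
idx = rng.choice(len(source), size=5_000, replace=False)

K = 30
tv = validation.trustworthiness_vector(
    source=source[idx], embedding=embedding[idx], max_k=K
)
print(tv)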

Related

Fatorial Fit in Matlab [closed]

I have been using the Curve Fitting app but can't find a way to make a fatorial fit in it. Any ideas on how to perform a fatorial fit on my data?
x = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30];
y = [20, 30, 42, 56, 512, 729, 1000, 1331, 1728, 2197, 2744, 3375, 28800, 36992, 46818, 58482, 72200, 88200, 106722, 128018, 1068672, 1322500, 1622400, 1974375, 2384928, 2861082, 3410400];
I will not address the "fatorial fit" as such, since you probably wrote it incorrectly (presumably "factorial" was meant).
I suggest starting with graphical inspection.
The plot of y(x) isn't encouraging, but the plot of ln(y) as a function of x shows some regularly spaced steps.
This suggests introducing a periodic step function into the equation model, for example the floor function.
We plot z(x) = ln(y) - a*floor(x/8).
With a = 1.89 the curve becomes roughly continuous and smooth, but not linear.
To make it roughly linear, one has to plot it on log-log scales.
A linear regression then leads to the parameters of the related power function, roughly z = c*x**p.
NOTE: the fitting criterion was LMSRE (with LMSE the numerical result would be different: slightly better for high values of y and much worse for small y).
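For illustration (not from the original answer), the same procedure can be sketched in Python/NumPy. The value a = 1.89 is taken from the text above, and plain least squares on the logarithms stands in for the LMSRE criterion:

import numpy as np

# x, y copied from the question.
x = np.array([4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
              20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30], dtype=float)
y = np.array([20, 30, 42, 56, 512, 729, 1000, 1331, 1728, 2197, 2744,
              3375, 28800, 36992, 46818, 58482, 72200, 88200, 106722,
              128018, 1068672, 1322500, 1622400, 1974375, 2384928,
              2861082, 3410400], dtype=float)

# Remove the periodic jumps: z(x) = ln(y) - a*floor(x/8), with a = 1.89.
a = 1.89
z = np.log(y) - a * np.floor(x / 8)

# Fit z ~ c*x**p by linear regression in log-log coordinates:
# ln(z) = ln(c) + p*ln(x).
p, ln_c = np.polyfit(np.log(x), np.log(z), 1)
print(f"z ~ {np.exp(ln_c):.4g} * x**{p:.4g}")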

DBSCAN on 3d coordinates doesn't find clusters

I'm trying to cluster the points in a DataFrame of 1428 3D coordinates.
The clusters are relatively flat, elongated point clouds. They are very obvious clusters, so I was hoping unsupervised clustering (without putting in the expected number of clusters) would find them. KMeans does not separate them properly, and it does require the number of clusters:
[Image: KMeans plot results]
The data looks as follows:
5 6 7
0 9207.495280 18922.083277 4932.864
1 5831.199280 3441.735280 5756.326
2 8985.735280 12511.719280 7099.844
3 8858.223280 28883.151280 5689.652
4 6801.399277 6468.759280 7142.524
... ... ... ...
1423 10332.927277 22041.855280 5136.252
1424 6874.971277 12937.563277 5467.216
1425 8952.471280 28849.887280 5710.522
1426 7900.611277 19128.255280 4803.122
1427 10234.635277 18734.631280 5631.286
[1428 rows x 3 columns]
I was hoping DBSCAN would deal better with this data. However, when I try the following (I played around with eps and min_samples but without success):
from sklearn.cluster import DBSCAN
dbscan = DBSCAN(eps=10, min_samples = 50)
clusters = dbscan.fit_predict(X)
print('Clusters found', dbscan.labels_)
len(clusters)
I get this output:
Clusters found [-1 -1 -1 ... -1 -1 -1]
1428
I am confused about how to get this to work, especially since KMeans did work:
import sklearn.cluster as sk_cluster

kmeans = sk_cluster.KMeans(init='k-means++', n_clusters=9, n_init=50)
kmeans.fit_predict(X)
centroids = kmeans.cluster_centers_
kmeans_labels = kmeans.labels_
error = kmeans.inertia_
print("The total error of the clustering is: ", error)
print('\nCluster labels')
print(kmeans_labels)
The total error of the clustering is: 4994508618.792263
Cluster labels
[8 0 7 ... 3 8 1]
Remember this golden rule:
Always normalize your data before feeding it to an ML/DL algorithm.
The reason is that your columns have very different ranges; here one column spans roughly [10000, 20000] while another spans [4000, 5000]. With raw coordinates like these, distance-based methods are dominated by the wide columns, so clustering/classification will not work well (regression may). Scaling brings every column to the same range while preserving the relative distances within each column, much like zooming in and out on Google Maps changes the scale without changing the geometry.
You are free to choose the normalization algorithm; sklearn.preprocessing offers many scalers.
Edit:
Use this code:
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import DBSCAN

# Scale each feature to [0, 1] before clustering.
scaler = MinMaxScaler()
scaler.fit(X)
X_norm = scaler.transform(X)

# With scaled data, eps must be small as well.
dbscan = DBSCAN(eps=0.05, min_samples=3, leaf_size=30)
clusters = dbscan.fit_predict(X_norm)
np.unique(dbscan.labels_)
array([-1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47])
Since DBSCAN is a density-based approach, I first tried sklearn's normalize (from sklearn.preprocessing import normalize), which rescales each sample to unit norm (it does not make the data Gaussian). It didn't work, and it shouldn't for DBSCAN, which requires each feature to be on a comparable scale.
So I went with MinMaxScaler, which brings each feature into a similar range. One thing to note: since the scaled data points are all less than 1, eps should be chosen in a similarly small range.
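As an aside (not part of the original answer), a common heuristic for choosing eps on the scaled data is the k-distance plot: sort each point's distance to its k-th nearest neighbor and look for the elbow. A minimal sketch, reusing X_norm from the snippet above and k = 3 to match min_samples:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

k = 3
nn = NearestNeighbors(n_neighbors=k).fit(X_norm)
distances, _ = nn.kneighbors(X_norm)

# Distance to the k-th nearest neighbor, sorted ascending;
# the "elbow" of this curve is a reasonable eps.
k_dist = np.sort(distances[:, -1])
plt.plot(k_dist)
plt.ylabel(f"distance to {k}-th nearest neighbor")
plt.show()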
Kudos :)

Matlab performance sum and loop

I am trying to implement a function that mimics a 2D convolution on a picture (https://www.tensorflow.org/api_docs/python/tf/nn/conv2d). I am not allowed to use a library for this. For a batch of pictures x(n, w, h, c) with size (50, 28, 28, 1), I calculate 32 features per pixel like so:
for im = 1:batch_size
    for i = 1:self.shape_input(2)
        for j = 1:self.shape_input(3)
            y_unagg = x_virtual(im, i:(self.pad*2+i), j:(self.pad*2+j), :, :) .* self.W_hat;
            y(im, i, j, :) = reshape(sum(sum(sum(y_unagg, 2), 3), 4), ...
                1, 1, 1, self.shape_filter(4));
        end
    end
end
This takes roughly half a second. The second time around, I calculate a batch with size (50, 14, 14, 32) and map it to 64 features; this time it takes 6 seconds. Is there any way I could speed this up?
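For reference (not from the original post), the standard fix is to replace the per-pixel loops with one big matrix multiplication, the im2col trick; only the small kernel is looped over. Here is a minimal NumPy sketch of the idea, which could be transcribed into MATLAB with im2col/reshape. The 5x5 filter size is an assumption, since the post only gives self.pad:

import numpy as np

def conv2d_im2col(x, f):
    # 'SAME' 2-D convolution via im2col.
    # x: (n, H, W, C) batch, f: (kh, kw, C, c_out) filters.
    n, H, W, C = x.shape
    kh, kw, _, c_out = f.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw), (0, 0)))

    # Gather every kh x kw x C patch into one matrix; the loops run
    # over the (small) kernel, not over every pixel.
    cols = np.empty((n, H, W, kh * kw * C), dtype=x.dtype)
    for i in range(kh):
        for j in range(kw):
            block = (i * kw + j) * C
            cols[..., block:block + C] = xp[:, i:i + H, j:j + W, :]

    # One big matmul replaces the per-pixel loops.
    out = cols.reshape(-1, kh * kw * C) @ f.reshape(-1, c_out)
    return out.reshape(n, H, W, c_out)

# Shapes from the question: (50, 28, 28, 1) -> 32 features per pixel.
x = np.random.rand(50, 28, 28, 1).astype(np.float32)
f = np.random.rand(5, 5, 1, 32).astype(np.float32)  # 5x5 assumed
print(conv2d_im2col(x, f).shape)  # (50, 28, 28, 32)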

Autoencoder - encoder vs decoder network size?

I've been reading up on autoencoders and all the examples I see mirror the encoder portion when building the decoder.
encoder = [128, 64, 32, 16, 3]
decoder = [3, 16, 32, 64, 128]
Is this just by convention?
Is there any specific reason the decoder should not have a different hidden-layer structure than the encoder? For example...
encoder = [128, 64, 32, 16, 3]
decoder = [3, 8, 96, 128]
as long as the inputs and outputs match.
Maybe I'm missing something obvious.
It's just a convention:
The architecture of a stacked autoencoder is typically symmetrical
with regards to the central hidden layer (the coding layer).
(c) Hands-On Machine Learning with Scikit-Learn and TensorFlow
In your case, the coding layer is the layer with size 3, so the stacked autoencoder has the shape 128, 64, 32, 16, 3, 16, 32, 64, 128.
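To make the point concrete, here is a minimal Keras sketch (my own illustration, not from the book) of the asymmetric variant from the question, reading the lists as layer widths with 128 as the data dimension and 3 as the code. It trains fine; the only hard constraint is that the decoder's output size matches the input size:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Encoder 128 -> 64 -> 32 -> 16 -> 3, asymmetric decoder 3 -> 8 -> 96 -> 128.
inp = tf.keras.Input(shape=(128,))
h = inp
for units in [64, 32, 16, 3]:
    h = layers.Dense(units, activation="relu")(h)
for units in [8, 96]:
    h = layers.Dense(units, activation="relu")(h)
out = layers.Dense(128)(h)  # only constraint: match the input size

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 128).astype("float32")  # stand-in data
autoencoder.fit(x, x, epochs=1, verbose=0)
print(autoencoder.predict(x[:1], verbose=0).shape)  # (1, 128)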

How to put labels on each data point in a stem plot using Matlab

This is my x and y data:
x = [29.745, 61.77, 42.57, 70.049, 108.51, 93.1, 135.47, 52.79, 77.91, 116.7, 100.71, 146.37, 125.53]
y = [6, 6, 12, 24, 24, 12, 24, 8, 24, 24, 24, 48, 8]
stem(x,y);
I want to label each data point on my stem plot; this is the output I want (I edited the image using Paint). Can Matlab do this vertical labeling, just like the image shows? Please help.
Yes, it can! You just need to set the rotation property of the text annotations to 90 and it works fine.
Example:
clear
clc
x = [29.745, 61.77, 42.57, 70.049, 108.51, 93.1, 135.47, 52.79, 77.91, 116.7, 100.71, 146.37, 125.53];
y = [6, 6, 12, 24, 24, 12, 24, 8, 24, 24, 24, 48, 8];
hStem = stem(x, y);
%// Create the labels.
Labels = {'none'; 'true'; 'false'; 'mean'; 'none'; ''; 'true'; 'hints'; 'high'; 'low'; 'peas'; 'far'; 'mid'};
%// Get the position of each stem.
X_data = get(hStem, 'XData');
Y_data = get(hStem, 'YData');
%// Place each label slightly above its stem, rotated 90 degrees.
for labelID = 1 : numel(X_data)
    text(X_data(labelID), Y_data(labelID) + 3, Labels{labelID}, 'HorizontalAlignment', 'center', 'rotation', 90);
end
Which gives the following:
The last label is a bit high so you might want to rescale the axes, but you get the idea.