How can a function to find Hamming distance be accelerated for bigger data in PostgreSQL?

I have a PostgreSQL database with more than 10,000 entries, and each entry has a bit array of size 10000. Is there any method to accelerate the Hamming distance calculation of the bit arrays for the whole table? Thanks.
I tried different data types such as bytea, text, and numeric for storing the bit array, and for calculating the Hamming distance I tried XOR operations, text comparison, and numeric addition respectively for each data type. But I could not optimize the function to make it fast enough; it currently takes almost 2 seconds for the operation. The target is 200 milliseconds.
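For reference, the XOR-plus-popcount idea can be made very fast outside the database. The sketch below is only an illustration, assuming the bytea bit arrays have been fetched into the client as packed bytes; the sizes mirror the question, but the names and data are hypothetical.

```python
# Bulk Hamming distance with XOR + an 8-bit popcount lookup table (NumPy).
import numpy as np

BITS = 10_000
N_BYTES = BITS // 8  # 1250 bytes per packed bit array

# POPCOUNT[b] = number of set bits in byte b
POPCOUNT = np.array([bin(b).count("1") for b in range(256)], dtype=np.uint16)

def hamming_all(rows: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Hamming distance of `query` against every row.

    rows  : (n_rows, N_BYTES) uint8 array of packed bit arrays
    query : (N_BYTES,) uint8 array
    """
    xor = np.bitwise_xor(rows, query)   # differing bits, still packed
    return POPCOUNT[xor].sum(axis=1)    # popcount per row

# Example with random data of roughly the question's size:
rng = np.random.default_rng(0)
rows = rng.integers(0, 256, size=(10_000, N_BYTES), dtype=np.uint8)
query = rng.integers(0, 256, size=N_BYTES, dtype=np.uint8)
distances = hamming_all(rows, query)    # one full-table pass, well under 200 ms
```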

There is no possibility of getting good performance for Hamming distance, because it is a recursive process with high algorithmic complexity and a very high memory footprint.
https://www.cs.swarthmore.edu/~brody/papers/random14-hamming-distance.pdf
It is not appropriate to use it on big datasets such as those in an RDBMS.
Some other comparison techniques exist that have lower complexity, no recursive process, and a minimal memory footprint... They are not as accurate as the Hamming distance, but they can do a good job, such as the one I wrote:
See "inférence basique" (basic inference).
You can combine the two... First use inférence basique to reduce the set, then use Hamming distance on the very few remaining results...

Related

Compression algorithm for contiguous numbers

I'm looking for an efficient encoding for storing simulated coefficients.
The data consists of thousands of curves, each with 512 contiguous single-precision numbers. The data may be stored as fixed point as long as it preserves about 23-bit precision (relative to unity level).
(The original question included a plot of example curves.)
My best approach was to convert the numbers to 24-bit fixed point. I then repeatedly took adjacent differences as long as the sum of squares decreased. When compressing the resulting data using LZMA (xz, lzip) I get about 7.5x compression (compared to float32).
Adjacent differences help at the beginning, but each pass emphasizes the quantization noise.
I've also tried the cosine transform after subtracting the slope/curve at the boundaries. The resulting compression was much weaker.
I tried AEC, but LZMA compressed much more strongly. The highest compression was with bzip3 (after adjacent differences).
I found no function to fit the data with high precision and a limited parameter count.
Is there a way to reduce the penalty of quantization noise when using adjacent differences?
Are there encodings which are better suited for this type of data?
You could try a higher-order predictor. Your "adjacent difference" is a zeroth-order predictor, where the next sample is predicted to be equal to the last sample. You take the differences between the actuals and the predictions, and then compress those differences.
You can try first, second, etc. order predictors. A first-order predictor would look at the last two samples, draw a line between those, and predict that the next sample will fall on the line. A second-order predictor would look at the last three samples, fit those to a parabola, and predict that the next sample will fall on the parabola. And so on.
Assuming that your samples are equally spaced on your x-axis, then the predictors for x[0] up through cubics are:
x[-1] (what you're using now)
2*x[-1] - x[-2]
3*x[-1] - 3*x[-2] + x[-3]
4*x[-1] - 6*x[-2] + 4*x[-3] - x[-4]
(Note that the coefficients are alternating-sign binomial coefficients.)
I doubt that the cubic polynomial predictor will be useful for you, but experiment with all of them to see if any help.
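As a concrete check, the residual of the order-k predictor above is just the (k+1)-th adjacent difference, so a few lines of NumPy let you compare predictor orders on your own curves. The curve below is synthetic, not the poster's data.

```python
# Compare predictor orders by the sum of squared residuals.
import numpy as np

def predictor_residuals(x: np.ndarray, order: int) -> np.ndarray:
    """Residuals (actual minus predicted) for the given predictor order.

    order 0 -> x[i] - x[i-1]
    order 1 -> x[i] - (2*x[i-1] - x[i-2])
    order 2 -> x[i] - (3*x[i-1] - 3*x[i-2] + x[i-3]), etc.
    """
    return np.diff(x, n=order + 1)

# Synthetic ~24-bit fixed-point curve with 512 samples.
t = np.linspace(0.0, 1.0, 512)
curve = np.round((np.sin(6 * t) + 0.1 * t * t) * (1 << 23)).astype(np.int64)

for order in range(4):
    resid = predictor_residuals(curve, order)
    print(order, int(np.sum(resid.astype(np.float64) ** 2)))
```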
Assuming that the differences are small, you should use a variable-length integer to represent them. The idea would be to use one byte for each difference most of the time. For example, you could code seven bits of difference, say -64 to 63, in one byte with the high bit clear. If the difference doesn't fit in that, then make the high bit set, and have a second byte with another seven bits for a total of 14 with that second high bit clear. And so on for larger differences.
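One concrete realization of that variable-length scheme (ZigZag mapping plus 7-bit continuation bytes, a common choice rather than necessarily the exact layout described above) might look like this sketch:

```python
# Variable-length integers: ZigZag-map signed differences to unsigned, then emit
# 7 bits per byte with the high bit as a "more bytes follow" flag.
def zigzag(n: int) -> int:
    """Standard 64-bit ZigZag: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return (n << 1) ^ (n >> 63)

def unzigzag(u: int) -> int:
    return (u >> 1) ^ -(u & 1)

def encode_varint(n: int, out: bytearray) -> None:
    u = zigzag(n)
    while True:
        byte = u & 0x7F
        u >>= 7
        if u:
            out.append(byte | 0x80)   # continuation: more bytes follow
        else:
            out.append(byte)          # last byte, high bit clear
            return

def decode_varints(data: bytes):
    u, shift = 0, 0
    for byte in data:
        u |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7
        else:
            yield unzigzag(u)
            u, shift = 0, 0

buf = bytearray()
for d in [0, -3, 63, -64, 500, -100000]:   # -64..63 fit in a single byte
    encode_varint(d, buf)
assert list(decode_varints(bytes(buf))) == [0, -3, 63, -64, 500, -100000]
```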

Pyspark columnSimilarities() usage for calculation of cosine similarities between products

I have a big dataset and need to calculate cosine similarities between products in the context of item-item collaborative filtering for product recommendations. As the data contains more than 50000 items and 25000 rows, I opted for using Spark and found the function columnSimilarities() which can be used on DistributedMatrix, specifically on a RowMatrix or IndexedRowMatrix.
But there are 2 issues I'm wondering about.
1) In the documentation, it's mentioned that:
A RowMatrix is backed by an RDD of its rows, where each row is a local vector. Since each row is represented by a local vector, the number of columns is limited by the integer range but it should be much smaller in practice.
As I have many products, it seems that RowMatrix is not the best choice for building the similarity matrix from my input, which is a Spark DataFrame. That's why I decided to start by converting the DataFrame to a CoordinateMatrix and then use toRowMatrix(), because columnSimilarities() requires a RowMatrix as input. However, I'm not sure about its performance...
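For reference, a minimal sketch of that conversion path (a DataFrame of (user, item, rating) triples to a CoordinateMatrix, then toRowMatrix() and columnSimilarities()) might look like the following; the column names and data are hypothetical, and nothing here addresses the performance question:

```python
# DataFrame -> CoordinateMatrix -> RowMatrix -> columnSimilarities()
from pyspark.sql import SparkSession
from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry

spark = SparkSession.builder.appName("item-item-cosine").getOrCreate()

# One row per (user, item, rating) triple, with integer indices.
df = spark.createDataFrame(
    [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)],
    ["user_idx", "item_idx", "rating"],
)

entries = df.rdd.map(lambda r: MatrixEntry(r.user_idx, r.item_idx, r.rating))
coord = CoordinateMatrix(entries)       # rows = users, columns = items
row_mat = coord.toRowMatrix()

# Exact cosine similarities between item columns; passing a threshold enables
# the approximate (DIMSUM) computation, which is much cheaper on large data.
sims = row_mat.columnSimilarities()     # CoordinateMatrix, upper triangle only

# Materialize (i, j, similarity) triples; mirror them if both directions are needed.
sim_triples = sims.entries.map(lambda e: (e.i, e.j, e.value))
print(sim_triples.take(5))
```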
2) I found out that:
the columnSimilarities method only returns the off diagonal entries of the upper triangular portion of the similarity matrix.
reference
Does this mean I cannot get the similarity vectors of all the products?
So your current strategy is to compute the similarity between each item, i, and each other item. This means at best you have to compute the upper triangle of the distance matrix, which is i*(i-1)/2 calculations. Then you have to sort for each of those i items.
If you are willing to trade off a little accuracy for runtime you can use approximate nearest neighbors (ANN). You might not find exactly the top NNs for an item, but you will find very similar items, and it will be orders of magnitude faster. No one dealing with moderately sized datasets calculates (or has the time to wait to calculate) the full set of distances.
Each ANN search method creates an index that will only generate a small set of candidates and compute distances within that subset (this is the fast part). The way the index is constructed provides different guarantees about the accuracy of the NN retrieval (this is the approximate part).
There are various ANN search libraries out there: annoy, nmslib, LSH. An accessible introduction is here: https://erikbern.com/2015/10/01/nearest-neighbors-and-vector-models-part-2-how-to-search-in-high-dimensional-spaces.html
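For example, a minimal sketch with Annoy (one of the libraries mentioned) could look like the following; the vector sizes and parameters are placeholders, and the item vectors are assumed to already exist as a dense array:

```python
# Approximate nearest neighbours with Annoy over hypothetical item vectors.
import numpy as np
from annoy import AnnoyIndex  # pip install annoy

n_items, dim = 50_000, 128          # items x latent-factor dimensions (placeholder)
rng = np.random.default_rng(42)
vectors = rng.normal(size=(n_items, dim)).astype(np.float32)

index = AnnoyIndex(dim, "angular")  # angular distance ~ cosine similarity
for item_id, vec in enumerate(vectors):
    index.add_item(item_id, vec.tolist())
index.build(10)                     # 10 trees; more trees -> better recall, bigger index

# Top-10 approximate neighbours of item 0 (the item itself comes back first);
# pass include_distances=True if the distances are needed as well.
print(index.get_nns_by_item(0, 10))
```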
HTH. Tim

Clustering, Large dataset, learning large number vocabulary words

I am trying to do clustering on a large dataset with dimensions:
rows: 1.4 million
cols: 900
expected number of clusters: 10,000 (10k)
The problem is: the size of my dataset is 10 GB, and I have 16 GB of RAM. I am trying to implement this in Matlab. It would be a big help if someone could respond to this.
P.S. So far I have tried hierarchical clustering. In one paper, they suggested going for "fixed radius incremental pre-clustering", but I didn't understand the procedure.
Thanks in advance.
Use some algorithm that does not require a distance matrix. Instead, choose one that can be index-accelerated.
Anything with a distance matrix will exceed your memory. But even when not requiring one (e.g., SLINK uses only O(n) memory) it still may take too long. Indexes could reduce the runtime to O(n log n), although on your data, indexes may have problems.
Index-accelerated algorithms are, for example, OPTICS and DBSCAN.
Just don't use the really bad Matlab scripts for these algorithms.
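Not Matlab, but as an illustration of that advice, here is a minimal Python sketch with scikit-learn's DBSCAN and OPTICS, which work from neighborhood queries against a tree index rather than an n-by-n distance matrix; the data and parameters are placeholders, not the poster's 1.4M x 900 matrix:

```python
# Density-based, index-backed clustering without a full distance matrix.
from sklearn.cluster import DBSCAN, OPTICS
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=5_000, n_features=50, centers=20, random_state=0)

# DBSCAN with a tree index: memory stays roughly O(n); -1 marks noise points.
labels = DBSCAN(eps=15.0, min_samples=10, algorithm="ball_tree").fit_predict(X)
print(len(set(labels)) - (1 if -1 in labels else 0), "DBSCAN clusters")

# OPTICS avoids fixing eps up front, at the cost of more runtime.
labels_optics = OPTICS(min_samples=10).fit_predict(X)
```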

better building of kd-trees

Has anyone ever tried improving kd-trees using the following method?
Dividing each numeric dimension via some 1-d clustering method (e.g. Jenks Natural Breaks Optimization, or Fayyad-Irani, or xyz...)
Sorting the dimensions on the expected value of the variance reduction within each division of that dimension
Building the KD-tree top-down selecting attributes from the order found in (2)
Breaking dimensions at each level of the KD-tree using the divisions found in (1)
And just to state the obvious: if (3) terminates when #rows is (say) less than 30, then nearest neighbor would require 30 distance measures, not N.
You want the tree to be balanced, so there is not much leeway in terms of where to split.
Also, you want the construction to be fast.
If you put in an O(n^2) method during construction, construction will likely be the new bottleneck.
In many cases, the very simple (original) k-d-tree is just as fast as any of the "optimized" techniques that try to determine the "best" splitting axis.
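To illustrate that last point, a plain k-d tree with a leaf size of around 30 (so each leaf is finished with brute-force distance checks, as the question notes) is a one-liner with SciPy; the data here is synthetic:

```python
# A standard k-d tree with median splits and a ~30-point leaf size.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.uniform(size=(100_000, 8))   # hypothetical 8-dimensional data

tree = cKDTree(points, leafsize=30)       # built in O(n log n)
query = rng.uniform(size=(1, 8))
dist, idx = tree.query(query, k=5)        # 5 nearest neighbours of the query point
print(idx, dist)
```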

Data clustering algorithm

What is the most popular text clustering algorithm which deals with large dimensions and huge dataset and is fast?
I am getting confused after reading so many papers and so many approaches... Now I just want to know which one is used most, to have a good starting point for writing a clustering application for documents.
To deal with the curse of dimensionality you can try to determine the blind sources (i.e. topics) that generated your dataset. You could use Principal Component Analysis or Factor Analysis to reduce the dimensionality of your feature set and to compute useful indexes.
PCA is what is used in Latent Semantic Indexing, since SVD can be shown to be equivalent to PCA :)
Remember that you can lose interpretability when you take the principal components of your dataset or its factors, so you may want to go the Non-Negative Matrix Factorization route. (And here is the punchline: K-Means is a particular case of NNMF!) In NNMF the dataset can be explained just by its additive, non-negative components.
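As a concrete illustration of both routes, here is a small scikit-learn sketch applying truncated SVD (i.e. LSI) and NMF to a tf-idf matrix; the toy corpus and component counts are placeholders:

```python
# Topic-style dimensionality reduction of a tf-idf matrix: truncated SVD and NMF.
from sklearn.decomposition import NMF, TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell sharply today",
    "investors worry about the markets",
]

X = TfidfVectorizer().fit_transform(docs)     # documents x terms, sparse

# Latent Semantic Indexing: project documents onto the top singular directions.
lsi = TruncatedSVD(n_components=2).fit_transform(X)

# NMF: additive, non-negative "topics"; often easier to interpret than PCA/SVD.
nmf = NMF(n_components=2, init="nndsvd", max_iter=500)
doc_topics = nmf.fit_transform(X)             # document-topic weights
topic_terms = nmf.components_                 # topic-term weights
print(doc_topics.round(2))
```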
There is no one-size-fits-all approach. Hierarchical clustering is always an option. If you want distinct groups formed out of the data, you can go with K-means clustering (it is also supposedly less computationally intensive).
The two most popular document clustering approaches are hierarchical clustering and k-means. k-means is faster, as it is linear in the number of documents, whereas hierarchical clustering is quadratic but is generally believed to give better results. Each document in the dataset is usually represented as an n-dimensional vector (n is the number of words), with the magnitude of the dimension corresponding to each word equal to its term frequency-inverse document frequency score. The tf-idf score reduces the importance of high-frequency words in the similarity calculation. The cosine similarity is often used as a similarity measure.
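A minimal scikit-learn sketch of that pipeline (tf-idf vectors, cosine similarity, k-means) might look like this; the documents and cluster count are placeholders:

```python
# tf-idf representation, cosine similarity, and k-means clustering.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["cats and dogs", "dogs chase cats", "stocks fell", "markets and stocks"]

X = TfidfVectorizer().fit_transform(docs)     # n_docs x n_terms, tf-idf weighted

print(cosine_similarity(X[0], X[1]))          # similarity between two documents

# tf-idf rows are L2-normalized by default, so Euclidean k-means behaves much
# like clustering by cosine similarity ("spherical" k-means in spirit).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```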
A paper comparing experimental results between hierarchical and bisecting k-means, a cousin algorithm to k-means, can be found here.
The simplest approaches to dimensionality reduction in document clustering are: a) throw out all rare and highly frequent words (say occurring in less than 1% and more than 60% of documents: this is somewhat arbitrary, you need to try different ranges for each dataset to see the impact on results), b) stopping: throw out all words in a stop list of common English words: lists can be found online, and c) stemming, or removing suffixes to leave only word roots. The most common stemmer is the one designed by Martin Porter. Implementations in many languages can be found here. Usually, this will reduce the number of unique words in a dataset to a few hundred or low thousands, and further dimensionality reduction may not be required. Otherwise, techniques like PCA could be used.
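For illustration, those three steps could be wired up as follows with scikit-learn and NLTK's Porter stemmer; the 1%/60% cut-offs are the example values from the text and should be tuned per dataset:

```python
# (a) frequency cut-offs, (b) stop-word removal, (c) Porter stemming.
from nltk.stem import PorterStemmer                     # pip install nltk
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS

stemmer = PorterStemmer()

def stem_tokens(text):
    # crude whitespace tokenization, stop-word removal, then Porter stemming
    return [stemmer.stem(tok) for tok in text.lower().split()
            if tok not in ENGLISH_STOP_WORDS]

vectorizer = TfidfVectorizer(
    tokenizer=stem_tokens,   # (b) + (c) handled in the tokenizer
    min_df=0.01,             # (a) drop words in fewer than 1% of documents
    max_df=0.60,             #     and words in more than 60% of documents
)

docs = ["The cats are running", "A cat runs quickly", "Stocks are falling fast"]
X = vectorizer.fit_transform(docs)
print(vectorizer.get_feature_names_out())
```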
I would stick with k-medoids, since you can compute the distance from any point to any point at the beginning of the algorithm. You only need to do this once, and it saves you time, especially if there are many dimensions. This algorithm works by choosing as the center of a cluster the point that is nearest to it, not a centroid calculated from the averages of the points belonging to that cluster. Therefore you have all possible distance calculations already done for you in this algorithm.
In the case where you aren't looking for semantic text clustering (I can't tell if this is a requirement or not from your original question), try using Levenshtein distance and building a similarity matrix with it. From this, you can use k-medoids to cluster and subsequently validate your clustering through the use of silhouette coefficients. Unfortunately, Levenshtein can be quite slow, but there are ways to speed it up through the use of thresholds and other methods.
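A hedged sketch of that route: pairwise Levenshtein distances, k-medoids over the precomputed matrix (here via scikit-learn-extra's KMedoids, which is an assumption about tooling), and a silhouette score as a sanity check. The brute-force pairwise step is only practical for modest numbers of strings:

```python
# Levenshtein distance matrix -> k-medoids -> silhouette validation.
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids   # pip install scikit-learn-extra

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

texts = ["apple pie", "apple tart", "appl pie", "stock market", "stock markets"]
n = len(texts)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = levenshtein(texts[i], texts[j])

labels = KMedoids(n_clusters=2, metric="precomputed", random_state=0).fit_predict(D)
print(labels, silhouette_score(D, labels, metric="precomputed"))
```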
Another way to deal with the curse of dimensionality would be to find 'contrasting sets': conjunctions of attribute-value pairs that are more prominent in one group than in the rest. You can then use those contrasting sets as dimensions, either in lieu of the original attributes or with a restricted number of attributes.