Efficient way to extract a row and do cosine similarity - scala

In the code below, I get a dense Matrix V after doing SVD. Given a set of row indices (say 3, 7, 9), what I want is:
1. Extract the 3rd, 7th and 9th rows of Matrix V.
2. Calculate the cosine similarity of each of these 3 rows with every row of Matrix V.
3. For each row of V, add up the three cosine similarities obtained.
4. Finally, find the index of the row with the maximum sum.
import org.apache.spark.mllib.linalg.{Matrix, SingularValueDecomposition, Vector, Vectors}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

val data = Array(
  Vectors.sparse(5, Seq((1, 1.0), (3, 7.0))),
  Vectors.dense(2.0, 0.0, 3.0, 4.0, 5.0),
  Vectors.dense(4.0, 0.0, 0.0, 6.0, 7.0))
val dataRDD = sc.parallelize(data)
val mat: RowMatrix = new RowMatrix(dataRDD)

// Compute the top 4 singular values and corresponding singular vectors.
val svd: SingularValueDecomposition[RowMatrix, Matrix] = mat.computeSVD(4, computeU = true)
val U: RowMatrix = svd.U // The U factor is a RowMatrix.
val s: Vector = svd.s    // The singular values are stored in a local dense vector.
val V: Matrix = svd.V    // The V factor is a local dense matrix.
Please advise an efficient method to do this. I have been thinking of converting Matrix V to an IndexedRowMatrix, but when I use a row iterator on V, how do I keep track of the row indices? Is there a better way to do it?
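One way to keep the row index while iterating is Matrix.rowIter together with zipWithIndex. A minimal sketch (not from the original question, and assuming a Spark version where rowIter is available; since V is a local dense matrix this runs entirely on the driver):

import org.apache.spark.mllib.linalg.{Matrix, Vector}

// Plain cosine similarity between two local vectors.
def cosine(a: Vector, b: Vector): Double = {
  val dot = a.toArray.zip(b.toArray).map { case (x, y) => x * y }.sum
  val normA = math.sqrt(a.toArray.map(x => x * x).sum)
  val normB = math.sqrt(b.toArray.map(x => x * x).sum)
  if (normA == 0.0 || normB == 0.0) 0.0 else dot / (normA * normB)
}

// rowIter preserves row order, so zipWithIndex yields each row with its index.
def bestRowIndex(V: Matrix, selected: Seq[Int]): Int = {
  val rows = V.rowIter.toIndexedSeq
  val picked = selected.map(rows)
  rows.zipWithIndex
    .map { case (row, idx) => (idx, picked.map(p => cosine(p, row)).sum) }
    .maxBy(_._2)._1
}

For the question's example you would call something like bestRowIndex(V, Seq(3, 7, 9)), provided V actually has that many rows.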

Related

Convert RDD of Matrix to RDD of Vector

I have an RDD[Matrix[Double]] and want to convert it to an RDD[Vector] (each row in the Matrix will be converted to a Vector).
I've seen related answers like Convert Matrix to RowMatrix in Apache Spark using Scala, but that converts one Matrix to an RDD of Vector, while my case is an RDD of Matrix.
Use flatMap with a function that converts a Matrix to a Seq[Vector]:
import org.apache.spark.mllib.linalg.{DenseVector, Matrix, Vector}
import org.apache.spark.rdd.RDD

// from https://stackoverflow.com/a/28172826/1206998
def toSeqOfVector(m: Matrix): Seq[Vector] = {
  val columns = m.toArray.grouped(m.numRows) // toArray is column-major
  val rows = columns.toSeq.transpose         // Skip this if you want a column-major RDD.
  rows.map(row => new DenseVector(row.toArray))
}

val matrices: RDD[Matrix] = ??? // your input
val vectors: RDD[Vector] = matrices.flatMap(toSeqOfVector)
Note: I haven't tested this code, but this is the principle.

Cosine similarity of two sparse vectors in Scala Spark

I have a dataframe with two columns where each row has a sparse vector. I'm trying to find a proper way to calculate the cosine similarity (or just the dot product) of the two vectors in each row.
However, I haven't been able to find any library or tutorial that does this for sparse vectors.
The only way I found is the following (a rough code sketch of this route follows the list):
1. Create a k x n matrix, where the n items are described as k-dimensional vectors. To represent each item as a k-dimensional vector, you can use ALS, which represents each entity in a latent factor space. The dimension of this space (k) can be chosen by you. This k x n matrix can be represented as an RDD[Vector].
2. Convert this k x n matrix to a RowMatrix.
3. Use the columnSimilarities() function to get an n x n matrix of similarities between the n items.
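A hedged sketch of that route (variable names are illustrative, not from the question; note that columnSimilarities() compares columns, so the n items must be the columns of the RowMatrix):

import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, RowMatrix}
import org.apache.spark.rdd.RDD

// k rows, each an n-dimensional vector (items as columns), e.g. taken from an ALS model.
val latentRows: RDD[Vector] = ???

val mat = new RowMatrix(latentRows)
// Upper-triangular n x n similarity matrix, returned as a CoordinateMatrix.
val similarities: CoordinateMatrix = mat.columnSimilarities()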
I feel it is overkill to calculate all the cosine similarities for every pair when I need them only for specific pairs in my (quite big) dataframe.
In Spark 3 there is now a dot method on a SparseVector object, which takes another vector as its argument.
If you want to do this in earlier versions, you could create a user-defined function that follows this algorithm:
1. Take the intersection of your vectors' indices.
2. Get two subarrays of your vectors' values based on the indices from the intersection.
3. Do pairwise multiplication of the elements of those two subarrays.
4. Sum the values resulting from the pairwise multiplications.
Here's my implementation of it:
import org.apache.spark.ml.linalg.SparseVector

def dotProduct(vec: SparseVector, vecOther: SparseVector) = {
  // Only indices present in both vectors contribute to the dot product.
  val commonIndices = vec.indices intersect vecOther.indices
  commonIndices.map(x => vec(x) * vecOther(x)).reduce(_ + _)
}
I guess you know how to turn it into a Spark UDF from here and apply it to your dataframe's columns.
And if you normalize your sparse vectors with org.apache.spark.ml.feature.Normalizer before computing your dot product, you'll get cosine similarity in the end (by definition).
Great answer above by @Sergey-Zakharov, +1.
A few add-ons:
The reduce doesn't work on empty sequences, i.e. when the two vectors have no indices in common.
Make sure you compute the L2 normalization.
import org.apache.spark.ml.feature.Normalizer

val normalizer = new Normalizer()
  .setInputCol("features")
  .setOutputCol("normFeatures")
  .setP(2.0)
val l2NormData = normalizer.transform(df_features)
and
import org.apache.spark.ml.linalg.SparseVector
import org.apache.spark.sql.functions.{broadcast, col, udf}

val dotProduct = udf { (v1: SparseVector, v2: SparseVector) =>
  v1.indices.intersect(v2.indices).map(x => v1(x) * v2(x)).reduceOption(_ + _).getOrElse(0.0)
}
and then
val df = dfA.crossJoin(broadcast(dfB))
  .withColumn("dot", dotProduct(col("featuresA"), col("featuresB")))
If the number of vectors you want to calculate the dot product with is small, cache the RDD[Vector] table. Create a new table [cosine_vectors] that is a filter on the original table, selecting only the vectors you want the cosine similarities for. Broadcast join those two together and calculate, as sketched below.
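A rough sketch of that idea (not from the answers above; the DataFrame name, the id column, and the reuse of the dotProduct UDF and normalized features from the previous answer are all illustrative assumptions):

import org.apache.spark.sql.functions.{broadcast, col}

// Hypothetical input: one row per vector, with columns "id" and "normFeatures".
val wantedIds = Seq(3L, 7L, 9L)

// Keep only the vectors we actually want similarities for, renamed so the join
// result has distinct column names.
val cosineVectors = l2NormData
  .filter(col("id").isin(wantedIds: _*))
  .select(col("id").as("id_b"), col("normFeatures").as("featuresB"))

// Broadcast the small side and score every (row, wanted vector) pair.
val scored = l2NormData
  .select(col("id").as("id_a"), col("normFeatures").as("featuresA"))
  .crossJoin(broadcast(cosineVectors))
  .withColumn("dot", dotProduct(col("featuresA"), col("featuresB")))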

Similarity matrix using a spark dataframe

For an input dataframe the intent is to generate only half of the self-cartesian product. Given that the cartesian product results in a symmetric matrix, we only really need to calculate either the upper or the lower triangular portion (above, resp. below, the diagonal), with the rest left as zeros.
The dataframe crossJoin:
val df3 = df2.crossJoin(df2)
will generate the FULL cartesian product, which we do not want.
Given that the similarity matrix is symmetric with 1's along the diagonal, we do not need to calculate the upper half or the diagonal itself, only the LOWER triangular portion.
Any suggestions on how to obtain the result with the least computation?
The following is not a perfect answer: it does first generate the full cartesian product. But at least the output results are correct.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{StructField, StructType}

/** Generate the schema for the cartesian product of an input dataframe */
def joinSchema(df: DataFrame) =
  StructType(
    df.schema.fields.map { f => StructField(s"${f.name}_a", f.dataType, f.nullable) } ++
    df.schema.fields.map { f => StructField(s"${f.name}_b", f.dataType, f.nullable) }
  )

// Create the cartesian product via crossJoin
val schema = joinSchema(dfIn)
val df3 = dfIn.crossJoin(dfIn)
val cartesianDf = spark.createDataFrame(df3.rdd, schema)
cartesianDf.createOrReplaceTempView("cartesian")

// Retain only the lower-triangular entries below the diagonal
val lowerTriangle = spark.sql("select * from cartesian where id_a < id_b")

Reducing Block Matrices to their Sum

Is there an efficient way to reduce a block matrix to the sum of all of its values? I'm looking to calculate the Euclidean distance between two block matrices (d2, as defined in the response here: https://math.stackexchange.com/questions/507742/distance-similarity-between-two-matrices).
As a follow-up, there doesn't appear to be a simple way to subtract two block matrices. Is there any way to multiply each by a constant?
Edit: found a workaround for subtraction. V, W, and H are the three matrices. negOneBlock is a matrix of the size of V which contains only negative ones.
V.add((W.multiply(H)).multiply(negOneBlock))
Applying a sum to each block and then reducing should be quite efficient.
import org.apache.spark.mllib.linalg.distributed._
def sum(mat: BlockMatrix) = mat.blocks.map(_._2.toArray.sum).sum
where .blocks creates an RDD[((Int, Int), Matrix)], ._2 extracts the Matrix, and toArray.sum aggregates all the values in the block. For data like:
val mat: BlockMatrix = new CoordinateMatrix(sc.parallelize(Seq(
  MatrixEntry(0, 10, 1.0), MatrixEntry(10, 1024, 2.0),
  MatrixEntry(3000, 10, 3.0))
)).toBlockMatrix(128, 128)

sum(mat)
we get the expected result, which is 6.0.
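To get the Euclidean (Frobenius) distance the question actually asks about, one option is to go through the coordinate entries of both matrices instead of relying on the negative-ones workaround. This is a rough, unbenchmarked sketch, not part of the original answer:

import org.apache.spark.mllib.linalg.distributed.BlockMatrix

// Sums squared differences over all (row, col) positions present in either matrix.
def euclideanDistance(a: BlockMatrix, b: BlockMatrix): Double = {
  val aEntries = a.toCoordinateMatrix().entries.map(e => ((e.i, e.j), e.value))
  val bEntries = b.toCoordinateMatrix().entries.map(e => ((e.i, e.j), e.value))
  val sumOfSquares = aEntries.fullOuterJoin(bEntries).map { case (_, (av, bv)) =>
    val d = av.getOrElse(0.0) - bv.getOrElse(0.0)
    d * d
  }.sum()
  math.sqrt(sumOfSquares)
}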

Concatenate Sparse Vectors in Spark?

Say you have two sparse vectors. For example:
import org.apache.spark.ml.linalg.Vectors

val vec1 = Vectors.sparse(2, Array(0), Array(1.0)) // [1, 0]
val vec2 = Vectors.sparse(2, Array(1), Array(1.0)) // [0, 1]
I want to concatenate these two vectors so that the result is equivalent to:
val vec3 = Vectors.sparse(4, Array(0, 3), Array(1.0, 1.0)) // [1, 0, 0, 1]
Does Spark have a convenience method to do this?
If you have the data in a DataFrame, then VectorAssembler would be the right thing to use. For example:
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.linalg import Vectors

dataset = spark.createDataFrame(
    [(0, Vectors.sparse(10, {0: 0.6931, 5: 0.0, 7: 0.5754, 9: 0.2877}),
      Vectors.sparse(10, {3: 0.2877, 4: 0.6931, 5: 0.0, 6: 0.6931, 8: 0.6931}))],
    ["label", "userFeatures1", "userFeatures2"])

assembler = VectorAssembler(
    inputCols=["userFeatures1", "userFeatures2"],
    outputCol="features")

output = assembler.transform(dataset)
output.select("features", "label").show(truncate=False)
You would get the following output for this:
+---------------------------------------------------------------------------+-----+
|features |label|
+---------------------------------------------------------------------------+-----+
|(20,[0,7,9,13,14,16,18],[0.6931,0.5754,0.2877,0.2877,0.6931,0.6931,0.6931])|0    |
+---------------------------------------------------------------------------+-----+
I think you have a slight misunderstanding of SparseVectors, so a short explanation: the first argument is the number of features / columns / dimensions of the data, each entry of the List in the second argument represents the position of a feature, and the values in the third List represent the value for that column. SparseVectors are therefore locality sensitive, and from my point of view your approach is incorrect.
If you pay closer attention, you are summing or combining two vectors that have the same dimensions, so the real result would be different: the first argument tells us that the vector has only 2 dimensions, so [1,0] + [0,1] => [1,1], and the correct representation would be Vectors.sparse(2, [0,1], [1,1]), not four dimensions.
On the other hand, if each vector has two different dimensions and you are trying to combine them and represent them in a higher-dimensional space, say four, then your operation might be valid. However, this functionality isn't provided by the SparseVector class, and you would have to program a function to do that, something like (a bit imperative, but I accept suggestions):
import org.apache.spark.ml.linalg.SparseVector

def combine(v1: SparseVector, v2: SparseVector): SparseVector = {
  val size = v1.size + v2.size
  val maxIndex = v1.size
  // Shift the second vector's indices past the end of the first vector.
  val indices = v1.indices ++ v2.indices.map(e => e + maxIndex)
  val values = v1.values ++ v2.values
  new SparseVector(size, indices, values)
}
If your vectors represent different columns of a dataframe, you can use VectorAssembler. You just need to call setInputCols with your 2 vector columns, and Spark will make your wish come true ;)
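A minimal Scala sketch of that suggestion, reusing the column names from the PySpark example above (the variable names are illustrative):

import org.apache.spark.ml.feature.VectorAssembler

// Concatenate the two vector columns into a single "features" column.
val assembler = new VectorAssembler()
  .setInputCols(Array("userFeatures1", "userFeatures2"))
  .setOutputCol("features")

val output = assembler.transform(dataset)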