Array of vectors to arrays of arrays in Scala

I am new to Scala and I am struggling to convert an Array[Vector] to an Array[Array[Double]]. I thought that I could apply the .toArray method in a foreach like this:
val cents = centers.foreach(toArray)
where centers is an array of vectors. What is the correct way to do it?
The vectors are org.apache.spark.mllib.linalg.Vectors.
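For what it's worth, a minimal sketch of the usual fix: foreach returns Unit and discards its results, while map builds a new collection. This assumes centers: Array[Vector] with mllib vectors, whose toArray yields an Array[Double]:

import org.apache.spark.mllib.linalg.Vector

// map applies .toArray to every vector and collects the results
val cents: Array[Array[Double]] = centers.map(_.toArray)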

Related

Cosine similarity of two sparse vectors in Scala Spark

I have a dataframe with two columns where each row holds a SparseVector. I am trying to find a proper way to calculate the cosine similarity (or just the dot product) of the two vectors in each row.
However, I haven't been able to find any library or tutorial that does this for sparse vectors.
The only way I found is the following:
1. Create a k x n matrix, where the n items are described as k-dimensional vectors. For representing each item as a k-dimensional vector, you can use ALS, which represents each entity in a latent factor space. The dimension of this space (k) can be chosen by you. This k x n matrix can be represented as an RDD[Vector].
2. Convert this k x n matrix to a RowMatrix.
3. Use the columnSimilarities() function to get an n x n matrix of similarities between the n items.
I feel it is overkill to calculate all the cosine similarities for each pair while I need them only for the specific pairs in my (quite big) dataframe.
In Spark 3 there is now a dot method for a SparseVector object, which takes another vector as its argument.
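A quick sketch of what that looks like (relying on the Spark 3 dot method just mentioned):

import org.apache.spark.ml.linalg.Vectors

// two sparse vectors of size 4; only index 2 overlaps
val a = Vectors.sparse(4, Array(0, 2), Array(1.0, 3.0)).toSparse
val b = Vectors.sparse(4, Array(2, 3), Array(2.0, 4.0)).toSparse
val d = a.dot(b) // 3.0 * 2.0 = 6.0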
If you want to do this in earlier versions, you could create a user-defined function that follows this algorithm:
1. Take the intersection of your vectors' indices.
2. Get two subarrays of your vectors' values based on the indices from the intersection.
3. Do pairwise multiplication of the elements of those two subarrays.
4. Sum the values resulting from the pairwise multiplications.
Here's my implementation of it:
import org.apache.spark.ml.linalg.SparseVector

def dotProduct(vec: SparseVector, vecOther: SparseVector): Double = {
  // indices that appear in both sparse vectors
  val commonIndices = vec.indices intersect vecOther.indices
  // multiply the matching components and sum the products
  commonIndices.map(x => vec(x) * vecOther(x)).reduce(_ + _)
}
I guess you know how to turn it into a Spark UDF from here and apply it to your dataframe's columns.
And if you normalize your sparse vectors with org.apache.spark.ml.feature.Normalizer before computing your dot product, you'll get cosine similarity in the end (by definition).
Great answer above by @Sergey-Zakharov, +1.
A few add-ons:
The reduce doesn't work on empty sequences, so use reduceOption instead.
Make sure to compute the L2 normalization:
import org.apache.spark.ml.feature.Normalizer

// L2-normalize the feature vectors, so that the dot product of two
// normalized vectors equals their cosine similarity
val normalizer = new Normalizer()
  .setInputCol("features")
  .setOutputCol("normFeatures")
  .setP(2.0)
val l2NormData = normalizer.transform(df_features)
and
import org.apache.spark.ml.linalg.SparseVector
import org.apache.spark.sql.functions.udf

// reduceOption returns None when the vectors share no indices,
// which getOrElse maps to a dot product of 0.0
val dotProduct = udf { (v1: SparseVector, v2: SparseVector) =>
  v1.indices.intersect(v2.indices).map(x => v1(x) * v2(x)).reduceOption(_ + _).getOrElse(0.0)
}
and then
import org.apache.spark.sql.functions.{broadcast, col}

val df = dfA.crossJoin(broadcast(dfB))
  .withColumn("dot", dotProduct(col("featuresA"), col("featuresB")))
If the number of vectors you want to calculate the dot product with is small, cache the RDD[Vector] table. Create a new table, cosine_vectors, that is a filter on the original table and only selects the vectors you want the cosine similarities for. Broadcast-join those two together and calculate, as in the sketch below.
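A hypothetical sketch of that approach, reusing the dotProduct UDF from above; the names allVectors, wantedIds, id and normFeatures are made up for illustration:

import org.apache.spark.sql.functions.{broadcast, col}

// keep only the vectors we actually need, and cache the small side
val cosineVectors = allVectors
  .filter(col("id").isin(wantedIds: _*))
  .withColumnRenamed("id", "idB")
  .withColumnRenamed("normFeatures", "normFeaturesB")
  .cache()

// broadcasting the small side lets Spark avoid shuffling the big table
val sims = allVectors.crossJoin(broadcast(cosineVectors))
  .withColumn("cosine", dotProduct(col("normFeatures"), col("normFeaturesB")))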

Multiply each column of a RowMatrix by an integer

I'm wondering what would be a good way of multiplying each column of a RowMatrix by an integer (each row being multiplied by a different integer).
I know I could, for example, create a diagonal mllib Matrix containing the values a1 ... an (ai being the coefficient I want to multiply the ith column of the RowMatrix by), and then just use mllib's matrix multiplication (multiplying a RowMatrix by a Matrix, which yields a RowMatrix as a result). However, this is probably not efficient and does not show how to do things on a RowMatrix directly.
I'm new to writing functions on RowMatrices and tried looking a bit at some of the already existing ones, and was a bit confused.
Thanks for your help.
It's unclear whether you want to multiply each row or each column by a different integer. Your title and second paragraph say each column, but your first sentence says each row. Regardless, these sorts of operations are probably most easily implemented by calling .rows and operating on the underlying RDD[Vector]. For instance:
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

def multiplyColumns(m: RowMatrix, xs: Array[Double]): RowMatrix = {
  // zipping each row with xs pairs entry i (column i) with its coefficient xs(i)
  val newRowsRdd = m.rows.map { row =>
    Vectors.dense(row.toArray.zip(xs).map { case (a, b) => a * b })
  }
  new RowMatrix(newRowsRdd)
}
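If it's the rows you want scaled instead (row i multiplied by xs(i)), a sketch along the same lines needs the row index, which zipWithIndex on the underlying RDD provides (note the indices follow the RDD's ordering):

def multiplyRows(m: RowMatrix, xs: Array[Double]): RowMatrix = {
  val newRows = m.rows.zipWithIndex.map { case (row, i) =>
    // scale every entry of row i by xs(i)
    Vectors.dense(row.toArray.map(_ * xs(i.toInt)))
  }
  new RowMatrix(newRows)
}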

How to compute the average vector of a set of vectors in Scala?

I have an RDD of the form
(3,CompactBuffer((-0.063763,0.060122,0.250393), (0.006971,-0.096478,0.123718), (-0.198281,-0.079444,-0.015460)))
I need to calculate the average of the vectors in the CompactBuffer by doing a reduce action, something like this:
val averagevector = filteredvectors.reduce((a,b) => sum(b)/b.size)
My averagevector should be something like (3, (avg(1), avg(2), avg(3))), where avg(1) is the average of all the first elements in the CompactBuffer shown above.
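A hedged sketch of one way to get there, assuming filteredvectors is an RDD[(Int, Iterable[(Double, Double, Double)])] as the printed form suggests:

val averagevector = filteredvectors.mapValues { vs =>
  val n = vs.size
  // sum the tuples component-wise, then divide each component by the count
  val (x, y, z) = vs.reduce((a, b) => (a._1 + b._1, a._2 + b._2, a._3 + b._3))
  (x / n, y / n, z / n)
}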

Scala Breeze DenseMatrix to SparseMatrix conversion

I'm struggling to find a way to quickly convert a DenseMatrix to a SparseMatrix.
I tried flattening the DenseMatrix to an array, converting it to a sparse matrix, and then reshaping it, but this is not possible since there is no reshape function:
val dm = DenseMatrix((1,2,3),(0,0,0),(0,0,0))
val sm =CSCMatrix(dm.toArray)
sm.reshape(3,3)
error: value reshape is not a member of breeze.linalg.CSCMatrix[Int]
How about something like this:
val dm = DenseMatrix((1,2,3),(0,0,0),(0,0,0))
// tabulate fills the sparse matrix entry by entry from the dense one
val sm = CSCMatrix.tabulate(dm.rows, dm.cols)(dm(_, _))
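If you'd rather add only the non-zero entries explicitly, an alternative sketch using Breeze's CSCMatrix.Builder:

// collect the non-zero entries of dm into a sparse builder
val builder = new CSCMatrix.Builder[Int](rows = dm.rows, cols = dm.cols)
for (i <- 0 until dm.rows; j <- 0 until dm.cols if dm(i, j) != 0)
  builder.add(i, j, dm(i, j))
val sm2 = builder.result()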

How can I find the index of the maximum values along the rows of a matrix in Spark Scala?

I have a question about finding the index of the maximum values along the rows of a matrix. How can I do this in Spark Scala? This function would be like argmax in numpy in Python.
What's the type of your matrix? If it's a RowMatrix, you can access the RDD of its row vectors using rows.
Then it's a simple matter of finding the maximum of each vector of this RDD[Vector], if I understand correctly. You can therefore write myMatrix.rows.map{_.toArray.max}.
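Since you're after the index of the maximum (an argmax) rather than the value itself, a small variation with zipWithIndex returns the position instead; a sketch under the same assumptions:

// pair each value with its index and keep the index of the largest value per row
val argmaxPerRow = myMatrix.rows.map(_.toArray.zipWithIndex.maxBy(_._1)._2)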
If you have a DenseMatrix you can convert it to an Array, but note that Spark matrices are stored in column-major order, so transpose first: transposing and then converting gives you the original matrix's elements in row-major order. You can then use the number of columns, numCols, with the collections method grouped to obtain the rows.
myMatrix.transpose.toArray.grouped(myMatrix.numCols).map{_.max}
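And the corresponding argmax variant, again via zipWithIndex:

myMatrix.transpose.toArray.grouped(myMatrix.numCols).map(_.zipWithIndex.maxBy(_._1)._2)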
I think you will have to get the values as an array to get the maximum value; note that this gives the maximum over all entries of the matrix:
import org.apache.spark.mllib.linalg.{Matrices, Matrix}

val dm: Matrix = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
// max over every entry of the matrix, not per row
val result = dm.toArray.max
println(result)