Geometric mean of columns in dataframe - pyspark

I use this code to compute the geometric mean of all rows within a dataframe:
from pyspark.sql.functions import rand, randn, sqrt
df = sqlContext.range(0, 10)
df = df.select(rand(seed=10).alias("c1"), randn(seed=27).alias("c2"))
df.show()
newdf = df.withColumn('total', sqrt(sum(df[col] for col in df.columns)))
newdf.show()
This displays:
To compute the geometric mean of the columns instead of the rows, I thought this code should suffice:
newdf = df.withColumn('total', sqrt(sum(df[row] for row in df.rows)))
But this throws the error: NameError: global name 'row' is not defined
So it appears the API for accessing columns is not the same as the one for accessing rows.
Should I reformat the data to convert rows to columns and then re-use the working algorithm newdf = df.withColumn('total', sqrt(sum(df[col] for col in df.columns))), or is there a solution that processes the rows and columns as is?

I am not sure your definition of the geometric mean is correct. According to Wikipedia, the geometric mean is defined as the nth root of the product of n numbers. According to the same page, it can also be expressed as the exponential of the arithmetic mean of the logarithms, and that is the form I will use to calculate the geometric mean of each column.
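As a quick sanity check of that identity (plain NumPy, not part of the original answer, and assuming all values are positive):
import numpy as np
x = np.array([2.0, 8.0])
print(np.prod(x) ** (1.0 / len(x)))   # nth root of the product -> 4.0
print(np.exp(np.mean(np.log(x))))     # exp of the mean of the logs -> 4.0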
You can calculate the geometric mean by combining the column data for c1 and c2 into a new column called value, storing the source column name in a column called column. After the data has been reshaped this way, the geometric mean is obtained by grouping by column (c1 or c2) and taking the exponential of the arithmetic mean of the logarithm of value for each group. NaN values are ignored in this calculation.
from pyspark.sql import functions as F

df = sqlContext.range(0, 10)
df = df.select(F.rand(seed=10).alias("c1"), F.randn(seed=27).alias("c2"))

# Tag each row with an id, then explode every (column name, value) pair into its own row
df_id = df.withColumn("id", F.monotonically_increasing_id())
kvp = F.explode(F.array([F.struct(F.lit(c).alias("column"), F.col(c).alias("value")) for c in df.columns])).alias("kvp")
df_pivoted = df_id.select(['id'] + [kvp]).select(['id'] + ["kvp.column", "kvp.value"])

# exp(avg(log(value))) per source column is the geometric mean of that column
df_geometric_mean = df_pivoted.groupBy(['column']).agg(F.exp(F.avg(F.log(df_pivoted.value))))
df_geometric_mean.withColumnRenamed("EXP(avg(LOG(value)))", "geometric_mean").show()
This returns:
+------+-------------------+
|column| geometric_mean|
+------+-------------------+
| c1|0.25618961513533134|
| c2| 0.415119290980354|
+------+-------------------+
These geometric means match, apart from display precision, the geometric means returned by scipy, provided NaN values are ignored there as well.
from scipy.stats.mstats import gmean

# Keep only the positive values, mirroring the NaN handling above
c1 = [x['c1'] for x in df.collect() if x['c1'] > 0]
c2 = [x['c2'] for x in df.collect() if x['c2'] > 0]
print('c1 : {0}\r\nc2 : {1}'.format(gmean(c1), gmean(c2)))
This snippet returns:
| c1|0.256189615135|
| c2|0.41511929098|
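For what it's worth, here is a shorter sketch of the same exp(avg(log(...))) idea applied to each column directly, without reshaping the data first (my own variant, not part of the answer above):
gm_cols = [F.exp(F.avg(F.log(F.col(c)))).alias(c + "_geometric_mean") for c in df.columns]
df.select(gm_cols).show()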

Related

Cosine similarity of two sparse vectors in Scala Spark

I have a dataframe with two columns where each row contains a SparseVector. I am trying to find a proper way to calculate the cosine similarity (or just the dot product) of the two vectors in each row.
However, I haven't been able to find any library or tutorial that does this for sparse vectors.
The only way I found is the following:
Create a k X n matrix, where the n items are described as k-dimensional vectors. For representing each item as a k-dimensional vector, you can use ALS, which represents each entity in a latent factor space. The dimension of this space (k) can be chosen by you. This k X n matrix can be represented as RDD[Vector].
Convert this k X n matrix to a RowMatrix.
Use the columnSimilarities() function to get an n X n matrix of similarities between the n items.
I feel it is overkill to calculate all the cosine similarities for every pair when I only need them for specific pairs in my (quite big) dataframe.
In Spark 3 there is now a dot method on SparseVector objects, which takes another vector as its argument.
If you want to do this in earlier versions, you could create a user-defined function that follows this algorithm:
Take the intersection of your vectors' indices.
Get two subarrays of your vectors' values based on the indices from the intersection.
Do pairwise multiplication of the elements of those two subarrays.
Sum the resulting values.
Here's my implementation of it:
import org.apache.spark.ml.linalg.SparseVector

def dotProduct(vec: SparseVector, vecOther: SparseVector) = {
  val commonIndices = vec.indices intersect vecOther.indices
  commonIndices.map(x => vec(x) * vecOther(x)).reduce(_ + _)
}
I guess you know how to turn it into a Spark UDF from here and apply it to your dataframe's columns.
And if you normalize your sparse vectors with org.apache.spark.ml.feature.Normalizer before computing your dot product, you'll get cosine similarity in the end (by definition).
Great answer above #Sergey-Zakharov +1.
A few add-ons:
The reduce doesn't work on empty sequences.
Make sure to compute the L2 normalization.
import org.apache.spark.ml.feature.Normalizer
import org.apache.spark.sql.functions.udf

val normalizer = new Normalizer()
  .setInputCol("features")
  .setOutputCol("normFeatures")
  .setP(2.0)

val l2NormData = normalizer.transform(df_features)
and
val dotProduct = udf { (v1: SparseVector, v2: SparseVector) =>
  v1.indices.intersect(v2.indices).map(x => v1(x) * v2(x)).reduceOption(_ + _).getOrElse(0.0)
}
and then
val df = dfA.crossJoin(broadcast(dfB))
  .withColumn("dot", dotProduct(col("featuresA"), col("featuresB")))
If the number of vectors you want to calculate the dot product with is small, cache the RDD[Vector] table. Create a new table [cosine_vectors] that is a filter on the original table to only select the vectors you want the cosine similarities for. Broadcast join those two together and calculate.

Similarity matrix using a spark dataframe

For an input DataFrame, the intent is to generate only half of the self-cartesian product. Since the cartesian product results in a symmetric matrix, we only really need to calculate either the upper or the lower triangular portion (above, resp. below, the diagonal, which is set to zeros):
The dataframe crossjoin:
val df3 = df2.crossJoin(df2)
will generate the FULL cartesian product - which we do not want.
Given that the similarity matrix is symmetric with 1's along the diagonal, we do not need to calculate the upper half or the diagonal itself - as shown in the LOWER DiagO's below:
Any suggestions on how to obtain the result with the least computation?
The following is not a perfect answer: it does result in first generating the full cartesian product. But at least the output results are correct.
/** Generate schema for cartesian product of an input dataframe */
def joinSchema(df: DataFrame) =
  types.StructType(df.schema.fields.map { f =>
    StructField(s"${f.name}_a", f.dataType, f.nullable)
  } ++ df.schema.fields.map { f =>
    StructField(s"${f.name}_b", f.dataType, f.nullable)
  })
// Create the cartesian product via crossJoin
val schema = joinSchema(dfIn)
val df3 = dfIn.crossJoin(dfIn)
val cartesianDf = spark.createDataFrame(df3.rdd, schema)
cartesianDf.createOrReplaceTempView("cartesian")

// Retain only the lower triangular entries below the diagonal
val result = spark.sql("select * from cartesian where id_a < id_b")

Matrix multiplication in py-spark using RDD

I have two matrices
# 3x3 matrix
X = [[10,7,3],[3 ,2,6],[5 ,8,7]]
# 3x4 matrix
Y = [[3,7,11,2],[2,7,4,10],[8,7,6,11]]
I want to multiply these two in Spark using RDDs. Can someone help me with this? The multiplication should not use any built-in function.
I was able to multiply the two using nested for loops in Python as follows:
for i in range(len(X)):
    # iterate through columns of Y
    for j in range(len(Y[0])):
        # iterate through rows of Y
        for k in range(len(Y)):
            Output[i][j] += X[i][k] * Y[k][j]
# Output is a 3*4 empty matrix
I am new to spark and using pyspark.
It is not so hard; you just have to write your matrix using a different notation.
X = [[10,7,3],[3 ,2,6],[5 ,8,7]]
Can be written as
X = (0,0,10),(0,1,7),(0,2,3)...
rdd_x = sc.parallelize([(0,0,10), (0,1,7), (0,2,3), ...])
rdd_y = sc.parallelize([(0,0,3), (0,1,7), (0,2,11), ...])
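If you would rather build those triples programmatically than type them out, here is a small sketch (my addition, assuming X and Y are the nested lists from the question):
rdd_x = sc.parallelize([(i, k, X[i][k]) for i in range(len(X)) for k in range(len(X[0]))])
rdd_y = sc.parallelize([(k, j, Y[k][j]) for k in range(len(Y)) for j in range(len(Y[0]))])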
Now you can do the multiplication using either join or cartesian (a sketch of the join variant follows the cartesian example below).
E.g.,
rdd_x.cartesian(rdd_y)\
    .filter(lambda x: x[0][0] == x[1][1] and x[0][1] == x[1][0])\
    .map(lambda x: (x[0][0], x[0][2] * x[1][2])).reduceByKey(lambda x, y: x + y).collect()
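Here is a rough sketch of the join-based variant (my addition, not part of the original answer): key X by its column index and Y by its row index, join on that shared index k, multiply the paired values, then sum per output cell (i, j).
x_by_k = rdd_x.map(lambda t: (t[1], (t[0], t[2])))   # (k, (i, x_ik))
y_by_k = rdd_y.map(lambda t: (t[0], (t[1], t[2])))   # (k, (j, y_kj))
x_by_k.join(y_by_k)\
    .map(lambda kv: ((kv[1][0][0], kv[1][1][0]), kv[1][0][1] * kv[1][1][1]))\
    .reduceByKey(lambda a, b: a + b)\
    .collect()   # [((i, j), value), ...]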
Your code works, but you should initialize Output, and only once. Note that [[0]*4]*3 would make all three rows share the same inner list, so use a comprehension instead:
Output = [[0] * 4 for _ in range(3)]
You're not using RDDs though, so your teacher won't be happy.
Based on Andrea's answer, I came up with this solution:
rdd_x.cartesian(rdd_y)\
    .filter(lambda x: x[0][1] == x[1][0])\
    .map(lambda x: ((x[0][0], x[1][1]), x[0][2] * x[1][2])).reduceByKey(lambda x, y: x + y).collect()

Concatenate Sparse Vectors in Spark?

Say you have two Sparse Vectors. As an example:
val vec1 = Vectors.sparse(2, List(0), List(1)) // [1, 0]
val vec2 = Vectors.sparse(2, List(1), List(1)) // [0, 1]
I want to concatenate these two vectors so that the result is equivalent to:
val vec3 = Vectors.sparse(4, List(0, 2), List(1, 1)) // [1, 0, 0, 1]
Does Spark have any such convenience method to do this?
If you have the data in a DataFrame, then VectorAssembler would be the right thing to use. For example:
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.linalg import Vectors

dataset = spark.createDataFrame(
    [(0, Vectors.sparse(10, {0: 0.6931, 5: 0.0, 7: 0.5754, 9: 0.2877}),
         Vectors.sparse(10, {3: 0.2877, 4: 0.6931, 5: 0.0, 6: 0.6931, 8: 0.6931}))],
    ["label", "userFeatures1", "userFeatures2"])

assembler = VectorAssembler(
    inputCols=["userFeatures1", "userFeatures2"],
    outputCol="features")

output = assembler.transform(dataset)
output.select("features", "label").show(truncate=False)
You would get the following output for this:
+---------------------------------------------------------------------------+-----+
|features |label|
+---------------------------------------------------------------------------+-----+
|(20,[0,7,9,13,14,16,18], [0.6931,0.5754,0.2877,0.2877,0.6931,0.6931,0.6931])|0|
+---------------------------------------------------------------------------+-----+
I think you have a slight problem understanding SparseVectors, so a short explanation: the first argument is the number of features / columns / dimensions of the data; each entry in the second list is the position (index) of a feature; and the values in the third list are the values for those positions. SparseVectors are therefore position sensitive, and from my point of view your approach is incorrect.
If you look more closely, you are summing or combining two vectors that have the same dimensions, so the real result would be different: the first argument tells us that the vectors have only 2 dimensions, so [1,0] + [0,1] => [1,1] and the correct representation would be Vectors.sparse(2, [0,1], [1,1]), not four dimensions.
On the other hand, if each vector has two different dimensions and you are trying to combine them and represent them in a higher-dimensional space, say four, then your operation might be valid. However, this functionality isn't provided by the SparseVector class, and you would have to write a function to do it, something like (a bit imperative, but I accept suggestions):
def combine(v1: SparseVector, v2: SparseVector): SparseVector = {
  val size = v1.size + v2.size
  val maxIndex = v1.size
  val indices = v1.indices ++ v2.indices.map(e => e + maxIndex)
  val values = v1.values ++ v2.values
  new SparseVector(size, indices, values)
}
If your vectors represent different columns of a dataframe, you can use VectorAssembler. You just need to call setInputCols with your 2 vector columns and Spark will make your wish come true ;)

Calculate Spearman Correlation on a Spark DataFrame

I would like to run a Spearman correlation on data that is currently in a Spark DataFrame. Currently, only the Pearson correlation calculation is available to operate on columns in a DataFrame. It appears that I can do a Spearman correlation using Spark's MLlib, but I need to pass two RDD[Double] to the function. The columns I want to compare are Double according to the current schema.
Is there a way to select the columns I want and make them an array of Doubles so that I can use the MLlib correlation function to get the Spearman correlation coefficient?
You can simply select the columns of interest, extract the values and compute the statistic:
import sqlContext.implicits._
import org.apache.spark.mllib.stat.Statistics

// Generate some random data
// (g is not defined in the original snippet; a simple stand-in is assumed here)
scala.util.Random.setSeed(1)
object g { def sample(n: Int): Seq[Double] = Seq.fill(n)(scala.util.Random.nextGaussian()) }
val df = sc.parallelize(g.sample(1000).zip(g.sample(1000))).toDF("x", "y")

// Select columns and extract values
val rddX = df.select($"x").rdd.map(_.getDouble(0))
val rddY = df.select($"y").rdd.map(_.getDouble(0))

val correlation: Double = Statistics.corr(rddX, rddY, "spearman")
You should be able to do something like this:
val firstRDD: RDD[Double] = yourDF.select("field1").rdd.map(row => row.getDouble(0))
val secondRDD: RDD[Double] = yourDF.select("field2").rdd.map(row => row.getDouble(0))
val corr = Statistics.corr(firstRDD, secondRDD, "spearman")