I would like to run a Spearman correlation on data that is currently in a Spark DataFrame. Currently, only the Pearson correlation calculation is available to operate on columns in a DataFrame. It appears that I can do a Spearman correlation using Spark's MLlib, but I need to pass two RDD[Double] to the function. The columns I want to compare are of type Double according to the current schema.
Is there a way to select the columns I want and turn them into RDD[Double] so that I can use the MLlib correlation function to get the Spearman correlation coefficient?
You can simply select columns of interest, extract values and compute statistics:
import sqlContext.implicits._
import org.apache.spark.mllib.stat.Statistics
// Generate some random data
scala.util.Random.setSeed(1)
val xs = Seq.fill(1000)(scala.util.Random.nextGaussian)
val ys = Seq.fill(1000)(scala.util.Random.nextGaussian)
val df = sc.parallelize(xs.zip(ys)).toDF("x", "y")
// Select columns and extract values
val rddX = df.select($"x").rdd.map(_.getDouble(0))
val rddY = df.select($"y").rdd.map(_.getDouble(0))
val correlation: Double = Statistics.corr(rddX, rddY, "spearman")
You should be able to do something like this:
val firstRDD: RDD[Double] = yourDF.select("field1").rdd.map(row => row.getDouble(0))
val secondRDD: RDD[Double] = yourDF.select("field2").rdd.map(row => row.getDouble(0))
val corr = Statistics.corr(firstRDD, secondRDD, "spearman")
I have an RDD[Matrix[Double]] and want to convert it to RDD[Vector] (each row in each Matrix becomes a Vector).
I've seen related answers such as Convert Matrix to RowMatrix in Apache Spark using Scala, but those cover converting a single Matrix to an RDD of Vectors, while my case is an RDD of Matrices.
Use flatMap with a function that converts a Matrix to a Seq[Vector]:
import org.apache.spark.mllib.linalg.{DenseVector, Matrix, Vector}
import org.apache.spark.rdd.RDD

// from https://stackoverflow.com/a/28172826/1206998
def toSeqOfVector(m: Matrix): Seq[Vector] = {
  val columns = m.toArray.grouped(m.numRows) // values are stored column-major
  val rows = columns.toSeq.transpose         // skip this if you want a column-major RDD
  rows.map(row => new DenseVector(row.toArray))
}
val matrices: RDD[Matrix] = ??? // your input
val vectors: RDD[Vector] = matrices.flatMap(toSeqOfVector)
Note: I didn't test this code, but it illustrates the principle.
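Since the code is untested, here is a minimal local check, assuming the mllib linalg types (Matrices.dense stores its values in column-major order):
import org.apache.spark.mllib.linalg.{Matrices, Matrix}

// 2 x 3 matrix stored column-major: columns are (1,2), (3,4), (5,6)
val m: Matrix = Matrices.dense(2, 3, Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))
// Expected rows: [1.0,3.0,5.0] and [2.0,4.0,6.0]
toSeqOfVector(m).foreach(println)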
For an input DataFrame the intent is to generate only half of its self-cartesian product. Since the cartesian product yields a symmetric matrix, we only really need to calculate either the upper or the lower triangular portion (above, respectively below, the diagonal, which is set to zeros).
The DataFrame crossJoin:
val df3 = df2.crossJoin(df2)
will generate the full product, which we do not want.
Since the similarity matrix is symmetric with 1's along the diagonal, we do not need to calculate the upper half or the diagonal itself, only the lower triangle below it.
Any suggestions on how to obtain the result with the least computation?
The following is not a perfect answer: it does result in first generating the full cartesian product. But at least the output results are correct.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{StructField, StructType}

/** Generate the schema for the cartesian product of an input dataframe */
def joinSchema(df: DataFrame): StructType =
  StructType(
    df.schema.fields.map { f => StructField(s"${f.name}_a", f.dataType, f.nullable) } ++
    df.schema.fields.map { f => StructField(s"${f.name}_b", f.dataType, f.nullable) }
  )
// Create the cartesian product via crossJoin
val schema = joinSchema(dfIn)
val df3 = dfIn.crossJoin(dfIn)
val cartesianDf = spark.createDataFrame(df3.rdd, schema)
cartesianDf.createOrReplaceTempView("cartesian")

// Retain only the lower triangular entries below the diagonal
spark.sql("select * from cartesian where id_a < id_b")
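A possible refinement, not part of the answer above and only a sketch: express the triangular condition directly as a self-join predicate on the id column assumed in the question, which avoids the temp view. Note that Spark still plans an inequality self-join as a cartesian/nested-loop join internally (and, depending on the version, may require spark.sql.crossJoin.enabled=true), so this simplifies the code more than the computation.
import org.apache.spark.sql.functions.col

// Keep only the pairs where the left id is smaller than the right id
val halfDf = dfIn.as("a").join(dfIn.as("b"), col("a.id") < col("b.id"))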
I am trying to calculate the distance between a row in a DataFrame and a vector (org.apache.spark.ml.linalg.Vector).
I plan to do anomaly detection with the K-Means algorithm, so I have the cluster center (centerid), which is a Vector, and I want to calculate the distance between it and each row in a DataFrame, but I get the error below:
Vectors.sqdist(v1,centerid)
<console>:54: error: type mismatch;
found : scala.collection.immutable.Vector[org.apache.spark.sql.Row]
How do I convert a Vector[org.apache.spark.sql.Row] to an org.apache.spark.ml.linalg.Vector?
You can use VectorAssembler to convert your Row to a features Vector. Try this:
import org.apache.spark.ml.feature.VectorAssembler

val df: DataFrame = ???
val assembler = new VectorAssembler().setInputCols(Array("yourInputColumns")).setOutputCol("features")
val withFeatures = assembler.transform(df)
As output you will get a DataFrame with an additional features column of type org.apache.spark.ml.linalg.Vector.
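From there, a possible sketch of the distance calculation the question is after, assuming centerid is the org.apache.spark.ml.linalg.Vector from the question and withFeatures / features are the DataFrame and column produced above:
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.functions.{col, udf}

// UDF computing the squared Euclidean distance from each feature vector to the centroid
val sqDistToCenter = udf((v: Vector) => Vectors.sqdist(v, centerid))
val withDistance = withFeatures.withColumn("sqdist", sqDistToCenter(col("features")))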
I have an IndexedRowMatrix of doubles. I want to compute the sum of each row of the matrix and save the results to a Vector. After that I want to broadcast this vector.
I am creating an RDD of Doubles, which contains the sums, but I cannot turn it into a vector.
So, the question basically is how to create the Vector I want from the IndexedRowMatrix.
Collect to the driver and construct a vector:
import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD

val sc: SparkContext = ???
val rdd: RDD[Double] = ???

val vec: Vector = Vectors.dense(rdd.collect)
val broadcastVec = sc.broadcast(vec)
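For the step the question actually asks about, computing the RDD[Double] of row sums from the IndexedRowMatrix, a possible sketch (names like matrix and rowSums are illustrative; sorting by row index keeps the collected sums in row order):
import org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix

val matrix: IndexedRowMatrix = ???
// Sum each row, keep its index, and sort so that collect preserves row order
val rowSums: Array[Double] = matrix.rows
  .map(row => (row.index, row.vector.toArray.sum))
  .sortByKey()
  .values
  .collect()
val rowSumVec: Vector = Vectors.dense(rowSums)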
References:
https://spark.apache.org/docs/2.1.0/mllib-data-types.html#local-vector
https://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables
I use this code to compute the geometric mean of all rows within a dataframe:
from pyspark.sql.functions import rand, randn, sqrt
df = sqlContext.range(0, 10)
df = df.select(rand(seed=10).alias("c1"), randn(seed=27).alias("c2"))
df.show()
newdf = df.withColumn('total', sqrt(sum(df[col] for col in df.columns)))
newdf.show()
This displays:
To compute the geometric mean of the columns instead of rows, I think this code should suffice:
newdf = df.withColumn('total', sqrt(sum(df[row] for row in df.rows)))
But this throws an error: NameError: global name 'row' is not defined
So it appears the API for accessing columns is not the same as for accessing rows.
Should I reshape the data to convert rows to columns and then reuse the working expression newdf = df.withColumn('total', sqrt(sum(df[col] for col in df.columns))), or is there a solution that processes the rows and columns as they are?
I am not sure your definition of geometric mean is correct. According to Wikipedia, the geometric mean is defined as the nth root of the product of n numbers. According to the same page, the geometric mean can also be expressed as the exponential of the arithmetic mean of logarithms. I shall use this to calculate the geometric mean of each column.
You can calculate the geometric mean by combining the column data for c1 and c2 into a new column called value, storing the source column name in a column called column. After the data has been reformatted, the geometric mean is determined by grouping by column (c1 or c2) and calculating the exponential of the arithmetic mean of the logarithm of value for each group. In this calculation NaN values are ignored.
from pyspark.sql import functions as F
df = sqlContext.range(0, 10)
df = df.select(F.rand(seed=10).alias("c1"), F.randn(seed=27).alias("c2"))
df_id = df.withColumn("id", F.monotonically_increasing_id())
kvp = F.explode(F.array([F.struct(F.lit(c).alias("column"), F.col(c).alias("value")) for c in df.columns])).alias("kvp")
df_pivoted = df_id.select(['id'] + [kvp]).select(['id'] + ["kvp.column", "kvp.value"])
df_geometric_mean = df_pivoted.groupBy(['column']).agg(F.exp(F.avg(F.log(df_pivoted.value))))
df_geometric_mean.withColumnRenamed("EXP(avg(LOG(value)))", "geometric_mean").show()
This returns:
+------+-------------------+
|column| geometric_mean|
+------+-------------------+
| c1|0.25618961513533134|
| c2| 0.415119290980354|
+------+-------------------+
These geometric means match, apart from display precision, the geometric means returned by scipy, provided NaN values are ignored there as well.
from scipy.stats.mstats import gmean
c1=[x['c1'] for x in df.collect() if x['c1']>0]
c2=[x['c2'] for x in df.collect() if x['c2']>0]
print 'c1 : {0}\r\nc2 : {1}'.format(gmean(c1),gmean(c2))
This snippet returns:
| c1|0.256189615135|
| c2|0.41511929098|