Recursively calculate columns and add to a Spark DataFrame in Scala

I am new to Scala and Apache Spark. I am trying to calculate the mean and standard deviation for a few columns in a Spark DataFrame and append the results to the source DataFrame, doing this recursively. The following is my function.
import scala.annotation.tailrec
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{mean, stddev_pop}

def get_meanstd_data(mergedDF: DataFrame, grpByList: Seq[String]): DataFrame = {
  val normFactors = Iterator("factor_1", "factor_2", "factor_3", "factor_4")

  def meanStdCalc(df: DataFrame, column: String): DataFrame = {
    val meanDF = df.select("column_1", column).groupBy(grpByList.head, grpByList.tail: _*)
      .agg(mean(column).as("mean_" + column))
    val stdDF = df.select("column_1", column).groupBy(grpByList.head, grpByList.tail: _*)
      .agg(stddev_pop(column).as("stddev_" + column))
    val finalDF = meanDF.join(stdDF, usingColumns = grpByList, joinType = "left")
    finalDF
  }

  def recursorFunc(df: DataFrame): DataFrame = {
    @tailrec
    def recursorHelper(acc: DataFrame): DataFrame = {
      if (!normFactors.hasNext) acc
      else recursorHelper(meanStdCalc(acc, normFactors.next()))
    }
    recursorHelper(df)
  }

  val finalDF = recursorFunc(mergedDF)
  finalDF
}
But when I call the function, the resulting DataFrame only contains the mean and standard deviation of "factor_4". How do I get a DataFrame with the mean and standard deviation of all the factors appended to the original DataFrame?
Any help is much appreciated.

You probably don't need a custom recursive method here; you could use a fold instead. Something like creating normFactors as a List and using foldLeft:
val normFactors = List("factor_1", "factor_2", "factor_3", "factor_4")
normFactors.foldLeft(mergedDF)((df, column) => meanStdCalc(df, column))
foldLeft lets you use the DataFrame itself as the accumulator.
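For the original goal of appending every factor's statistics to the source DataFrame, here is a minimal sketch of the fold approach, assuming meanStdCalc is reworked to join its aggregates back onto the accumulator so each factor's columns are added rather than replacing the previous result (mergedDF and grpByList are the inputs from the question):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{mean, stddev_pop}

// Hypothetical variant of meanStdCalc: compute the aggregates per group and join
// them back onto the accumulator, so previously appended columns are preserved.
def meanStdAppend(df: DataFrame, column: String, grpByList: Seq[String]): DataFrame = {
  val stats = df.groupBy(grpByList.head, grpByList.tail: _*)
    .agg(mean(column).as(s"mean_$column"), stddev_pop(column).as(s"stddev_$column"))
  df.join(stats, grpByList, "left")
}

val normFactors = List("factor_1", "factor_2", "factor_3", "factor_4")
val withStats = normFactors.foldLeft(mergedDF)((df, column) => meanStdAppend(df, column, grpByList))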

Related

Pyspark equivalent of Scala Spark

I have the following code in Scala:
val checkedValues = inputDf.rdd.map(row => {
  val size = row.length
  val items = for (i <- 0 until size) yield {
    val fieldName = row.schema.fieldNames(i)
    val sourceField = sourceFields(fieldName) // sourceFields is a map; looking up the field name returns another object
    val value = Option(row.get(i))
    sourceField.checkType(value)
  }
  items
})
Basically, the above snippet takes a Spark DataFrame, converts it into an RDD, and applies a map function to return an RDD that is just a collection of objects holding the data type and other information for each of the values in the DataFrame.
How would I go about writing something equivalent in PySpark, given that, among other things, schema is not an attribute of Row in PySpark?

How to pass DataSet(s) to a function that accepts DataFrame(s) as arguments in Apache Spark using Scala?

I have a library in Scala for Spark which contains many functions.
One example is the following function, which unions two DataFrames that have different columns:
def appendDF(df2: DataFrame): DataFrame = {
  val cols1 = df.columns.toSeq
  val cols2 = df2.columns.toSeq

  def expr(sourceCols: Seq[String], targetCols: Seq[String]): Seq[Column] = {
    targetCols.map({
      case x if sourceCols.contains(x) => col(x)
      case y => lit(null).as(y)
    })
  }

  // Both DataFrames need to pass through `expr` to guarantee the same column order, as needed for correct unions.
  df.select(expr(cols1, cols1): _*).union(df2.select(expr(cols2, cols1): _*))
}
I would like to apply this function (and many more) to a Dataset[CleanRow] rather than a DataFrame. CleanRow is a simple class here that defines the names and types of the columns.
My educated guess is to convert the Dataset into a DataFrame using the .toDF() method. However, I would like to know whether there are better ways to do it.
From my understanding, there shouldn't be many differences between Dataset and DataFrame, since a DataFrame is just a Dataset[Row]. Also, I think that from Spark 2.x onward the APIs for DataFrame and Dataset have been unified, so I was expecting to be able to pass either of them interchangeably, but that's not the case.
If changing the signature is possible:

import spark.implicits._
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.Dataset

def f[T](d: Dataset[T]): Dataset[T] = { d }

// You can pass a DataFrame (which is just a Dataset[Row]):
f(Seq(0, 1).toDF())
// res1: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [value: int]

// You can also pass a typed Dataset:
f(spark.createDataset(Seq(0, 1)))
// res2: org.apache.spark.sql.Dataset[Int] = [value: int]
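If the library's signatures cannot change, the asker's .toDF() roundtrip also works; here is a minimal sketch, assuming CleanRow is a case class with an implicit Encoder in scope and that appendDF is made available on DataFrame by the library (the field names are purely illustrative):

import org.apache.spark.sql.Dataset
import spark.implicits._

case class CleanRow(id: Long, name: String) // hypothetical column layout

val ds1: Dataset[CleanRow] = Seq(CleanRow(1L, "a")).toDS()
val ds2: Dataset[CleanRow] = Seq(CleanRow(2L, "b")).toDS()

// Drop to the untyped DataFrame API for the library call, then recover the typed view.
val combined: Dataset[CleanRow] = ds1.toDF().appendDF(ds2.toDF()).as[CleanRow]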

Spark Scala | Create DataFrames dynamically

I would like to create DataFrame names dynamically from a collection.
Please see below:
val set1 = Set("category1","category2","category3")
The following is a function which takes a string x from the set as input and generates a DataFrame accordingly:
def catDfgen(x: String): DataFrame = {
  spark.sql(s"select * from table where col1 = '$x'")
}
Now I need help here: I want not only to create the DataFrames but also to generate the DataFrame names dynamically, in order to achieve
val category1DF = catDfgen("category1")
val category2DF = catDfgen("category2")
...etc. Would it be possible to do it using the code below?
set1.map( x => val $x+"DF" = catDfgen($x))
If not, please suggest an effective method.
Suman, I believe the below might help your use-case
import org.apache.spark.sql.{DataFrame, SparkSession}

object Test extends App {
  val spark: SparkSession = SparkSession.builder().master("local").getOrCreate()

  val set1 = Set("category1", "category2", "category3")

  val dfs: Map[String, DataFrame] = set1.map(x =>
    (s"${x}DF", spark.sql(s"select * from table where col1 = '$x'").alias(s"${x}DF").toDF())
  ).toMap

  dfs("category1DF").show()

  spark.stop()
}

How to convert RDD[Row] to RDD[String]

I have a DataFrame called source, a table from MySQL:
val source = sqlContext.read.jdbc(jdbcUrl, "source", connectionProperties)
I have converted it to an RDD with
val sourceRdd = source.rdd
but it's an RDD[Row] and I need an RDD[String]
to do transformations like
sourceRdd.map(rec => (rec.split(",")(0).toInt, rec)), .subtractByKey(), etc.
Thank you
You can use the Row.mkString(sep: String): String method in a map call like this:
val sourceRdd = source.rdd.map(_.mkString(","))
You can change the "," separator to whatever you want.
Hope this helps you. Best regards.
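From there, a minimal sketch of the transformations the question mentions, assuming the first comma-separated field is an integer key and otherRdd is a hypothetical RDD keyed the same way that you want to subtract:

import org.apache.spark.rdd.RDD

val sourceRdd: RDD[String] = source.rdd.map(_.mkString(","))

// Key each record by its first field, then drop records whose keys appear in otherRdd.
val keyed: RDD[(Int, String)] = sourceRdd.map(rec => (rec.split(",")(0).toInt, rec))
val otherRdd: RDD[(Int, String)] = ??? // hypothetical RDD with the same key type
val remaining: RDD[(Int, String)] = keyed.subtractByKey(otherRdd)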
What is your schema?
If it's just a String, you can use:
import spark.implicits._
val sourceDS = source.as[String]
val sourceRdd = sourceDS.rdd // will give RDD[String]
Note: use sqlContext instead of spark in Spark 1.6. spark here is a SparkSession, a new class introduced in Spark 2.0 as the new entry point to SQL functionality; it should be used instead of SQLContext in Spark 2.x.
You can also create your own case classes.
You can also map the rows directly - here source is of type DataFrame and we use a partial function inside map:
val sourceRdd = source.rdd.map { case x: Row => x(0).asInstanceOf[String] }.map(s => s.split(","))

How to vectorize DataFrame columns for ML algorithms?

I have a DataFrame with some categorical string values (e.g. uuid|url|browser).
I would like to convert them to doubles to execute an ML algorithm that accepts a matrix of doubles.
As the conversion method I used StringIndexer (Spark 1.4), which maps my string values to double values, so I defined a function like this:
def str(arg: String, df: DataFrame): DataFrame = {
  val indexer = new StringIndexer().setInputCol(arg).setOutputCol(arg + "_index")
  val newDF = indexer.fit(df).transform(df)
  newDF
}
Now the issue is that I would like to iterate over each column of the df, call this function, and add (or convert) the original string column into the parsed double column, so the result would be:
Initial df:
[String: uuid|String: url|String: browser]
Final df:
[String: uuid|Double: uuid_index|String: url|Double: url_index|String: browser|Double: browser_index]
Thanks in advance
You can simply foldLeft over the Array of columns:
val transformed: DataFrame = df.columns.foldLeft(df)((df, arg) => str(arg, df))
Still, I would argue that it is not a good approach. Since str discards the StringIndexerModel, it cannot be reused when you get new data. Because of that, I would recommend using a Pipeline:
import org.apache.spark.ml.Pipeline

val transformers: Array[org.apache.spark.ml.PipelineStage] = df.columns.map(
  cname => new StringIndexer()
    .setInputCol(cname)
    .setOutputCol(s"${cname}_index")
)

// Add the rest of your pipeline, like VectorAssembler and the algorithm
val stages: Array[org.apache.spark.ml.PipelineStage] = transformers ++ ???
val pipeline = new Pipeline().setStages(stages)
val model = pipeline.fit(df)
model.transform(df)
VectorAssembler can be included like this:
val assembler = new VectorAssembler()
  .setInputCols(df.columns.map(cname => s"${cname}_index"))
  .setOutputCol("features")

val stages = transformers :+ assembler
You could also use RFormula, which is less customizable, but much more concise:
import org.apache.spark.ml.feature.RFormula
val rf = new RFormula().setFormula(" ~ uuid + url + browser - 1")
val rfModel = rf.fit(dataset)
rfModel.transform(dataset)
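The payoff of keeping the fitted model around (either the Pipeline model or rfModel) is that the learned string-to-index mappings can be reapplied to unseen data instead of being refit; a minimal sketch, where newDF is a hypothetical DataFrame with the same columns as df:

// Reuse the fitted pipeline on new data: the StringIndexerModels inside `model`
// apply the categorical mappings learned from `df`.
val newDF: DataFrame = ??? // hypothetical new data with the same schema as df
val indexedNew = model.transform(newDF)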