Spark SQL is pretty clear to me. However, I am just getting started with Spark's RDD API. As Spark apply function to columns in parallel points out, this should allow me to get rid of slow shuffles for
def handleBias(df: DataFrame, colName: String, target: String = this.target) = {
val w1 = Window.partitionBy(colName)
val w2 = Window.partitionBy(colName, target)
df.withColumn("cnt_group", count("*").over(w2))
.withColumn("pre2_" + colName, mean(target).over(w1))
.withColumn("pre_" + colName, coalesce(min(col("cnt_group") / col("cnt_foo_eq_1")).over(w1), lit(0D)))
.drop("cnt_group")
}
In pseudocode: df foreach column (handleBias(column))
So a minimal data frame is loaded up
val input = Seq(
(0, "A", "B", "C", "D"),
(1, "A", "B", "C", "D"),
(0, "d", "a", "jkl", "d"),
(0, "d", "g", "C", "D"),
(1, "A", "d", "t", "k"),
(1, "d", "c", "C", "D"),
(1, "c", "B", "C", "D")
)
val inputDf = input.toDF("TARGET", "col1", "col2", "col3TooMany", "col4")
but it fails to map correctly:
val rdd1_inputDf = inputDf.rdd.flatMap { x => {(0 until x.size).map(idx => (idx, x(idx)))}}
rdd1_inputDf.toDF.show
It fails with
java.lang.ClassNotFoundException: scala.Any
An example can be found at https://github.com/geoHeil/sparkContrastCoding, specifically https://github.com/geoHeil/sparkContrastCoding/blob/master/src/main/scala/ColumnParallel.scala, for the problem outlined in this question.
When you call .rdd on a DataFrame you get an RDD[Row], which is not strongly typed (in your flatMap the mix of Int and String values is widened to Any, for which Spark has no encoder, hence the scala.Any error). If you want to be able to map over the elements you will need to pattern match over Row:
scala> val input = Seq(
| (0, "A", "B", "C", "D"),
| (1, "A", "B", "C", "D"),
| (0, "d", "a", "jkl", "d"),
| (0, "d", "g", "C", "D"),
| (1, "A", "d", "t", "k"),
| (1, "d", "c", "C", "D"),
| (1, "c", "B", "C", "D")
| )
input: Seq[(Int, String, String, String, String)] = List((0,A,B,C,D), (1,A,B,C,D), (0,d,a,jkl,d), (0,d,g,C,D), (1,A,d,t,k), (1,d,c,C,D), (1,c,B,C,D))
scala> val inputDf = input.toDF("TARGET", "col1", "col2", "col3TooMany", "col4")
inputDf: org.apache.spark.sql.DataFrame = [TARGET: int, col1: string ... 3 more fields]
scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row
scala> val rowRDD = inputDf.rdd
rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[3] at rdd at <console>:27
scala> val typedRDD = rowRDD.map{case Row(a: Int, b: String, c: String, d: String, e: String) => (a,b,c,d,e)}
typedRDD: org.apache.spark.rdd.RDD[(Int, String, String, String, String)] = MapPartitionsRDD[20] at map at <console>:29
scala> typedRDD.keyBy(_._1).groupByKey.foreach{println}
(0,CompactBuffer((A,B,C,D), (d,a,jkl,d), (d,g,C,D)))
(1,CompactBuffer((A,B,C,D), (A,d,t,k), (d,c,C,D), (c,B,C,D)))
Otherwise you can use a typed Dataset:
scala> val ds = input.toDS
ds: org.apache.spark.sql.Dataset[(Int, String, String, String, String)] = [_1: int, _2: string ... 3 more fields]
scala> ds.rdd
res2: org.apache.spark.rdd.RDD[(Int, String, String, String, String)] = MapPartitionsRDD[8] at rdd at <console>:30
scala> ds.rdd.keyBy(_._1).groupByKey.foreach{println}
(0,CompactBuffer((0,A,B,C,D), (0,d,a,jkl,d), (0,d,g,C,D)))
(1,CompactBuffer((1,A,B,C,D), (1,A,d,t,k), (1,d,c,C,D), (1,c,B,C,D)))
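For reference, the flatMap from the question can also be made encodable by giving every value a single type (here String), so the element type becomes (Int, String) instead of (Int, Any). A minimal sketch, assuming spark.implicits._ is in scope as in the session above; the names indexedValues, columnIndex and value are just illustrative:
// pair each column index with its value rendered as a String
val indexedValues = inputDf.rdd.flatMap { row =>
  (0 until row.size).map(idx => (idx, row(idx).toString))
}
indexedValues.toDF("columnIndex", "value").show()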
Related
Spark will process the data in parallel, but not the operations. In my DAG I want to call a function per column, like in Spark processing columns in parallel; the values for each column can be calculated independently from the other columns. Is there any way to achieve such parallelism via the Spark SQL API? Utilizing window functions (Spark dynamic DAG is a lot slower and different from hard coded DAG) helped to optimize the DAG a lot, but it only executes in a serial fashion.
An example which contains a little bit more information can be found at https://github.com/geoHeil/sparkContrastCoding. The minimal example is below:
val df = Seq(
(0, "A", "B", "C", "D"),
(1, "A", "B", "C", "D"),
(0, "d", "a", "jkl", "d"),
(0, "d", "g", "C", "D"),
(1, "A", "d", "t", "k"),
(1, "d", "c", "C", "D"),
(1, "c", "B", "C", "D")
).toDF("TARGET", "col1", "col2", "col3TooMany", "col4")
val inputToDrop = Seq("col3TooMany")
val inputToBias = Seq("col1", "col2")
val targetCounts = df.filter(df("TARGET") === 1).groupBy("TARGET").agg(count("TARGET").as("cnt_foo_eq_1"))
val newDF = df.toDF.join(broadcast(targetCounts), Seq("TARGET"), "left")
newDF.cache
def handleBias(df: DataFrame, colName: String, target: String = this.target) = {
val w1 = Window.partitionBy(colName)
val w2 = Window.partitionBy(colName, target)
df.withColumn("cnt_group", count("*").over(w2))
.withColumn("pre2_" + colName, mean(target).over(w1))
.withColumn("pre_" + colName, coalesce(min(col("cnt_group") / col("cnt_foo_eq_1")).over(w1), lit(0D)))
.drop("cnt_group")
}
val joinUDF = udf((newColumn: String, newValue: String, codingVariant: Int, results: Map[String, Map[String, Seq[Double]]]) => {
results.get(newColumn) match {
case Some(tt) => {
val nestedArray = tt.getOrElse(newValue, Seq(0.0))
if (codingVariant == 0) {
nestedArray.head
} else {
nestedArray.last
}
}
case None => throw new Exception("Column not contained in initial data frame")
}
})
Now I want to apply my handleBias function to all the columns; unfortunately, this is not executed in parallel.
val res = (inputToDrop ++ inputToBias).toSet.foldLeft(newDF) {
(currentDF, colName) =>
{
logger.info("using col " + colName)
handleBias(currentDF, colName)
}
}
.drop("cnt_foo_eq_1")
val combined = ((inputToDrop ++ inputToBias).toSet).foldLeft(res) {
(currentDF, colName) =>
{
currentDF
.withColumn("combined_" + colName, map(col(colName), array(col("pre_" + colName), col("pre2_" + colName))))
}
}
val columnsToUse = combined
.select(combined.columns
.filter(_.startsWith("combined_"))
map (combined(_)): _*)
val newNames = columnsToUse.columns.map(_.split("combined_").last)
val renamed = columnsToUse.toDF(newNames: _*)
val cols = renamed.columns
val localData = renamed.collect
val columnsMap = cols.map { colName =>
colName -> localData.flatMap(_.getAs[Map[String, Seq[Double]]](colName)).toMap
}.toMap
values for each column could be calculated independently from other columns
While that is true, it doesn't really help your case. You can generate a number of independent DataFrames, each one with its own additions, but that doesn't mean you can automatically combine them into a single execution plan.
Each application of handleBias shuffles your data twice, and the output DataFrames don't have the same data distribution as the parent DataFrame. This is why, when you fold over the list of columns, each addition has to be performed separately.
Theoretically you could design a pipeline which can be expressed (with pseudocode) like this:
add unique id:
df_with_id = df.withColumn("id", unique_id())
compute each df independently and convert to long format:
dfs = for (c in columns)
  yield handle_bias(df, c).withColumn(
    "pres", explode([(pre_name, pre_value), (pre2_name, pre2_value)])
  )
union all partial results:
combined = dfs.reduce(union)
pivot to convert from long to wide format:
combined.groupBy("id").pivot("pres._1").agg(first("pres._2"))
but I doubt it is worth all the fuss. The process you use is extremely heavy as it is and requires significant network and disk IO.
If the total number of levels (sum(count(distinct x) for x in columns)) is relatively low, you can try to compute all the statistics in a single pass using, for example, aggregateByKey with Map[Tuple2[_, _], StatCounter]; otherwise consider downsampling to a level where you can compute the statistics locally.
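A hypothetical sketch of that single-pass idea, keyed by (column name, level) and accumulating the TARGET values in a StatCounter; the column list and names here are assumptions based on the question's df:
import org.apache.spark.util.StatCounter

val biasCols = Seq("col1", "col2", "col3TooMany", "col4")

val levelStats: Map[(String, String), StatCounter] = df.rdd
  .flatMap { row =>
    val target = row.getAs[Int]("TARGET").toDouble
    biasCols.map(c => ((c, row.getAs[String](c)), target))
  }
  .aggregateByKey(new StatCounter())(
    (acc, v) => acc.merge(v),           // fold one target value into the accumulator
    (left, right) => left.merge(right)) // merge per-partition accumulators
  .collectAsMap()
  .toMap

// levelStats(("col1", "A")).mean then roughly plays the role of pre2_col1 for value "A"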
The following code gives a DataFrame having three values in each column, as shown below.
import org.graphframes._
import org.apache.spark.sql.DataFrame
val v = sqlContext.createDataFrame(List(
("1", "Al"),
("2", "B"),
("3", "C"),
("4", "D"),
("5", "E")
)).toDF("id", "name")
val e = sqlContext.createDataFrame(List(
("1", "3", 5),
("1", "2", 8),
("2", "3", 6),
("2", "4", 7),
("2", "1", 8),
("3", "1", 5),
("3", "2", 6),
("4", "2", 7),
("4", "5", 8),
("5", "4", 8)
)).toDF("src", "dst", "property")
val g = GraphFrame(v, e)
val paths: DataFrame = g.bfs.fromExpr("id = '1'").toExpr("id = '5'").run()
paths.show()
val df=paths
df.select(df.columns.filter(_.startsWith("e")).map(df(_)) : _*).show
The output of the above code is given below:
+-------+-------+-------+
| e0| e1| e2|
+-------+-------+-------+
|[1,2,8]|[2,4,7]|[4,5,8]|
+-------+-------+-------+
In the above output, we can see that each column has three values and they can be interpreted as follows.
e0: source 1, destination 2, distance 8
e1: source 2, destination 4, distance 7
e2: source 4, destination 5, distance 8
Basically e0, e1, and e2 are the edges. I want to sum the third element of each column, i.e. add the distance of each edge to get the total distance. How can I achieve this?
It can be done like this:
import org.apache.spark.sql.functions.col

val total = df.columns.filter(_.startsWith("e"))
  .map(c => col(s"$c.property")) // or col(c).getItem("property")
  .reduce(_ + _)                 // one Column that adds up the property field of every edge

df.withColumn("total", total)
I would make a collection of the columns to sum and then use a foldLeft on a UDF:
scala> val df = Seq((Array(1,2,8),Array(2,4,7),Array(4,5,8))).toDF("e0", "e1", "e2")
df: org.apache.spark.sql.DataFrame = [e0: array<int>, e1: array<int>, e2: array<int>]
scala> df.show
+---------+---------+---------+
| e0| e1| e2|
+---------+---------+---------+
|[1, 2, 8]|[2, 4, 7]|[4, 5, 8]|
+---------+---------+---------+
scala> val colsToSum = df.columns
colsToSum: Array[String] = Array(e0, e1, e2)
scala> val accLastUDF = udf((acc: Int, col: Seq[Int]) => acc + col.last)
accLastUDF: org.apache.spark.sql.UserDefinedFunction = UserDefinedFunction(<function2>,IntegerType,List(IntegerType, ArrayType(IntegerType,false)))
scala> df.withColumn("dist", colsToSum.foldLeft(lit(0))((acc, colName) => accLastUDF(acc, col(colName)))).show
+---------+---------+---------+----+
| e0| e1| e2|dist|
+---------+---------+---------+----+
|[1, 2, 8]|[2, 4, 7]|[4, 5, 8]| 23|
+---------+---------+---------+----+
I have a DataFrame which holds the join of two tables.
I want to compare each field of table1 to that of table2 (the schema is the same).
Columns in Table A = colA1, colB1, colC1 , ...
Columns in Table B = colA2, colB2, colC2, ...
So, I need to filter out the data which satisfies the condition
(colA1 = colA2) AND (colB1 = colB2) AND (colC1 = colC2) and so on.
Since my table has a lot of fields, I tried to build a similar expression.
val filterCols = Seq("colA","colB","colC")
val sq = '"'
val exp = filterCols.map({ x => s"(join_df1($sq${x}1$sq) === join_df1($sq${x}2$sq))" }).mkString(" && ")
Resultant Exp : res28: String = (join_df1("colA1") === join_df1("colA2")) && (join_df1("colB1") === join_df1("colB2")) && (join_df1("colC1") === join_df1("colC2"))
Now when I try to substitute it into the DataFrame filter, it throws an error.
join_df1.filter($exp)
I am not sure whether I am doing it right. I need to find a way to substitute my expression and filter out the values.
Any help is appreciated.
Thanks in advance
This is not valid SQL. Try:
val df = Seq(
("a", "a", "b", "b", "c", "c"),
("a", "A", "b", "B", "c", "C")).toDF("a1", "a2", "b1", "b2", "c1", "c2")
val filterCols = Seq("A", "B", "C")
val exp = filterCols.map(x => s"${x}1 = ${x}2").mkString(" AND ")
df.where(exp)
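If you would rather stay with Column expressions, which is closer to the ===-based string you built, the same predicate can be assembled and reduced with && (a sketch against the toy df above):
import org.apache.spark.sql.functions.col

val colExp = filterCols
  .map(x => col(s"${x}1") === col(s"${x}2")) // one equality per column pair
  .reduce(_ && _)                            // AND them all together

df.where(colExp).show()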
I have an RDD with 3 fields, as shown below.
1,2,6
2,4,6
1,4,9
3,4,7
2,3,8
Now, from the above RDD, I want to get the following RDD.
2,4,6
3,4,7
2,3,8
The resultant RDD does not have rows starting with 1, because 1 appears nowhere in the second field of the input RDD.
OK, if I understood correctly what you want to do, there are two ways:
Split your RDD into two, where the first RDD contains the unique values of the "second field" and the second RDD has the "first value" as a key, then join the RDDs together. The drawback of this approach is that distinct and join are slow operations.
val r: RDD[(String, String, Int)] = sc.parallelize(Seq(
("1", "2", 6),
("2", "4", 6),
("1", "4", 9),
("3", "4", 7),
("2", "3", 8)
))
val uniqueValues: RDD[(String, Unit)] = r.map(x => x._2 -> ()).distinct
val r1: RDD[(String, (String, String, Int))] = r.map(x => x._1 -> x)
val result: RDD[(String, String, Int)] = r1.join(uniqueValues).map {case (_, (x, _)) => x}
result.collect.foreach(println)
If your RDD is relatively small and the set of second values can fit completely in memory on all the nodes, then you can create that in-memory set as a first step, broadcast it to all nodes, and then just filter your RDD:
val r: RDD[(String, String, Int)] = sc.parallelize(Seq(
("1", "2", 6),
("2", "4", 6),
("1", "4", 9),
("3", "4", 7),
("2", "3", 8)
))
val uniqueValues = sc.broadcast(r.map(x => x._2).distinct.collect.toSet)
val result: RDD[(String, String, Int)] = r.filter(x => uniqueValues.value.contains(x._1))
result.collect.foreach(println)
Both examples output:
(2,4,6)
(2,3,8)
(3,4,7)
I have the following data:
val RDDApp = sc.parallelize(List("A", "B", "C"))
val RDDUser = sc.parallelize(List(1, 2, 3))
val RDDInstalled = sc.parallelize(List((1, "A"), (1, "B"), (2, "B"), (2, "C"), (3, "A"))).groupByKey
val RDDCart = RDDUser.cartesian(RDDApp)
I want to map this data so that I have an RDD of tuples of (userId, Boolean indicating whether the letter is installed for that user). I thought I found a solution with this:
val results = RDDCart.map (entry =>
(entry._1, RDDInstalled.lookup(entry._1).contains(entry._2))
)
If I call results.first, I get org.apache.spark.SparkException: SPARK-5063. I see the problem with the action (lookup) inside the mapping function, but I do not know how to work around it so that I get the same result.
Just join and mapValues:
RDDCart.join(RDDInstalled).mapValues{case (x, xs) => xs.toSeq.contains(x)}
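If you also want to keep which letter each flag refers to, a small variation of the same join (a sketch, reusing the names from the question; flagged is just an illustrative name) flattens the pair instead of using mapValues:
// (userId, letter, installed?) instead of just (userId, Boolean)
val flagged = RDDCart
  .join(RDDInstalled)  // (userId, (letter, lettersForUser))
  .map { case (user, (letter, letters)) => (user, letter, letters.toSeq.contains(letter)) }

flagged.collect().foreach(println)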