Let's say I have the following dataframe:
/*
+---------+--------+----------+--------+
|a |b | c |d |
+---------+--------+----------+--------+
| bob| -1| 5| -1|
| alice| -1| -1| -1|
+---------+--------+----------+--------+
*/
I want to remove columns that contain only -1 in every row (in this case b and d). I found a solution, but when I ran my job I realized it was very inefficient:
private def removeEmptyColumns(df: DataFrame): DataFrame = {
  val types = List("IntegerType", "DoubleType", "LongType")
  val dTypes: Array[(String, String)] = df.dtypes
  dTypes.foldLeft(df)((d, t) => {
    val colType = t._2
    val colName = t._1
    if (types.contains(colType)) {
      if (colType.equals("IntegerType")) {
        if (d.select(colName).filter(col(colName) =!= -1).take(1).length == 0) d.drop(colName)
        else d
      } else if (colType.equals("DoubleType")) {
        if (d.select(colName).filter(col(colName) =!= -1.0).take(1).length == 0) d.drop(colName)
        else d
      } else {
        if (d.select(colName).filter(col(colName) =!= -1).take(1).length == 0) d.drop(colName)
        else d
      }
    } else {
      d
    }
  })
}
Is there a better solution or way to improve my existing code?
(I think this line val count = d.select(colName).distinct.count is the bottleneck)
I am using Spark 2.2 atm.
Many thanks
Instead of counting the number of distinct values, check whether there exists any value other than -1:
d.select(colName).filter(col(colName) =!= -1).take(1).isEmpty
Another approach
Instead of going through the dataframe n times (once for each column), you can collect the statistics in a single pass:
val summary = d.agg(
    max(col1).as(s"${col1}_max"), min(col1).as(s"${col1}_min"),
    max(col2).as(s"${col2}_max"), min(col2).as(s"${col2}_min"),
    ...)
  .first
Then check, for each column, whether its min and max are both equal to -1; such columns can be dropped (see the sketch below for a complete version).
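A minimal sketch of that idea (my own illustration, untested against your data), assuming the candidate columns are limited to the numeric types you already listed; the helper name and the null handling are my own choices:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, max, min}

// Collect min/max for all numeric columns in a single pass over the data,
// then drop every column whose min and max are both -1.
private def removeMinusOneOnlyColumns(df: DataFrame): DataFrame = {
  val numericTypes = Set("IntegerType", "DoubleType", "LongType")
  val candidates = df.dtypes.collect { case (name, tpe) if numericTypes(tpe) => name }

  if (candidates.isEmpty) df
  else {
    val aggs = candidates.flatMap(c => Seq(max(col(c)).as(s"${c}_max"), min(col(c)).as(s"${c}_min")))
    val summary = df.agg(aggs.head, aggs.tail: _*).first()

    val toDrop = candidates.filter { c =>
      val maxVal = summary.getAs[Any](s"${c}_max")
      val minVal = summary.getAs[Any](s"${c}_min")
      // columns that are entirely null are kept; numeric values are compared as doubles
      maxVal != null && minVal != null &&
        maxVal.toString.toDouble == -1.0 && minVal.toString.toDouble == -1.0
    }
    df.drop(toDrop: _*)
  }
}

This still scans the whole dataframe, but only once instead of once per column.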
I have a property table like the one below, in a dataframe:
+-----------------+------------+
|columns-to-rename|cust_id_flag|
+-----------------+------------+
|(e to a),(d to b)|           Y|
+-----------------+------------+
Based on the columns-to-rename value I have to rename the corresponding columns of the main table. If the cust_id flag is Y, I also want to join with the customer table, and in the final output I want to show the hashed column values under the actual (renamed) column names.
val maintab_df = maintable
val cust_df = customertable
The main table and the customer table are joined after renaming the main table column e to a, i.e. on maintable.a = customertable.a.
Here's an example of how to do it:
propertydf.show
+-----------------+------------+
|columns-to-rename|cust_id_flag|
+-----------------+------------+
|(e to a),(d to b)| Y|
+-----------------+------------+
val columns_to_rename = propertydf.head(1)(0).getAs[String]("columns-to-rename")
val cust_id_flag = propertydf.head(1)(0).getAs[String]("cust_id_flag")
val parsed_columns = columns_to_rename.split(",")
.map(c => c.replace("(", "").replace(")", "")
.split(" to "))
// parsed_columns: Array[Array[String]] = Array(Array(e, a), Array(d, b))
val rename_columns = maintab_df.columns.map(c => {
val matched = parsed_columns.filter(p => c == p(0))
if (matched.size != 0)
col(c).as(matched(0)(1).toString)
else
col(c)
})
// rename_columns: Array[org.apache.spark.sql.Column] = Array(e AS `a`, f, c, d AS `b`)
val select_columns = maintab_df.columns.map(c => {
val matched = parsed_columns.filter(p => c == p(0))
if (matched.size != 0)
col(matched(0)(1) + "_hash").as(matched(0)(1).toString)
else
col(c)
})
// select_columns: Array[org.apache.spark.sql.Column] = Array(a_hash AS `a`, f, c, b_hash AS `b`)
val join_cond = parsed_columns.map(_(1))
// join_cond: Array[String] = Array(a, b)
val result = if (cust_id_flag == "Y") {
  maintab_df.select(rename_columns:_*)
    .join(cust_df, join_cond)
    .select(select_columns:_*)
} else {
  maintab_df
}

result.show
+------+---+---+--------+
| a| f| c| b|
+------+---+---+--------+
|*****!| 1| 11| &&&&|
| ****%| 2| 12|;;;;;;;;|
|*****#| 3| 13| \\\\\\|
+------+---+---+--------+
I am trying to map the values of one column in my dataframe to a new value and put it into a new column using a UDF, but I am unable to get the UDF to accept a parameter that isn't also a column. For example, I have a dataframe dfOriginal like this:
+-----------+-----+
|high_scores|count|
+-----------+-----+
| 9| 1|
| 21| 2|
| 23| 3|
| 7| 6|
+-----------+-----+
And I'm trying to get a sense of which bin each numeric value falls into, so I may construct a list of bins using a class like this:
case class Bin(binMax:BigDecimal, binWidth:BigDecimal) {
val binMin = binMax - binWidth
// only one of the two evaluations can include an "or=", otherwise a value could fit in 2 bins
def fitsInBin(value: BigDecimal): Boolean = value > binMin && value <= binMax
def rangeAsString(): String = {
val sb = new StringBuilder()
sb.append(trimDecimal(binMin)).append(" - ").append(trimDecimal(binMax))
sb.toString()
}
}
And then I want to transform my old dataframe like this to make dfBin:
+-----------+-----+---------+
|high_scores|count|bin_range|
+-----------+-----+---------+
| 9| 1| 0 - 10 |
| 21| 2| 20 - 30 |
| 23| 3| 20 - 30 |
| 7| 6| 0 - 10 |
+-----------+-----+---------+
So that I can ultimately get a count of the instances of the bins by calling .groupBy("bin_range").count().
I am trying to generate dfBin by using the withColumn function with a UDF.
Here's the code with the UDF I am attempting to use:
val convertValueToBinRangeUDF = udf((value:String, binList:List[Bin]) => {
val number = BigDecimal(value)
val bin = binList.find( bin => bin.fitsInBin(number)).getOrElse(Bin(BigDecimal(0), BigDecimal(0)))
bin.rangeAsString()
})
val binList = List(Bin(10, 10), Bin(20, 10), Bin(30, 10), Bin(40, 10), Bin(50, 10))
val dfBin = dfOriginal.withColumn("bin_range", convertValueToBinRangeUDF(col("high_scores"), binList))
But it's giving me a type mismatch:
Error:type mismatch;
found : List[Bin]
required: org.apache.spark.sql.Column
val valueCountsWithBin = valuesCounts.withColumn(binRangeCol, convertValueToBinRangeUDF(col(columnName), binList))
Looking at the definition of udf makes me think it should handle the conversion fine, but clearly it doesn't. Any ideas?
The problem is that all parameters of a UDF must be of column type. One solution would be to convert binList into a column and pass it to the UDF, similar to the current code.
However, it is simpler to adjust the UDF slightly and turn it into a def that returns a udf. This way you can easily pass in other non-column data:
def convertValueToBinRangeUDF(binList: List[Bin]) = udf((value:String) => {
val number = BigDecimal(value)
val bin = binList.find( bin => bin.fitsInBin(number)).getOrElse(Bin(BigDecimal(0), BigDecimal(0)))
bin.rangeAsString()
})
Usage:
val dfBin = valuesCounts.withColumn("bin_range", convertValueToBinRangeUDF(binList)(col(columnName)))
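Note that convertValueToBinRangeUDF(binList) is now a plain Scala method that closes over binList and returns a UserDefinedFunction, so the list is shipped to the executors inside the udf's closure and never has to become a Column.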
Try this -
scala> case class Bin(binMax:BigDecimal, binWidth:BigDecimal) {
| val binMin = binMax - binWidth
|
| // only one of the two evaluations can include an "or=", otherwise a value could fit in 2 bins
| def fitsInBin(value: BigDecimal): Boolean = value > binMin && value <= binMax
|
| def rangeAsString(): String = {
| val sb = new StringBuilder()
| sb.append(binMin).append(" - ").append(binMax)
| sb.toString()
| }
| }
defined class Bin
scala> val binList = List(Bin(10, 10), Bin(20, 10), Bin(30, 10), Bin(40, 10), Bin(50, 10))
binList: List[Bin] = List(Bin(10,10), Bin(20,10), Bin(30,10), Bin(40,10), Bin(50,10))
scala> spark.udf.register("convertValueToBinRangeUDF", (value: String) => {
| val number = BigDecimal(value)
| val bin = binList.find( bin => bin.fitsInBin(number)).getOrElse(Bin(BigDecimal(0), BigDecimal(0)))
| bin.rangeAsString()
| })
res13: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,StringType,Some(List(StringType)))
//-- Testing with one record
scala> val dfOriginal = spark.sql(s""" select "9" as `high_scores`, "1" as count """)
dfOriginal: org.apache.spark.sql.DataFrame = [high_scores: string, count: string]
scala> dfOriginal.createOrReplaceTempView("dfOriginal")
scala> val dfBin = spark.sql(s""" select high_scores, count, convertValueToBinRangeUDF(high_scores) as bin_range from dfOriginal """)
dfBin: org.apache.spark.sql.DataFrame = [high_scores: string, count: string ... 1 more field]
scala> dfBin.show(false)
+-----------+-----+---------+
|high_scores|count|bin_range|
+-----------+-----+---------+
|9 |1 |0 - 10 |
+-----------+-----+---------+
Hope this will help.
This is my rule-applying function; the columns mdp_codcat, mdp_idregl and usedRef change according to the data in the array bRef.
def withMdpCodcat(bRef: Broadcast[Array[RefRglSDC]])(dataFrame: DataFrame): DataFrame = {
  var matchRule = false
  var i = 0
  while (i < bRef.value.size && !matchRule) {
    if ((bRef.value(i).sensop.isEmpty || bRef.value(i).sensop.equals(col("signe")))
      && (bRef.value(i).cdopcz.isEmpty || Lib.matchCdopcz(strTail(col("cdopcz")).toString(), bRef.value(i).cdopcz))
      && (bRef.value(i).libope.isEmpty || Lib.matchRule(col("lib_ope").toString(), bRef.value(i).libope))
      && (bRef.value(i).qualib.isEmpty || Lib.matchRule(col("qualif_lib_ope").toString(), bRef.value(i).qualib))) {
      matchRule = true
      dataFrame.withColumn("mdp_codcat", lit(bRef.value(i).codcat))
      dataFrame.withColumn("mdp_idregl", lit(bRef.value(i).idregl))
      dataFrame.withColumn("usedRef", lit("SDC"))
    } else {
      dataFrame.withColumn("mdp_codcat", lit("NOT_CATEGORIZED"))
      dataFrame.withColumn("mdp_idregl", lit("-1"))
      dataFrame.withColumn("usedRef", lit(""))
    }
    i += 1
  }
  dataFrame
}
My dataFrame has the columns "cdenjp", "cdguic", "numcpt", and I want to add the columns "mdp_codcat", "mdp_idregl" and "usedRef"; when a rule matches, they should be filled with the corresponding values from bRef.
Example - my dataframe :
val DF = Seq(("tt", "aa", "bb", "cc"), ("tt1", "aa1", "bb2", "cc3")).toDF("t", "a", "b", "c")
+---+---+---+---+
| t| a| b| c|
+---+---+---+---+
| tt| aa| bb| cc|
|tt1|aa1|bb2|cc3|
+---+---+---+---+
file.text content :
,aa,bb,cc
,aa1,bb2,cc3
tt4,aa4,bb4,cc4
tt1,aa1,,cc6
case class TOTO(a: String, b: String, c: String, d: String)

val text = sc.textFile("file:///home/X176616/file")
val bRef = text.map(row => row.split(",", -1))
  .map(c => TOTO(c(0), c(1), c(2), c(3))).collect().sortBy(_.a)

// pseudocode of what I am trying to do:
def withMdpCodcat(bRef: Broadcast[Array[RefRglSDC]])(dataframe: DataFrame): DataFrame = {
  dataframe.withColumn("mdp_codcat_new", lit("NOT_FOUND")) // init to NOT_FOUND, overwrite in the while loop if a rule matches
  var matchRule = false
  var i = 0
  while (i < bRef.value.size && !matchRule) {
    if ((bRef.value(i).a.isEmpty || bRef.value(i).a.equals(signe))
      && (bRef.value(i).b.isEmpty || Lib.matchCdopcz(col(b), bRef.value(i).b))
      && (bRef.value(i).c.isEmpty || Lib.matchRule(col(c), bRef.value(i).c))) {
      matchRule = true
      dataframe.withColumn("mdp_codcat_new", lit(bRef.value(i).d))
      dataframe.withColumn("mdp_idregl_new", lit(bRef.value(i).e))
    }
    i += 1
  }
  dataframe
}
Finally the df should get the values from bRef when this condition is true:
(bRef.value(i).a.isEmpty || bRef.value(i).a.equals(signe))
  && (bRef.value(i).b.isEmpty || Lib.matchCdopcz(b.substring(1).toInt.toString, bRef.value(i).b))
  && (bRef.value(i).c.isEmpty || Lib.matchRule(c, bRef.value(i).c))
+---+---+---+---+-----------+----------+
|  t|  a|  b|  c|mdp_codcat |mdp_idregl|
+---+---+---+---+-----------+----------+
| tt| aa| bb| cc|cc         |other     |  <- from bRef when a rule matched in the while loop
| ab|aa1|bb2|cc3|cc4        |toto      |
| cd|aa1|bb2|cc3|cc4        |titi      |
|  b| a1| b2| c3|NOT_FOUND  |NOT_FOUND |  <- NOT_FOUND when no rule matched
+---+---+---+---+-----------+----------+
You cannot build a dataframe schema that depends on a runtime value, so I would try to do it in a simpler way. First I'd create the three columns with a default value (withColumn returns a new dataframe, so the calls must be chained):
dataFrame
  .withColumn("mdp_codcat", lit(""))
  .withColumn("mdp_idregl", lit(""))
  .withColumn("usedRef", lit(""))
Then you can use a udf with your broadcasted value:
def mdp_codcat(bRef: Broadcast[Array[RefRglSDC]]) = udf { (field: String) =>
  {
    // Your while and if stuff
    // return your updated data
  }
}
And apply each udf to the corresponding field:
dataframe.withColumn("mdp_codcat_new", mdp_codcat(bRef)(col("mdp_codcat")))
Maybe it can help
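To make that a bit more concrete, here is a minimal sketch of the same idea (my own illustration, not the original author's code). It assumes RefRglSDC exposes the fields used in the question (sensop, cdopcz, libope, qualib, codcat, idregl), that Lib.matchCdopcz and Lib.matchRule work on plain strings, and that dropping the first character stands in for the strTail helper. A single udf returns all three values as a struct, so the rules are evaluated row by row instead of comparing Column objects to strings:

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.sql.functions._

def categorize(bRef: Broadcast[Array[RefRglSDC]]) =
  udf { (signe: String, cdopcz: String, libOpe: String, qualifLibOpe: String) =>
    bRef.value
      .find { r =>
        (r.sensop.isEmpty || r.sensop == signe) &&
        (r.cdopcz.isEmpty || Lib.matchCdopcz(cdopcz.drop(1), r.cdopcz)) &&
        (r.libope.isEmpty || Lib.matchRule(libOpe, r.libope)) &&
        (r.qualib.isEmpty || Lib.matchRule(qualifLibOpe, r.qualib))
      }
      .map(r => (r.codcat, r.idregl, "SDC"))          // first matching rule
      .getOrElse(("NOT_CATEGORIZED", "-1", ""))       // no rule matched
  }

val result = dataFrame
  .withColumn("rule", categorize(bRef)(col("signe"), col("cdopcz"), col("lib_ope"), col("qualif_lib_ope")))
  .withColumn("mdp_codcat", col("rule._1"))
  .withColumn("mdp_idregl", col("rule._2"))
  .withColumn("usedRef", col("rule._3"))
  .drop("rule")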
As an example in Scala, I have a list, and I want every item that matches a condition to appear twice (this may not be the best option for this use case, but it's the idea that counts):
l.flatMap {
case n if n % 2 == 0 => List(n, n)
case n => List(n)
}
I would like to do something similar in Spark - iterate over rows in a DataFrame and if a row matches a certain condition then I need to duplicate the row with some modifications in the copy. How can this be done?
For example, if my input is the table below:
| name | age |
|-------|-----|
| Peter | 50 |
| Paul | 60 |
| Mary | 70 |
I want to iterate through the table and test each row against multiple conditions, and for each condition that matches, an entry should be created with the name of the matched condition.
E.g. condition #1 is "age > 60" and condition #2 is "name.length <=4". This should result in the following output:
| name | age |condition|
|-------|-----|---------|
| Paul | 60 | 2 |
| Mary | 70 | 1 |
| Mary | 70 | 2 |
You can filter a dataframe per matching condition and then union all of them:
import org.apache.spark.sql.functions._
val condition1DF = df.filter($"age" > 60).withColumn("condition", lit(1))
val condition2DF = df.filter(length($"name") <= 4).withColumn("condition", lit(2))
val finalDF = condition1DF.union(condition2DF)
which should give you your desired output:
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Mary|70 |1 |
|Paul|60 |2 |
|Mary|70 |2 |
+----+---+---------+
I hope the answer is helpful
You can also use a combination of a UDF and explode(), like in the following example:
// set up example data
case class Pers1 (name:String,age:Int)
val d = Seq(Pers1("Peter",50), Pers1("Paul",60), Pers1("Mary",70))
val df = spark.createDataFrame(d)
// conditions logic - complex as you'd like
// probably should use a Set instead of Sequence but I digress..
val conditions:(String,Int)=>Seq[Int] = { (name,age) =>
(if(age > 60) Seq(1) else Seq.empty) ++
(if(name.length <=4) Seq(2) else Seq.empty)
}
// define UDF for spark
import org.apache.spark.sql.functions.udf
val conditionsUdf = udf(conditions)
// explode() works just like flatmap
val result = df.withColumn("condition",
explode(conditionsUdf(col("name"), col("age"))))
result.show
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Paul| 60| 2|
|Mary| 70| 1|
|Mary| 70| 2|
+----+---+---------+
Here is one way to flatten it with rdd.flatMap:
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
val new_rdd = (df.rdd.flatMap(r => {
val conditions = Seq((1, r.getAs[Int](1) > 60), (2, r.getAs[String](0).length <= 4))
conditions.collect{ case (i, c) if c => Row.fromSeq(r.toSeq :+ i) }
}))
val new_schema = StructType(df.schema :+ StructField("condition", IntegerType, true))
spark.createDataFrame(new_rdd, new_schema).show
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Paul| 60| 2|
|Mary| 70| 1|
|Mary| 70| 2|
+----+---+---------+
I'm trying to find a way to calculate the median for a given dataframe.
val df = sc.parallelize(Seq(("a",1.0),("a",2.0),("a",3.0),("b",6.0), ("b", 8.0))).toDF("col1", "col2")
+----+----+
|col1|col2|
+----+----+
| a| 1.0|
| a| 2.0|
| a| 3.0|
| b| 6.0|
| b| 8.0|
+----+----+
Now I want to do something like this:
df.groupBy("col1").agg(calcmedian("col2"))
the result should look like this:
+----+------+
|col1|median|
+----+------+
| a| 2.0|
| b| 7.0|
+----+------+
Therefore calcmedian() has to be a UDAF, but the problem is that the "evaluate" method of the UDAF only receives a Row, while I need the whole group to sort the values and return the median...
// Once all entries for a group are exhausted, spark will evaluate to get the final result
def evaluate(buffer: Row) = {...}
Is this possible somehow, or is there another nice workaround? I want to stress that I know how to calculate the median on a dataset with "one group", but I don't want to run that algorithm in a "foreach" loop over the groups, as that would be inefficient!
Thank you!
edit:
that's what I tried so far:
object calcMedian extends UserDefinedAggregateFunction {
// Schema you get as an input
def inputSchema = new StructType().add("col2", DoubleType)
// Schema of the row which is used for aggregation
def bufferSchema = new StructType().add("col2", DoubleType)
// Returned type
def dataType = DoubleType
// Self-explaining
def deterministic = true
// initialize - called once for each group
def initialize(buffer: MutableAggregationBuffer) = {
buffer(0) = 0.0
}
// called for each input record of that group
def update(buffer: MutableAggregationBuffer, input: Row) = {
buffer(0) = input.getDouble(0)
}
// if the function supports partial aggregates, spark might (as an optimization) compute partial results and combine them together
def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
buffer1(0) = input.getDouble(0)
}
// Once all entries for a group are exhausted, spark will evaluate to get the final result
def evaluate(buffer: Row) = {
val tile = 50
var median = 0.0
//PROBLEM: buffer is a Row --> I need DataFrame here???
val rdd_sorted = buffer.sortBy(x => x)
val c = rdd_sorted.count()
if (c == 1){
median = rdd_sorted.first()
}else{
val index = rdd_sorted.zipWithIndex().map(_.swap)
val last = c
val n = (tile/ 100d) * (c*1d)
val k = math.floor(n).toLong
val d = n - k
if( k <= 0) {
median = rdd_sorted.first()
}else{
if (k <= c){
median = index.lookup(last - 1).head
}else{
if(k >= c){
median = index.lookup(last - 1).head
}else{
median = index.lookup(k-1).head + d* (index.lookup(k).head - index.lookup(k-1).head)
}
}
}
}
} //end of evaluate
try this:
import org.apache.spark.sql.functions._
val result = df.groupBy("col1").agg(callUDF("percentile_approx", col("col2"), lit(0.5)))
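A small usage note (my own addition): aliasing the aggregate gives the result column a readable name. Keep in mind that percentile_approx is an approximation that returns one of the observed values, so for small groups it may not interpolate the way a textbook median would.

import org.apache.spark.sql.functions._

val medians = df.groupBy("col1")
  .agg(callUDF("percentile_approx", col("col2"), lit(0.5)).as("median"))

medians.show()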