How to create a data frame in a for loop with the variable that is iterating in loop - scala

So I have a huge data frame which is a combination of individual tables; it has an identifier column at the end that specifies the table number, as shown below:
+------+------+-----+-----------+
| col1 | col2 | ... | table_num |
+------+------+-----+-----------+
| x    | y    |     | 1         |
| a    | b    |     | 1         |
| .    | .    |     | .         |
| .    | .    |     | .         |
| q    | p    |     | 2         |
+------+------+-----+-----------+
(original table)
I have to split this into multiple smaller dataframes based on table_num. The number of tables combined to create it is pretty large, so it's not feasible to create each disjoint subset dataframe individually. I was thinking that a for loop iterating from the min to the max value of table_num could achieve this task, but I can't seem to do it; any help is appreciated.
This is what I came up with
for (x < min(table_num) to max(table_num)) {
var df(x)= spark.sql("select * from df1 where state = x")
df(x).collect()
but I don't think the declaration is right.
So essentially what I need is DFs that look like this:
+------+------+-----+-----------+
| col1 | col2 | ... | table_num |
+------+------+-----+-----------+
| x    | y    |     | 1         |
| a    | b    |     | 1         |
+------+------+-----+-----------+
+------+------+-----+-----------+
| col1 | col2 | ... | table_num |
+------+------+-----+-----------+
| xx   | xy   |     | 2         |
| aa   | bb   |     | 2         |
+------+------+-----+-----------+
+------+------+-----+-----------+
| col1 | col2 | ... | table_num |
+------+------+-----+-----------+
| xxy  | yyy  |     | 3         |
| aaa  | bbb  |     | 3         |
+------+------+-----+-----------+
... and so on ...
(how I would like the DataFrames split)

In Spark, Arrays can hold almost any data type. When declared as vars, you can dynamically add and remove elements from them. Below I first isolate the table nums into their own array so that I can easily iterate through them. After isolating them, I go through a while loop to add each table as a separate element of the DF holder array. To query the elements of the array, use DFHolderArray(n - 1), where n is the 1-based position you want (so DFHolderArray(0) is the first element).
import org.apache.spark.sql.functions.lit

//This will turn the distinct table nums into a queriable (this is 100% a word) array
val tableIDArray = inputDF.selectExpr("table_num").distinct.rdd.map(x => x.mkString.toInt).collect

//Build the iterator
var iterator = 1

//Holders for the DF and the transformation step
var tempDF = spark.sql("select 'foo' as bar")
var interimDF = tempDF

//This will be an array of dataframes
var DFHolderArray: Array[org.apache.spark.sql.DataFrame] = Array(tempDF)

//Loop while you have not reached the end of the array
while (iterator <= tableIDArray.length) {
  //Call the table num that is stored in that location of the array
  tempDF = spark.sql("select * from df1 where state = '" + tableIDArray(iterator - 1) + "'")
  //Fluff
  interimDF = tempDF.withColumn("User_Name", lit("Stack_Overflow"))
  //If logic to overwrite or append to the DF holder array
  DFHolderArray = if (iterator == 1) {
    Array(interimDF)
  } else {
    DFHolderArray ++ Array(interimDF)
  }
  iterator = iterator + 1
}

//To query the data
DFHolderArray(0).show(10, false)
DFHolderArray(1).show(10, false)
DFHolderArray(2).show(10, false)
//....
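For comparison, the same holder array could be built without the mutable while loop by mapping over tableIDArray directly (a minimal sketch, assuming the same registered table df1; the name dfHolder is just illustrative):
import org.apache.spark.sql.functions.lit

// Loop-free equivalent: one filtered, annotated frame per distinct table num
val dfHolder: Array[org.apache.spark.sql.DataFrame] =
  tableIDArray.map { id =>
    spark.sql(s"select * from df1 where state = '$id'")
      .withColumn("User_Name", lit("Stack_Overflow"))
  }

dfHolder(0).show(10, false)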

The approach is to collect all unique keys and build the respective data frames. I added some functional flavor to it.
Sample dataset:
name,year,country,id
Bayern Munich,2014,Germany,7747
Bayern Munich,2014,Germany,7747
Bayern Munich,2014,Germany,7746
Borussia Dortmund,2014,Germany,7746
Borussia Mönchengladbach,2014,Germany,7746
Schalke 04,2014,Germany,7746
Schalke 04,2014,Germany,7753
Lazio,2014,Germany,7753
Code:
import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.functions.col

val df = spark.read.format(source = "csv")
  .option("header", true)
  .option("delimiter", ",")
  .option("inferSchema", true)
  .load("groupby.dat")

import spark.implicits._

// collect the unique keys
val uniqueIds = df.select("id").distinct().map(x => x.mkString.toInt).collect()

// List buffer to hold the separate data frames
var dataframeList: ListBuffer[org.apache.spark.sql.DataFrame] = ListBuffer()
println(uniqueIds.toList)

// filter the data for each key into its own data frame
uniqueIds.foreach(x => {
  val tempDF = df.filter(col("id") === x)
  dataframeList += tempDF
})

// show the individual data frames
for (tempDF1 <- dataframeList) {
  tempDF1.show()
}
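If lookup by key is more convenient than positional access, the same frames can be collected into a Map instead (a small variation on the answer above, not part of the original):
import org.apache.spark.sql.functions.col

// Map each unique id to its filtered frame so it can be fetched by key
val dataframeMap: Map[Int, org.apache.spark.sql.DataFrame] =
  uniqueIds.map(id => id -> df.filter(col("id") === id)).toMap

// e.g. the sub-frame for id 7746 from the sample data
dataframeMap(7746).show()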

One approach would be to write the DataFrame as partitioned Parquet files and read them back into a Map, as shown below:
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  ("a", "b", 1), ("c", "d", 1), ("e", "f", 1),
  ("g", "h", 2), ("i", "j", 2)
).toDF("c1", "c2", "table_num")

val filePath = "/path/to/parquet/files"
df.write.partitionBy("table_num").parquet(filePath)

val tableNumList = df.select("table_num").distinct.map(_.getAs[Int](0)).collect
// tableNumList: Array[Int] = Array(1, 2)

val dfMap = ( for { n <- tableNumList } yield
    (n, spark.read.parquet(s"$filePath/table_num=$n").withColumn("table_num", lit(n)))
  ).toMap
To access the individual DataFrames from the Map:
dfMap(1).show
// +---+---+---------+
// | c1| c2|table_num|
// +---+---+---------+
// | a| b| 1|
// | c| d| 1|
// | e| f| 1|
// +---+---+---------+
dfMap(2).show
// +---+---+---------+
// | c1| c2|table_num|
// +---+---+---------+
// | g| h| 2|
// | i| j| 2|
// +---+---+---------+
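As a quick sanity check, the whole Map can also be iterated, for example (just a usage sketch):
// Print the row count of each per-table DataFrame in the Map
dfMap.foreach { case (n, tableDF) =>
  println(s"table_num = $n -> ${tableDF.count} rows")
}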

Related

Maximum of some specific columns in a spark scala dataframe

I have a dataframe like this.
+---+---+---+---+
| M| c2| c3| d1|
+---+---+---+---+
| 1|2_1|4_3|1_2|
| 2|3_4|4_5|1_2|
+---+---+---+---+
I have to transform this df so it looks like the one below. Here, c_Max = max(c2, c3) after splitting on _, i.e. all the columns (c2 and c3) have to be split on _ and then the max taken.
In the actual scenario, I have 50 columns, i.e. c2, c3, ..., c50, and need to take the max across them.
+---+---+---+---+------+
| M| c2| c3| d1|c_Max |
+---+---+---+---+------+
| 1|2_1|4_3|1_2| 4 |
| 2|3_4|4_5|1_2| 5 |
+---+---+---+---+------+
Here is one way using expr and built-in array functions for Spark >= 2.4.0:
import spark.implicits._
import org.apache.spark.sql.functions.{expr, array_max, array}

val df = Seq(
  (1, "2_1", "3_4", "1_2"),
  (2, "3_4", "4_5", "1_2")
).toDF("M", "c2", "c3", "d1")

// get the max value of each c column after splitting on _
val c_cols = df.columns.filter(_.startsWith("c")).map{ c =>
  expr(s"array_max(cast(split(${c}, '_') as array<int>))")
}

df.withColumn("max_c", array_max(array(c_cols:_*))).show
Output:
+---+---+---+---+-----+
| M| c2| c3| d1|max_c|
+---+---+---+---+-----+
| 1|2_1|3_4|1_2| 4|
| 2|3_4|4_5|1_2| 5|
+---+---+---+---+-----+
For older versions, use the following code:
import org.apache.spark.sql.functions.{split, col, when, greatest}

// for each c column, split on _ and take the larger of the two values
val c_cols = df.columns.filter(_.startsWith("c")).map{ c =>
  val c_ar = split(col(c), "_").cast("array<int>")
  when(c_ar.getItem(0) > c_ar.getItem(1), c_ar.getItem(0)).otherwise(c_ar.getItem(1))
}

df.withColumn("max_c", greatest(c_cols:_*)).show
Use the greatest function:
import spark.implicits._
import org.apache.spark.sql.functions.{split, col, greatest}

val df = Seq(
  (1, "2_1", "3_4", "1_2"),
  (2, "3_4", "4_5", "1_2")
).toDF("M", "c2", "c3", "d1")

// take all `c` columns and split each on `_` to get the individual values
val c_cols = df.columns.filter(_.startsWith("c"))
  .flatMap{ c =>
    Seq(split(col(c), "_").getItem(0).cast("int"),
        split(col(c), "_").getItem(1).cast("int"))
  }

// apply the greatest function across all of the extracted values
val c_max = greatest(c_cols: _*)

// add the new column
df.withColumn("c_Max", c_max).show()
Gives:
+---+---+---+---+-----+
| M| c2| c3| d1|c_Max|
+---+---+---+---+-----+
| 1|2_1|3_4|1_2| 4|
| 2|3_4|4_5|1_2| 5|
+---+---+---+---+-----+
In Spark >= 2.4.0, you can use the array_max function and write code that works even with columns containing more than 2 values. The idea is to start by concatenating all the columns into one concat column. For that, I use concat_ws on an array of all the columns I want to concatenate, which I obtain with array(cols.map(col) :_*). Then I split the resulting string to get one big array of strings containing all the values of all the columns, cast it to an array of ints, and call array_max on it.
import spark.implicits._
import org.apache.spark.sql.functions.{array, array_max, col, concat_ws, split}
import org.apache.spark.sql.types.{ArrayType, IntegerType}

val cols = (2 to 50).map("c" + _)
val result = df
  .withColumn("concat", concat_ws("_", array(cols.map(col) :_*)))
  .withColumn("array_of_ints", split('concat, "_").cast(ArrayType(IntegerType)))
  .withColumn("c_max", array_max('array_of_ints))
  .drop("concat", "array_of_ints")
In spark < 2.4, you can define array_max yourself like this:
val array_max = udf((s : Seq[Int]) => s.max)
The previous code does not need to be modified. Note, however, that UDFs can be slower than predefined Spark SQL functions.
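Put together, the pre-2.4 variant might look like this (a sketch assuming the same df with columns c2 ... c50 as above):
import org.apache.spark.sql.functions.{array, col, concat_ws, split, udf}
import org.apache.spark.sql.types.{ArrayType, IntegerType}

// Hand-rolled stand-in for the built-in array_max (Spark < 2.4)
val array_max = udf((s: Seq[Int]) => s.max)

val cols = (2 to 50).map("c" + _)
val result = df
  .withColumn("concat", concat_ws("_", array(cols.map(col): _*)))
  .withColumn("array_of_ints", split(col("concat"), "_").cast(ArrayType(IntegerType)))
  .withColumn("c_max", array_max(col("array_of_ints")))
  .drop("concat", "array_of_ints")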

Spark dataframe - Replace tokens of a common string with column values for each row using scala

I have a dataframe with 3 columns - number (Integer), Name (String), Color (String). Below is the result of df.show with repartition option.
val df = sparkSession.read.format("csv")
  .option("header", "true")
  .option("inferschema", "true")
  .option("delimiter", ",")
  .option("decoding", "utf8")
  .load(fileName)
  .repartition(5)
  .toDF()
+------+------+------+
|Number| Name| Color|
+------+------+------+
| 4|Orange|Orange|
| 3| Apple| Green|
| 1| Apple| Red|
| 2|Banana|Yellow|
| 5| Apple| Red|
+------+------+------+
My objective is to create a list of strings, one per row, by replacing the tokens in a common dynamic string (which I pass as a parameter to the method) with the column values.
For example: commonDynamicString = Column.Name with Column.Color color
In this string, my tokens are Column.Name and Column.Color. I need to replace these tokens, for every row, with the respective values in those columns. Note: this string can change dynamically, hence hardcoding won't work.
I don't want to use RDD unless no other option is available with dataframe.
Below are the approaches I tried but couldn't achieve my objective.
Option 1:
val a = df.foreach(t => {
  finalValue = commonString.replace("Column.Number", t.getAs[Any]("Number").toString())
    .replace("DF.Name", t.getAs("Name"))
    .replace("DF.Color", t.getAs("Color"))
  println("finalValue: " + finalValue)
})
With this approach, finalValue prints as expected. However, I cannot create a ListBuffer or pass the final string from here as a list to another function, since foreach returns Unit and Spark throws an error.
Option 2: I am thinking about this option but would need some guidance to understand whether foldLeft, window, or any other Spark functions can be used to create a 4th column called "Final" using withColumn, together with a UDF where I extract all the tokens using regex pattern matching - "Column.\w+" - and do the replace operation for each token.
+------+------+------+--------------------------+
|Number| Name| Color| Final |
+------+------+------+--------------------------+
| 4|Orange|Orange|Orange with orange color |
| 3| Apple| Green|Apple with Green color |
| 1| Apple| Red|Apple with Red color |
| 2|Banana|Yellow|Banana with Yellow color |
| 5| Apple| Red|Apple with Red color |
+------+------+------+--------------------------+
Can someone help me with this problem and also to let me know if I am thinking in the right direction to use spark for handling large datasets?
Thanks!
If I understand your requirement correctly, you can create a column method, say, parseStatement which takes a String-type statement and returns a Column with the following steps:
Parse the input statement to count number of tokens
Generate a Regex pattern in the form of ^(.*?)(token1)(.*?)(token2) ... (.*?)$
Apply pattern matching to assemble a colList consisting of lit(g1), col(g2), lit(g3), col(g4), ..., where the g?s are the extracted Regex groups
Concatenate the Column-type items
Here's the sample code:
import spark.implicits._
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._

def parseStatement(stmt: String): Column = {
  val token = "Column."
  val tokenPattern = """Column\.(\w+)"""
  val literalPattern = "(.*?)"
  val colCount = stmt.sliding(token.length).count(_ == token)
  val pattern = (0 to colCount * 2).map{
    case i if (i % 2 == 0) => literalPattern
    case _ => tokenPattern
  }.mkString

  val colList = ("^" + pattern + "$").r.findAllIn(stmt).
    matchData.toList.flatMap(_.subgroups).
    zipWithIndex.map{
      case (g, i) if (i % 2 == 0) => lit(g)
      case (g, i) => col(g)
    }

  concat(colList: _*)
}

val df = Seq(
  (4, "Orange", "Orange"),
  (3, "Apple", "Green"),
  (1, "Apple", "Red"),
  (2, "Banana", "Yellow"),
  (5, "Apple", "Red")
).toDF("Number", "Name", "Color")

val statement = "Column.Name with Column.Color color"

df.withColumn("Final", parseStatement(statement)).
  show(false)
// +------+------+------+------------------------+
// |Number|Name |Color |Final |
// +------+------+------+------------------------+
// |4 |Orange|Orange|Orange with Orange color|
// |3 |Apple |Green |Apple with Green color |
// |1 |Apple |Red |Apple with Red color |
// |2 |Banana|Yellow|Banana with Yellow color|
// |5 |Apple |Red |Apple with Red color |
// +------+------+------+------------------------+
Note that concat takes Column-type parameters, hence the need for col() for column values and lit() for literals.

Doing left outer join on multiple data frames in spark scala

I am a newbie in Spark. I am trying to achieve the below use case using Scala.
-DataFrame 1
| col A | col B |
-----------------
|   1   |   a   |
|   2   |   a   |
|   3   |   a   |
-DataFrame 2
| col A | col B |
-----------------
|   1   |   b   |
|   3   |   b   |
-DataFrame 3
| col A | col B |
-----------------
|   2   |   c   |
|   3   |   c   |
The final output frame should be:
| col A | col B |
-----------------
|   1   | a,b   |
|   2   | a,c   |
|   3   | a,b,c |
The number of frames is not limited to 3; it can be any number less than 100. So I am using a foreach in which I am printing each of the data frames.
Can someone please help me with how to create the final data frame with output in the above format for N data frames?
I appreciate your help.
I saw this question today. I suggest that you use Python to solve it; it's easier to write than Scala. Here it is:
from pyspark.sql import SQLContext
from pyspark.sql.functions import concat_ws

d1 = sc.parallelize([(1, "a"), (2, "a"), (3, "a")]).toDF().toDF("Col_A", "Col_B")
d2 = sc.parallelize([(1, "b"), (2, "b")]).toDF().toDF("Col_A", "Col_B")
d3 = sc.parallelize([(2, "c"), (3, "c")]).toDF().toDF("Col_A", "Col_B")

d4 = d1.join(d2, 'Col_A', 'left').join(d3, 'Col_A', 'left') \
    .select(d1.Col_A.alias("col A"), concat_ws(',', d1.Col_B, d2.Col_B, d3.Col_B).alias("col B"))
d4.show()
+-----+-----+
|col A|col B|
+-----+-----+
|    1|  a,b|
|    2|a,b,c|
|    3|  a,c|
+-----+-----+
You see the result!
You can use foldLeft to iteratively merge the data with an outer join:
import spark.implicits._
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions._

val df1 = Seq((1, "a"), (2, "a"), (3, "a")).toDF("Col A", "Col B")
val df2 = Seq((1, "b"), (2, "b")).toDF("Col A", "Col B")
val df3 = Seq((2, "c"), (3, "c")).toDF("Col A", "Col B")

val dfs = Seq(df2, df3)
val bs = (0 to dfs.size).map(i => s"Col B $i")

dfs.foldLeft(df1)(
  (acc, df) => acc.join(df, Seq("Col A"), "fullouter")
).toDF("Col A" +: bs: _*).select($"Col A", array(bs map col: _*)).map {
  case Row(a: Int, bs: Seq[_]) =>
    // Drop nulls and concat
    (a, bs.filter(_ != null).map(_.toString).mkString(","))
}.toDF("Col A", "Col B").show
// +-----+-----+
// |Col A|Col B|
// +-----+-----+
// | 1| a,b|
// | 3| a,c|
// | 2|a,b,c|
// +-----+-----+
But if you really think
it can be any number less than 100
then it is just unrealistic. join is among the most expensive operations in Spark, and even with all the optimizer improvements, this approach is just not going to work at that scale.
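If the number of frames really does approach 100, one common workaround (not part of the answer above, just a sketch assuming every frame shares the same "Col A"/"Col B" schema) is to union the frames once and aggregate per key instead of joining repeatedly:
import org.apache.spark.sql.functions.{col, collect_list, concat_ws, sort_array}

// One union over all frames, then gather the B values per key
val unioned = Seq(df1, df2, df3).reduce(_ union _)
val combined = unioned
  .groupBy(col("Col A"))
  .agg(concat_ws(",", sort_array(collect_list(col("Col B")))).as("Col B"))

combined.show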

How can I rank items in a RDD to build a streak?

I have an RDD containing data like this: (downloadId: String, date: LocalDate, downloadCount: Int). Each (downloadId, date) pair is unique, and the download-count is for that date.
What I've been trying to accomplish is to get the number of consecutive days (going backwards from the current date) that a download-id was in the top 100 of all download-ids. So if a given download was in the top 100 today, yesterday and the day before, then its streak would be 3.
In SQL, I guess this could be solved using window functions. I've seen similar questions like this: How to add a running count to rows in a 'streak' of consecutive days
(I'm rather new to Spark and wasn't sure how to map-reduce an RDD to even begin solving a problem like this.)
Some more information: the dates are the last 30 days, and there are approximately 4M unique download-ids per day.
I suggest you work with DataFrames, as they are much easier to use than RDDs. Leo's answer is shorter, but I couldn't find where it was filtering for the top 100 downloads, so I decided to post my answer as well. It does not depend on window functions, but it is bounded by the number of days in the past you want the streak to cover. Since you said you only use the last 30 days' data, that should not be a problem.
As a first step, I wrote some code to generate a DF similar to what you described. You don't need to run this first block (if you do, reduce the number of rows unless you have a cluster to try it on; it's heavy on memory). You can see how to transform the RDD (theData) into a DF (baseData). You should define a schema for it, like I did.
import java.time.LocalDate
import scala.util.Random
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import sparkSession.implicits._

val maxId = 10000
val numRows = 15000000
val lastDate = LocalDate.of(2017, 12, 31)

// Generates the data. As a convenience for working with Dataframes, I converted the dates to epoch days.
val theData = sc.parallelize(1.to(numRows).map{
  _ => {
    val id = Random.nextInt(maxId)
    val nDownloads = Random.nextInt((id / 1000 + 1))
    Row(id, lastDate.minusDays(Random.nextInt(30)).toEpochDay, nDownloads)
  }
})

// Working with Dataframes is much simpler, so I'll generate a DF named baseData from the RDD
val schema = StructType(
  StructField("downloadId", IntegerType, false) ::
  StructField("date", LongType, false) ::
  StructField("downloadCount", IntegerType, false) :: Nil)

val baseData = sparkSession.sqlContext.createDataFrame(theData, schema)
  .groupBy($"downloadId", $"date")
  .agg(sum($"downloadCount").as("downloadCount"))
  .cache()
Now you have the data you want in a DF called baseData. The next step is to restrict it to the top 100 for each day - you should discard the data you don't need before doing any additional heavy transformations.
import org.apache.spark.sql.types._
import org.apache.spark.sql.{DataFrame, Row}

def filterOnlyTopN(data: DataFrame, n: Int = 100): DataFrame = {
  // For each day in the data, let's find the cutoff # of downloads to make it into the top N
  val getTopNCutoff = udf((downloads: Seq[Long]) => {
    val reverseSortedDownloads = downloads.sortBy{- _ }
    if (reverseSortedDownloads.length >= n)
      reverseSortedDownloads.drop(n - 1).head
    else
      reverseSortedDownloads.last
  })

  val topNLimitsByDate = data.groupBy($"date").agg(collect_set($"downloadCount").as("downloads"))
    .select($"date", getTopNCutoff($"downloads").as("cutoff"))

  // And then, let's throw away the records below the top 100
  data.join(topNLimitsByDate, Seq("date"))
    .filter($"downloadCount" >= $"cutoff")
    .drop("cutoff", "downloadCount")
}

val relevantData = filterOnlyTopN(baseData)
Now that you have the relevantData DF with only the data you need, you can calculate the streak for each id. I have left the ids with no streaks as streak 0; you can filter those out with streaks.filter($"streak" > lit(0)), as shown after the output below.
def getStreak(df: DataFrame, fromDate: Long): DataFrame = {
  val calcStreak = udf((dateList: Seq[Long]) => {
    if (!dateList.contains(fromDate))
      0
    else {
      val relevantDates = dateList.sortBy{- _ }  // Order the dates descending
        .dropWhile(_ != fromDate)                // And drop everything until we find the starting day we are interested in
      if (relevantDates.length == 1)             // If there's only one day left, it's a one day streak
        1
      else                                       // Otherwise, let's count the streak length (this works if no dates are left, too - but not with only 1 day)
        relevantDates.sliding(2)                 // Take days by pairs
          .takeWhile{ twoDays => twoDays(1) == twoDays(0) - 1 } // While the pair is of consecutive days
          .length + 1                            // And the streak will be the number of consecutive pairs + 1 (the initial day of the streak)
    }
  })

  df.groupBy($"downloadId").agg(collect_list($"date").as("dates"))
    .select($"downloadId", calcStreak($"dates").as("streak"))
}

val streaks = getStreak(relevantData, lastDate.toEpochDay)
streaks.show()
+------------+--------+
| downloadId | streak |
+------------+--------+
| 8086 | 0 |
| 9852 | 0 |
| 7253 | 0 |
| 9376 | 0 |
| 7833 | 0 |
| 9465 | 1 |
| 7880 | 0 |
| 9900 | 1 |
| 7993 | 0 |
| 9427 | 1 |
| 8389 | 1 |
| 8638 | 1 |
| 8592 | 1 |
| 6397 | 0 |
| 7754 | 1 |
| 7982 | 0 |
| 7554 | 0 |
| 6357 | 1 |
| 7340 | 0 |
| 6336 | 0 |
+------------+--------+
And there you have the streaks DF with the data you need.
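For example, to keep only the ids with an active streak (the filter mentioned above the getStreak code):
// Drop the ids whose streak is 0, i.e. those not in the top 100 on the reference date
val activeStreaks = streaks.filter($"streak" > lit(0))
activeStreaks.show()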
Using a similar approach to the one in the linked PostgreSQL answer, you can apply a Window function in Spark as well. Spark's DataFrame API doesn't have encoders for java.time.LocalDate, so you'll need to convert it to, say, java.sql.Date.
Here are the steps: first, transform the RDD to a DataFrame with a supported date format; next, create a UDF to compute the baseDate, which takes a date and a per-id chronological row number (generated using a Window function) as parameters. Another Window function is applied to calculate the per-id, per-baseDate row number, which is the wanted streak value:
import java.time.LocalDate
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import spark.implicits._

val rdd = sc.parallelize(Seq(
  (1, LocalDate.parse("2017-12-13"), 2),
  (1, LocalDate.parse("2017-12-16"), 1),
  (1, LocalDate.parse("2017-12-17"), 1),
  (1, LocalDate.parse("2017-12-18"), 2),
  (1, LocalDate.parse("2017-12-20"), 1),
  (1, LocalDate.parse("2017-12-21"), 3),
  (2, LocalDate.parse("2017-12-15"), 2),
  (2, LocalDate.parse("2017-12-16"), 1),
  (2, LocalDate.parse("2017-12-19"), 1),
  (2, LocalDate.parse("2017-12-20"), 1),
  (2, LocalDate.parse("2017-12-21"), 2),
  (2, LocalDate.parse("2017-12-23"), 1)
))

val df = rdd.map{ case (id, date, count) => (id, java.sql.Date.valueOf(date), count) }.
  toDF("downloadId", "date", "downloadCount")

def baseDate = udf( (d: java.sql.Date, n: Long) =>
  new java.sql.Date(new java.util.Date(d.getTime).getTime - n * 24 * 60 * 60 * 1000)
)

val dfStreak = df.withColumn("rowNum", row_number.over(
    Window.partitionBy($"downloadId").orderBy($"date")
  )
).withColumn(
  "baseDate", baseDate($"date", $"rowNum")
).select(
  $"downloadId", $"date", $"downloadCount", row_number.over(
    Window.partitionBy($"downloadId", $"baseDate").orderBy($"date")
  ).as("streak")
).orderBy($"downloadId", $"date")

dfStreak.show
+----------+----------+-------------+------+
|downloadId| date|downloadCount|streak|
+----------+----------+-------------+------+
| 1|2017-12-13| 2| 1|
| 1|2017-12-16| 1| 1|
| 1|2017-12-17| 1| 2|
| 1|2017-12-18| 2| 3|
| 1|2017-12-20| 1| 1|
| 1|2017-12-21| 3| 2|
| 2|2017-12-15| 2| 1|
| 2|2017-12-16| 1| 2|
| 2|2017-12-19| 1| 1|
| 2|2017-12-20| 1| 2|
| 2|2017-12-21| 2| 3|
| 2|2017-12-23| 1| 1|
+----------+----------+-------------+------+

Iterate through rows in DataFrame and transform one to many

As an example in Scala, I have a list and I want every item that matches a condition to appear twice (this may not be the best option for this use case - but it's the idea that counts):
l.flatMap {
  case n if n % 2 == 0 => List(n, n)
  case n => List(n)
}
I would like to do something similar in Spark - iterate over rows in a DataFrame and if a row matches a certain condition then I need to duplicate the row with some modifications in the copy. How can this be done?
For example, if my input is the table below:
| name | age |
|-------|-----|
| Peter | 50 |
| Paul | 60 |
| Mary | 70 |
I want to iterate through the table and test each row against multiple conditions, and for each condition that matches, an entry should be created with the name of the matched condition.
E.g. condition #1 is "age > 60" and condition #2 is "name.length <=4". This should result in the following output:
| name | age |condition|
|-------|-----|---------|
| Paul | 60 | 2 |
| Mary | 70 | 1 |
| Mary | 70 | 2 |
You can filter the dataframe for each matching condition and then finally union all of them.
import spark.implicits._
import org.apache.spark.sql.functions._

val condition1DF = df.filter($"age" > 60).withColumn("condition", lit(1))
val condition2DF = df.filter(length($"name") <= 4).withColumn("condition", lit(2))

val finalDF = condition1DF.union(condition2DF)
you should have your desired output as
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Mary|70 |1 |
|Paul|60 |2 |
|Mary|70 |2 |
+----+---+---------+
I hope the answer is helpful
You can also use a combination of a UDF and explode(), as in the following example:
// set up example data
case class Pers1 (name: String, age: Int)
val d = Seq(Pers1("Peter", 50), Pers1("Paul", 60), Pers1("Mary", 70))
val df = spark.createDataFrame(d)

// conditions logic - as complex as you'd like
// probably should use a Set instead of a Sequence but I digress..
val conditions: (String, Int) => Seq[Int] = { (name, age) =>
  (if (age > 60) Seq(1) else Seq.empty) ++
  (if (name.length <= 4) Seq(2) else Seq.empty)
}

// define the UDF for spark
import org.apache.spark.sql.functions.{udf, col, explode}
val conditionsUdf = udf(conditions)

// explode() works just like flatMap
val result = df.withColumn("condition",
  explode(conditionsUdf(col("name"), col("age"))))
result.show
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Paul| 60| 2|
|Mary| 70| 1|
|Mary| 70| 2|
+----+---+---------+
Here is one way to flatten it with rdd.flatMap:
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
val new_rdd = df.rdd.flatMap(r => {
  val conditions = Seq((1, r.getAs[Int](1) > 60), (2, r.getAs[String](0).length <= 4))
  conditions.collect{ case (i, c) if c => Row.fromSeq(r.toSeq :+ i) }
})

val new_schema = StructType(df.schema :+ StructField("condition", IntegerType, true))
spark.createDataFrame(new_rdd, new_schema).show
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Paul| 60| 2|
|Mary| 70| 1|
|Mary| 70| 2|
+----+---+---------+