Is it possible to explode multiple columns into one new column in Spark? I have a dataframe that looks like this:
userId varA varB
1 [0,2,5] [1,2,9]
desired output:
userId bothVars
1 0
1 2
1 5
1 1
1 2
1 9
What I have tried so far:
val explodedDf = df.withColumn("bothVars", explode($"varA")).drop("varA")
.withColumn("bothVars", explode($"varB")).drop("varB")
which doesn't work: the second withColumn overwrites bothVars, so the result only contains the exploded varB values (repeated for every row produced by the first explode) and the varA values are lost. Any suggestions are much appreciated.
You could wrap the two arrays into one and flatten the nested array before exploding it, as shown below:
val df = Seq(
(1, Seq(0, 2, 5), Seq(1, 2, 9)),
(2, Seq(1, 3, 4), Seq(2, 3, 8))
).toDF("userId", "varA", "varB")
df.
select($"userId", explode(flatten(array($"varA", $"varB"))).as("bothVars")).
show
// +------+--------+
// |userId|bothVars|
// +------+--------+
// | 1| 0|
// | 1| 2|
// | 1| 5|
// | 1| 1|
// | 1| 2|
// | 1| 9|
// | 2| 1|
// | 2| 3|
// | 2| 4|
// | 2| 2|
// | 2| 3|
// | 2| 8|
// +------+--------+
Note that flatten is available on Spark 2.4+.
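If you are on a Spark version older than 2.4, where flatten is not available, a minimal alternative sketch is to explode each array column separately and union the results (note that row ordering across the union is not guaranteed):
df.select($"userId", explode($"varA").as("bothVars"))
  .union(df.select($"userId", explode($"varB").as("bothVars")))
  .show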
Use array_union and then the explode function.
scala> df.show(false)
+------+---------+---------+
|userId|varA |varB |
+------+---------+---------+
|1 |[0, 2, 5]|[1, 2, 9]|
|2 |[1, 3, 4]|[2, 3, 8]|
+------+---------+---------+
scala> df
.select($"userId",explode(array_union($"varA",$"varB")).as("bothVars"))
.show(false)
+------+--------+
|userId|bothVars|
+------+--------+
|1 |0 |
|1 |2 |
|1 |5 |
|1 |1 |
|1 |9 |
|2 |1 |
|2 |3 |
|2 |4 |
|2 |2 |
|2 |8 |
+------+--------+
array_union is available in Spark 2.4+. Note that array_union also removes duplicate elements, which is why the repeated 2 for userId 1 and the repeated 3 for userId 2 appear only once in the output above.
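If you need to keep duplicate values (to match the desired output exactly), here is a sketch using concat instead of array_union; concat accepts array columns as of Spark 2.4 as well:
df.select($"userId", explode(concat($"varA", $"varB")).as("bothVars")).show(false)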
I have a dataframe with 100 columns and column names like col1, col2, col3.... I want to apply certain transformations to the column values based on matching conditions. I can store the column names in an array of strings, pass each element of the array to withColumn, and transform the values of that column with a when condition.
But since a DataFrame is immutable, each updated version needs to be stored in a new variable, and that new dataframe has to be passed to withColumn for the next iteration.
Is there any way to create an array of dataframes, so that each new dataframe can be stored as an element of the array and iterated over by index?
Or is there any other way to handle this?
var arr_df : Array[DataFrame] = new Array[DataFrame](60)
--> This throws the error "not found: type DataFrame" (importing org.apache.spark.sql.DataFrame resolves that particular error)
val df(0) = df1.union(df2)
for (i <- 1 to 99) {
  val df(i) = df(i-1).withColumn(col(i), when(col(i) > 0, col(i) + 1).otherwise(col(i)))
}
Here col(i) refers to an element of an array of strings that stores the column names of the original dataframe.
As an example:
scala> val original_df = Seq((1,2,3,4),(2,3,4,5),(3,4,5,6),(4,5,6,7),(5,6,7,8),(6,7,8,9)).toDF("col1","col2","col3","col4")
original_df: org.apache.spark.sql.DataFrame = [col1: int, col2: int ... 2 more fields]
scala> original_df.show()
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
| 1| 2| 3| 4|
| 2| 3| 4| 5|
| 3| 4| 5| 6|
| 4| 5| 6| 7|
| 5| 6| 7| 8|
| 6| 7| 8| 9|
+----+----+----+----+
I want to iterate over 3 columns (col1, col2, col3): if the value in a column is greater than 3, it should be incremented by 1.
Check the code below.
scala> df.show(false)
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|1 |2 |3 |4 |
|2 |3 |4 |5 |
|3 |4 |5 |6 |
|4 |5 |6 |7 |
|5 |6 |7 |8 |
|6 |7 |8 |9 |
+----+----+----+----+
scala> val requiredColumns = df.columns.zipWithIndex.filter(_._2 < 3).map(_._1).toSet
requiredColumns: scala.collection.immutable.Set[String] = Set(col1, col2, col3)
scala> val allColumns = df.columns
allColumns: Array[String] = Array(col1, col2, col3, col4)
scala> val columnExpr = allColumns.map(c => if (requiredColumns(c)) when(col(c) > 3, col(c) + 1).otherwise(col(c)).as(c) else col(c))
scala> df.select(columnExpr:_*).show(false)
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|1 |2 |3 |4 |
|2 |3 |5 |5 |
|3 |5 |6 |6 |
|5 |6 |7 |7 |
|6 |7 |8 |8 |
|7 |8 |9 |9 |
+----+----+----+----+
If I understand you right, you are trying to do a dataframe-wise operation; you don't need to iterate for this. I can show you how it can be done in PySpark; it can probably be carried over to Scala.
from pyspark.sql import functions as F
tst= sqlContext.createDataFrame([(1,7,0),(1,8,4),(1,0,10),(5,1,90),(7,6,0),(0,3,11)],schema=['col1','col2','col3'])
expr = [F.when(F.col(coln)>3,F.col(coln)+1).otherwise(F.col(coln)).alias(coln) for coln in tst.columns if 'col3' not in coln]
tst1= tst.select(*expr)
results:
tst1.show()
+----+----+
|col1|col2|
+----+----+
| 1| 8|
| 1| 9|
| 1| 0|
| 6| 1|
| 8| 7|
| 0| 3|
+----+----+
This should give you the desired result.
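For reference, a minimal Scala sketch of the same select-based approach, assuming tst is a Scala DataFrame with the same columns and that col3 is again excluded:
import org.apache.spark.sql.functions.{col, when}

val expr = tst.columns.filterNot(_ == "col3").map(c =>
  when(col(c) > 3, col(c) + 1).otherwise(col(c)).alias(c))
val tst1 = tst.select(expr: _*)
tst1.show()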
You can iterate over all columns and apply the condition in a single line, as below. Note that this transforms every column, including col4; a sketch restricted to col1, col2 and col3 follows the output.
original_df.select(original_df.columns.map(c => (when(col(c) > lit(3), col(c)+1).otherwise(col(c))).alias(c)):_*).show()
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
| 1| 2| 3| 5|
| 2| 3| 5| 6|
| 3| 5| 6| 7|
| 5| 6| 7| 8|
| 6| 7| 8| 9|
| 7| 8| 9| 10|
+----+----+----+----+
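A sketch of the same idea restricted to col1, col2 and col3, as the question asks, with col4 passed through unchanged:
val toUpdate = Set("col1", "col2", "col3")
val exprs = original_df.columns.map { c =>
  if (toUpdate(c)) when(col(c) > lit(3), col(c) + 1).otherwise(col(c)).alias(c)
  else col(c)
}
original_df.select(exprs: _*).show()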
You can use foldLeft whenever you want to make changes to multiple columns, as below:
val original_df = Seq(
(1,2,3,4),
(2,3,4,5),
(3,4,5,6),
(4,5,6,7),
(5,6,7,8),
(6,7,8,9)
).toDF("col1","col2","col3","col4")
// Select the columns that you want to update (here, all of them; filter this array if only some columns should change)
val columns = original_df.columns
columns.foldLeft(original_df){(acc, colName) =>
acc.withColumn(colName, when(col(colName) > 3, col(colName) + 1).otherwise(col(colName)))
}
.show(false)
Output:
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|1 |2 |3 |5 |
|2 |3 |5 |6 |
|3 |5 |6 |7 |
|5 |6 |7 |8 |
|6 |7 |8 |9 |
|7 |8 |9 |10 |
+----+----+----+----+
I am a newbie in Scala/Spark. I have a dataframe like the one below that I need to split into different chunks of data based on a group ID and process independently in parallel.
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
| 1| 100| 1| A|
| 2| 20B| 0| B|
| 3| 30A| 1| B|
| 4| 40A| 1| B|
| 5| 50A| 1| A|
| 6| 10A| 0| B|
| 7| 200| 1| A|
| 8| 30B| 1| B|
| 9| 400| 0| A|
| 10| 50C| 0| A|
+----+-------+-----+-------+
Step 1: I need to split it into two different dataframes like the ones below. I can use a filter for this, but I am not sure whether (due to the large number of dataframes this will produce) I should save them to ADLS as Parquet or keep them in memory.
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
| 1| 100| 1| A|
| 5| 50A| 1| A|
| 7| 200| 1| A|
| 9| 400| 0| A|
| 10| 50C| 0| A|
+----+-------+-----+-------+
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
| 2| 20B| 0| B|
| 3| 30A| 1| B|
| 4| 40A| 1| B|
| 6| 10A| 0| B|
| 8| 30B| 1| B|
+----+-------+-----+-------+
Step 2: Process each dataframe independently, in parallel, and obtain independently processed dataframes.
To give some context:
The number of groupID values will be high, so they cannot be hardcoded.
The processing of each dataframe should ideally happen in parallel.
I am asking for a brief idea of how to proceed: I have seen .par.foreach, but it is not clear to me how to apply it to a dynamic number of dataframes, how to store them independently, or whether it is the most efficient way.
Check the code below.
scala> df.show(false)
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
|1 |100 |1 |A |
|2 |20B |0 |B |
|3 |30A |1 |B |
|4 |40A |1 |B |
|5 |50A |1 |A |
|6 |10A |0 |B |
|7 |200 |1 |A |
|8 |30B |1 |B |
|9 |400 |0 |A |
|10 |50C |0 |A |
+----+-------+-----+-------+
Get the distinct groupID values from the dataframe.
scala> val groupIds = df.select($"groupID").distinct.as[String].collect // Get distinct group ids.
groupIds: Array[String] = Array(B, A)
Use .par for parallel processing. You need to add your logic inside map.
scala> groupIds.par.map(groupid => df.filter($"groupID" === lit(groupid))).foreach(_.show(false)) // Add your logic (e.g. saving) inside the map function, not foreach; here foreach just shows each dataframe's content as an example.
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
|2 |20B |0 |B |
|3 |30A |1 |B |
|4 |40A |1 |B |
|6 |10A |0 |B |
|8 |30B |1 |B |
+----+-------+-----+-------+
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
|1 |100 |1 |A |
|5 |50A |1 |A |
|7 |200 |1 |A |
|9 |400 |0 |A |
|10 |50C |0 |A |
+----+-------+-----+-------+
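If the goal is to persist each group (for example as Parquet in ADLS), a minimal sketch of the save logic inside the parallel loop, using a hypothetical outputBasePath, could look like this:
val outputBasePath = "abfss://container@account.dfs.core.windows.net/output" // hypothetical path
groupIds.par.foreach { groupid =>
  df.filter($"groupID" === lit(groupid))
    .write
    .mode("overwrite")
    .parquet(s"$outputBasePath/groupID=$groupid")
}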
I have an input Spark dataframe named df as follows:
+---------------+---+---+---+-----------+
|Main_CustomerID| P1| P2| P3|Total_Count|
+---------------+---+---+---+-----------+
| 725153| 1| 0| 2| 3|
| 873008| 0| 0| 3| 3|
| 625109| 1| 1| 0| 2|
+---------------+---+---+---+-----------+
Here, Total_Count is the sum of P1, P2, and P3, where P1, P2, P3 are the product names. I need to find the frequency of each product by dividing each product value by its Total_Count. I need to create a new Spark dataframe named frequencyTable as follows:
+---------------+------------------+---+------------------+-----------+
|Main_CustomerID| P1| P2| P3|Total_Count|
+---------------+------------------+---+------------------+-----------+
| 725153|0.3333333333333333|0.0|0.6666666666666666| 3|
| 873008| 0.0|0.0| 1.0| 3|
| 625109| 0.5|0.5| 0.0| 2|
+---------------+------------------+---+------------------+-----------+
I have done this using Scala as:
val df_columns = df.columns.toSeq
var frequencyTable = df
for (index <- df_columns) {
  if (index != "Main_CustomerID" && index != "Total_Count") {
    frequencyTable = frequencyTable.withColumn(index, df.col(index) / df.col("Total_Count"))
  }
}
But I would prefer to avoid this for loop because my df is large. What is an optimized solution?
If you have a dataframe such as
val df = Seq(
("725153", 1, 0, 2, 3),
("873008", 0, 0, 3, 3),
("625109", 1, 1, 0, 2)
).toDF("Main_CustomerID", "P1", "P2", "P3", "Total_Count")
+---------------+---+---+---+-----------+
|Main_CustomerID|P1 |P2 |P3 |Total_Count|
+---------------+---+---+---+-----------+
|725153 |1 |0 |2 |3 |
|873008 |0 |0 |3 |3 |
|625109 |1 |1 |0 |2 |
+---------------+---+---+---+-----------+
You can simply use foldLeft on the columns except Main_CustomerID and Total_Count, i.e. on P1, P2 and P3:
val df_columns = (df.columns.toSet - "Main_CustomerID" - "Total_Count").toList
df_columns.foldLeft(df){(tempdf, colName) => tempdf.withColumn(colName, df.col(colName) / df.col("Total_Count"))}.show(false)
which should give you
+---------------+------------------+---+------------------+-----------+
|Main_CustomerID|P1 |P2 |P3 |Total_Count|
+---------------+------------------+---+------------------+-----------+
|725153 |0.3333333333333333|0.0|0.6666666666666666|3 |
|873008 |0.0 |0.0|1.0 |3 |
|625109 |0.5 |0.5|0.0 |2 |
+---------------+------------------+---+------------------+-----------+
I hope the answer is helpful.
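If you would rather avoid chaining withColumn calls on a large dataframe, here is a sketch of an equivalent single select over the same columns:
val freqCols = df.columns.map { c =>
  if (c == "Main_CustomerID" || c == "Total_Count") col(c)
  else (col(c) / col("Total_Count")).as(c)
}
df.select(freqCols: _*).show(false)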
I'm searching for a Scala analogue of pandas' .transform() in Python.
Namely, I need to create a new feature: the group mean of the corresponding class.
val df = Seq(
("a", 1),
("a", 3),
("b", 3),
("b", 7)
).toDF("class", "val")
+-----+---+
|class|val|
+-----+---+
| a| 1|
| a| 3|
| b| 3|
| b| 7|
+-----+---+
val grouped_df = df.groupBy('class)
Here's the Python (pandas) implementation:
df["class_mean"] = grouped_df["val"].transform(
    lambda x: x.mean())
So, the desired result:
+-----+---+----------+
|class|val|class_mean|
+-----+---+----------+
| a| 1| 2.0|
| a| 3| 2.0|
| b| 3| 5.0|
| b| 7| 5.0|
+-----+---+----------+
You can use
df.groupBy("class").agg(mean("val").as("class_mean"))
Alternatively, if you want to keep all the columns without a join, you can use a window function:
import org.apache.spark.sql.expressions.Window

val w = Window.partitionBy("class")
df.withColumn("class_mean", mean("val").over(w))
.show(false)
Output:
+-----+---+----------+
|class|val|class_mean|
+-----+---+----------+
|b |3 |5.0 |
|b |7 |5.0 |
|a |1 |2.0 |
|a |3 |2.0 |
+-----+---+----------+