Pyspark get +1 every time if token in array - pyspark

I have the following df table in pyspark:
+---+---------------------+
|id |list_tokens          |
+---+---------------------+
|id1|['A','A','B']        |
|id2|['D','P','E','P','P']|
|id3|['B','C']            |
|id4|['A','C']            |
+---+---------------------+
I have compiled a list of unique tokens and selected the most popular ones.
For example, I have the list:
[A, B, C]
I want to build a table with the [A, B, C] tokens as rows and the user ids as columns, adding +1 each time a user's token list contains one of the most popular tokens, and 0 otherwise.
Example:
+-----------+-----------+---------+--------+-------+
| token| id1 | id2 |id3 |id4 |
+-----------+-----------+---------+--------+-------+
|A |2 |0 |0 |1 |
|B |1 |0 |1 |0 |
|C |0 |0 |1 |1 |
+-----------+-----------+---------+--------+-------+

You can use explode to split the array into rows and pivot to count each value.
from pyspark.sql import functions as f

target = ['A', 'B', 'C']

df.select(f.col('id'), f.explode('list_tokens').alias('token')) \
    .withColumn('filter', f.array([f.lit(t) for t in target])) \
    .filter('array_contains(filter, token)') \
    .groupBy('id').pivot('token').count().fillna(0) \
    .show()
+---+---+---+---+
| id| A| B| C|
+---+---+---+---+
|id3| 0| 1| 1|
|id1| 2| 1| 0|
|id4| 1| 0| 1|
+---+---+---+---+
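This gives user ids as rows and tokens as columns. If you need the transposed layout from the question (tokens as rows, user ids as columns), one option is to swap the groupBy and pivot columns; a minimal sketch of the same pipeline, using isin for the filter instead of array_contains:
# Same idea as above, but grouping by token and pivoting on id,
# so the result has one row per popular token and one column per user.
df.select(f.col('id'), f.explode('list_tokens').alias('token')) \
    .filter(f.col('token').isin(target)) \
    .groupBy('token').pivot('id').count().fillna(0) \
    .show()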

Related

How to perform one to many mapping on spark scala dataframe column using flatmaps

I am specifically looking for a flatMap solution to a problem of mocking data columns in a Spark Scala dataframe by using a data-duplication technique, i.e. a 1-to-many mapping inside flatMap.
My given data is something like this:
+---+----+-----+
|id |name|marks|
+---+----+-----+
|1  |ABCD|12   |
|2  |CDEF|12   |
|3  |FGHI|14   |
+---+----+-----+
and my expectation after doing a 1-to-3 mapping of the id column will be something like this:
+---+----+-----+
|id |name|marks|
+---+----+-----+
|1  |ABCD|12   |
|2  |CDEF|12   |
|3  |FGHI|14   |
|2  |null|null |
|3  |null|null |
|1  |null|null |
|2  |null|null |
|1  |null|null |
|3  |null|null |
+---+----+-----+
Please feel free to let me know if there is any clarification required on the requirement part
Thanks in advance!!!
I see that you are attempting to generate data with a requirement of re-using values in the ID column.
You can just select the ID column, generate random values for the other columns, and union the result back to your original dataset.
For example:
val data = Seq((1,"asd",15), (2,"asd",20), (3,"test",99)).toDF("id","testName","marks")
+---+--------+-----+
| id|testName|marks|
+---+--------+-----+
| 1| asd| 15|
| 2| asd| 20|
| 3| test| 99|
+---+--------+-----+
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val newRecords = data.select("id")
  .withColumn("testName", concat(lit("name_"), (rand() * 10).cast(IntegerType).cast(StringType)))
  .withColumn("marks", (rand() * 100).cast(IntegerType))
+---+--------+-----+
| id|testName|marks|
+---+--------+-----+
| 1| name_2| 35|
| 2| name_9| 20|
| 3| name_3| 7|
+---+--------+-----+
val result = data.union(newRecords)
+---+--------+-----+
| id|testName|marks|
+---+--------+-----+
| 1| asd| 15|
| 2| asd| 20|
| 3| test| 99|
| 1| name_2| 35|
| 2| name_9| 20|
| 3| name_3| 7|
+---+--------+-----+
You can run the randomisation portion of the code in a loop and union all of the generated dataframes, as sketched below.
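A minimal sketch of that loop, assuming you want the 1-to-3 expansion from the question (two extra randomised batches on top of the original); the helper value copies is introduced here for illustration:
// Build `copies` randomised batches from the id column and union them
// onto the original data.
val copies = 2
val result = (1 to copies)
  .map { _ =>
    data.select("id")
      .withColumn("testName", concat(lit("name_"), (rand() * 10).cast(IntegerType).cast(StringType)))
      .withColumn("marks", (rand() * 100).cast(IntegerType))
  }
  .foldLeft(data)(_ union _)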

Spark: explode multiple columns into one

Is it possible to explode multiple columns into one new column in spark? I have a dataframe which looks like this:
userId varA varB
1 [0,2,5] [1,2,9]
desired output:
userId bothVars
1 0
1 2
1 5
1 1
1 2
1 9
What I have tried so far:
val explodedDf = df.withColumn("bothVars", explode($"varA")).drop("varA")
.withColumn("bothVars", explode($"varB")).drop("varB")
which doesn't work. Any suggestions are much appreciated.
You could wrap the two arrays into one and flatten the nested array before exploding it, as shown below:
val df = Seq(
  (1, Seq(0, 2, 5), Seq(1, 2, 9)),
  (2, Seq(1, 3, 4), Seq(2, 3, 8))
).toDF("userId", "varA", "varB")

df.select($"userId", explode(flatten(array($"varA", $"varB"))).as("bothVars")).show
// +------+--------+
// |userId|bothVars|
// +------+--------+
// | 1| 0|
// | 1| 2|
// | 1| 5|
// | 1| 1|
// | 1| 2|
// | 1| 9|
// | 2| 1|
// | 2| 3|
// | 2| 4|
// | 2| 2|
// | 2| 3|
// | 2| 8|
// +------+--------+
Note that flatten is available on Spark 2.4+.
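On Spark 2.4+ you could also skip the nested array, since concat accepts array columns directly and, like flatten, keeps duplicates; a minimal sketch:
// concat two array columns and explode the result (Spark 2.4+).
df.select($"userId", explode(concat($"varA", $"varB")).as("bothVars")).show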
Use array_union and then the explode function.
scala> df.show(false)
+------+---------+---------+
|userId|varA |varB |
+------+---------+---------+
|1 |[0, 2, 5]|[1, 2, 9]|
|2 |[1, 3, 4]|[2, 3, 8]|
+------+---------+---------+
scala> df
.select($"userId",explode(array_union($"varA",$"varB")).as("bothVars"))
.show(false)
+------+--------+
|userId|bothVars|
+------+--------+
|1 |0 |
|1 |2 |
|1 |5 |
|1 |1 |
|1 |9 |
|2 |1 |
|2 |3 |
|2 |4 |
|2 |2 |
|2 |8 |
+------+--------+
array_union is available in Spark 2.4+. Note that array_union removes duplicate values, which is why the repeated 2 in the first row does not appear here; if duplicates must be kept, use the flatten (or concat) approach above.

Iterate Over a Dataframe as each time column is passing to do transformation

I have a dataframe with 100 columns and column names like col1, col2, col3.... I want to apply certain transformations to the column values when a condition matches. I can store the column names in an array of strings, pass each element of the array to withColumn, and transform the values of that column with a when condition.
But since a DataFrame is immutable, each updated version needs to be stored in a new variable, and that new dataframe has to be passed to withColumn for the next iteration.
Is there any way to create an array of dataframes, so that each new dataframe can be stored as an element of the array and iterated over by index?
Or is there any other way to handle this?
var arr_df : Array[DataFrame] = new Array[DataFrame](60)
--> This throws error "not found type DataFrame"
val df(0) = df1.union(df2)
for(i <- 1 to 99){
  val df(i) = df(i-1).withColumn(col(i), when(col(i) > 0, col(i) + 1).otherwise(col(i)))
}
Here col is an array of strings that stores the names of the columns of the original dataframe.
As an example:
scala> val original_df = Seq((1,2,3,4),(2,3,4,5),(3,4,5,6),(4,5,6,7),(5,6,7,8),(6,7,8,9)).toDF("col1","col2","col3","col4")
original_df: org.apache.spark.sql.DataFrame = [col1: int, col2: int ... 2 more fields]
scala> original_df.show()
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
| 1| 2| 3| 4|
| 2| 3| 4| 5|
| 3| 4| 5| 6|
| 4| 5| 6| 7|
| 5| 6| 7| 8|
| 6| 7| 8| 9|
+----+----+----+----+
I want to iterate over 3 columns: col1, col2, col3. If the value of a column is greater than 3, then it should be updated by +1.
Check below code.
scala> df.show(false)
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|1 |2 |3 |4 |
|2 |3 |4 |5 |
|3 |4 |5 |6 |
|4 |5 |6 |7 |
|5 |6 |7 |8 |
|6 |7 |8 |9 |
+----+----+----+----+
scala> val requiredColumns = df.columns.zipWithIndex.filter(_._2 < 3).map(_._1).toSet
requiredColumns: scala.collection.immutable.Set[String] = Set(col1, col2, col3)
scala> val allColumns = df.columns
allColumns: Array[String] = Array(col1, col2, col3, col4)
scala> val columnExpr = allColumns.filterNot(requiredColumns(_)).map(col(_)) ++ requiredColumns.map(c => when(col(c) > 3, col(c) + 1).otherwise(col(c)).as(c))
scala> df.select(columnExpr:_*).show(false)
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|1 |2 |3 |4 |
|2 |3 |5 |5 |
|3 |5 |6 |6 |
|5 |6 |7 |7 |
|6 |7 |8 |8 |
|7 |8 |9 |9 |
+----+----+----+----+
If I understand you right, you are trying to do a dataframe-wise operation. You don't need to iterate for this. I can show you how it can be done in pyspark; it can probably be carried over to scala.
from pyspark.sql import functions as F
tst= sqlContext.createDataFrame([(1,7,0),(1,8,4),(1,0,10),(5,1,90),(7,6,0),(0,3,11)],schema=['col1','col2','col3'])
expr = [F.when(F.col(coln)>3,F.col(coln)+1).otherwise(F.col(coln)).alias(coln) for coln in tst.columns if 'col3' not in coln]
tst1= tst.select(*expr)
results:
tst1.show()
+----+----+
|col1|col2|
+----+----+
| 1| 8|
| 1| 9|
| 1| 0|
| 6| 1|
| 8| 7|
| 0| 3|
+----+----+
This should give you the desired result
You can iterate over all columns and apply the condition in a single line as below:
original_df.select(original_df.columns.map(c => (when(col(c) > lit(3), col(c)+1).otherwise(col(c))).alias(c)):_*).show()
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
| 1| 2| 3| 5|
| 2| 3| 5| 6|
| 3| 5| 6| 7|
| 5| 6| 7| 8|
| 6| 7| 8| 9|
| 7| 8| 9| 10|
+----+----+----+----+
You can use foldLeft whenever you want to make changes to multiple columns, as below:
val original_df = Seq(
(1,2,3,4),
(2,3,4,5),
(3,4,5,6),
(4,5,6,7),
(5,6,7,8),
(6,7,8,9)
).toDF("col1","col2","col3","col4")
// Filter the columns that you want to update
val columns = original_df.columns
columns.foldLeft(original_df) { (acc, colName) =>
  acc.withColumn(colName, when(col(colName) > 3, col(colName) + 1).otherwise(col(colName)))
}.show(false)
Output:
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|1 |2 |3 |5 |
|2 |3 |5 |6 |
|3 |5 |6 |7 |
|5 |6 |7 |8 |
|6 |7 |8 |9 |
|7 |8 |9 |10 |
+----+----+----+----+
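If, as in the question, you only want to update col1, col2 and col3 and leave col4 untouched, you can restrict the column list before folding; a minimal sketch:
// Only fold over the columns to update; col4 is left as-is.
val columnsToUpdate = original_df.columns.filter(Set("col1", "col2", "col3"))
columnsToUpdate.foldLeft(original_df) { (acc, colName) =>
  acc.withColumn(colName, when(col(colName) > 3, col(colName) + 1).otherwise(col(colName)))
}.show(false)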

Process multiple dataframes in parallel Scala

I am a newbie in Scala-Spark. I have a dataframe like the one below that I need to split into different chunks of data based on a group ID and process independently in parallel.
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
| 1| 100| 1| A|
| 2| 20B| 0| B|
| 3| 30A| 1| B|
| 4| 40A| 1| B|
| 5| 50A| 1| A|
| 6| 10A| 0| B|
| 7| 200| 1| A|
| 8| 30B| 1| B|
| 9| 400| 0| A|
| 10| 50C| 0| A|
+----+-------+-----+-------+
Step 1: I need to split it to get two different dataframes like the ones below. I can use a filter for this, but I am not sure whether (due to the large number of dataframes this will produce) I should save them into ADLS as parquet or keep them in memory.
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
| 1| 100| 1| A|
| 5| 50A| 1| A|
| 7| 200| 1| A|
| 9| 400| 0| A|
| 10| 50C| 0| A|
+----+-------+-----+-------+
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
| 2| 20B| 0| B|
| 3| 30A| 1| B|
| 4| 40A| 1| B|
| 6| 10A| 0| B|
| 8| 30B| 1| B|
+----+-------+-----+-------+
Step 2: Process each dataframe independently, in parallel, and get independently processed dataframes.
To give some context:
The number of groupIDs will be high, therefore they cannot be hardcoded.
The processing of each dataframe would ideally happen in parallel.
I am asking for a brief idea of how to proceed: I have seen .par.foreach, but it is not clear to me how to apply it to a dynamic number of dataframes, how to store them independently, nor whether it is the most efficient way.
Check below code.
scala> df.show(false)
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
|1 |100 |1 |A |
|2 |20B |0 |B |
|3 |30A |1 |B |
|4 |40A |1 |B |
|5 |50A |1 |A |
|6 |10A |0 |B |
|7 |200 |1 |A |
|8 |30B |1 |B |
|9 |400 |0 |A |
|10 |50C |0 |A |
+----+-------+-----+-------+
Get distinct groupid values from dataframe.
scala> val groupIds = df.select($"groupID").distinct.as[String].collect // Get distinct group ids.
groupIds: Array[String] = Array(B, A)
Use .par for parallel processing. You need to add your logic inside map.
scala> groupIds.par.map(groupid => df.filter($"groupID" === lit(groupid))).foreach(_.show(false)) // add your save (or other) logic inside the map function, not foreach; show is used in foreach here only to display the content of each dataframe
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
|2 |20B |0 |B |
|3 |30A |1 |B |
|4 |40A |1 |B |
|6 |10A |0 |B |
|8 |30B |1 |B |
+----+-------+-----+-------+
+----+-------+-----+-------+
|user|feature|value|groupID|
+----+-------+-----+-------+
|1 |100 |1 |A |
|5 |50A |1 |A |
|7 |200 |1 |A |
|9 |400 |0 |A |
|10 |50C |0 |A |
+----+-------+-----+-------+
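As a concrete example of putting save logic inside the parallel loop (the output path is an assumption for illustration):
// Write each group's rows out as parquet under a path derived from the
// group id; the base path /tmp/output is a placeholder.
groupIds.par.foreach { groupid =>
  df.filter($"groupID" === lit(groupid))
    .write
    .mode("overwrite")
    .parquet(s"/tmp/output/groupID=$groupid")
}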

apply an aggregate result to all ungrouped rows of a dataframe in spark

assume there is a dataframe as follows:
machine_id | value
1| 5
1| 3
2| 6
2| 10
2| 14
I want to produce a final dataframe like this
machine_id | value | diff
1| 5| 1
1| 3| -1
2| 6| -4
2| 10| 0
2| 14| 4
the values in the "diff" column are computed as value - groupBy($"machine_id").avg($"value").
note that the avg for machine_id==1 is (5+3)/2 = 4 and for machine_id ==2 is (6+10+14)/3 = 10
What is the best way to produce such a final dataframe in Apache Spark?
You can use a Window function to get the desired output.
Given the dataframe as
+----------+-----+
|machine_id|value|
+----------+-----+
|1 |5 |
|1 |3 |
|2 |6 |
|2 |10 |
|2 |14 |
+----------+-----+
You can use following code
df.withColumn("diff", avg("value").over(Window.partitionBy("machine_id")))
  .withColumn("diff", 'value - 'diff)
to get the final result as
+----------+-----+----+
|machine_id|value|diff|
+----------+-----+----+
|1 |5 |1.0 |
|1 |3 |-1.0|
|2 |6 |-4.0|
|2 |10 |0.0 |
|2 |14 |4.0 |
+----------+-----+----+
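For completeness, a self-contained sketch of the same approach with the required imports (column names as in the question):
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq((1, 5), (1, 3), (2, 6), (2, 10), (2, 14)).toDF("machine_id", "value")

// diff = value - average(value) over the machine_id partition
df.withColumn("diff", $"value" - avg("value").over(Window.partitionBy("machine_id")))
  .show(false)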