Pyspark - Depth-First Search on Dataframe - pyspark

I have the following pyspark application that generates sequences of child/parent processes from a csv of child/parent process id's. Treating the problem as a tree, I'm using an iterative depth-first search starting at leaf nodes (processes that have no children) and iterating through my file to create these closures, where process 1 is the parent of process 2, which is the parent of process 3, and so on.
In other words, given a csv as shown below, is it possible to implement a depth-first search (iteratively or recursively) using pyspark dataframes & appropriate pyspark-isms to generate said closures without having to use the .collect() function (which is incredibly expensive)?
from pyspark.sql.functions import monotonically_increasing_id
import copy
from pyspark.sql import SQLContext
from pyspark import SparkContext

class Test():
    def __init__(self):
        self.process_list = []

def main():
    test = Test()
    sc = SparkContext.getOrCreate()
    sqlContext = SQLContext(sc)
    df = sc.textFile("<path to csv>")
    df = df.map(lambda line: line.split(","))
    header = df.first()
    data = df.filter(lambda row: row != header)
    data = data.toDF(header)
    data.createOrReplaceTempView("flat")
    data = sqlContext.sql("select doc_process_pid, doc_parent_pid from flat "
                          "where doc_parent_pid is not null AND "
                          "doc_process_pid is not null")
    data = data.select(monotonically_increasing_id().alias("rowId"), "*")
    data.createOrReplaceTempView("data")
    leaf_df = sqlContext.sql("select doc_process_pid, doc_parent_pid from data "
                             "where doc_parent_pid != -1 AND "
                             "doc_process_pid == -1")
    leaf_df = leaf_df.rdd.collect()
    data = sqlContext.sql("select doc_process_pid, doc_parent_pid from data "
                          "where doc_process_pid != -1")
    data.createOrReplaceTempView("data")
    for row in leaf_df:
        path = []
        rowID = row[0]
        data = data.filter(data['rowId'] != rowID)
        parentID = row[4]
        path.append(parentID)
        while (True):
            next_df = sqlContext.sql(
                "select doc_process_pid, doc_parent_pid from data where "
                "doc_process_pid == " + str(parentID))
            next_df_rdd = next_df.collect()
            print("parent: ", next_df_rdd[0][1])
            parentID = next_df_rdd[0][1]
            if (int(parentID) != -1):
                path.append(next_df_rdd[0][1])
            else:
                test.process_list.append(copy.deepcopy(path))
                break
    print("final: ", test.process_list)

main()
Here is my csv:
doc_process_pid doc_parent_pid
1 -1
2 1
6 -1
7 6
8 7
9 8
21 -1
22 21
24 -1
25 24
26 25
27 26
28 27
29 28
99 6
107 99
108 -1
109 108
222 109
1000 7
1001 1000
-1 9
-1 22
-1 29
-1 107
-1 1001
-1 222
-1 2
It represents child/parent process relationships. If we consider this as a tree, then leaf nodes are defined by doc_process_pid == -1 and root nodes are processes where doc_parent_pid == -1.
The code above generates two data frames:
Leaf Nodes:
+---------------+--------------+
|doc_process_pid|doc_parent_pid|
+---------------+--------------+
| -1| 9|
| -1| 22|
| -1| 29|
| -1| 107|
| -1| 1001|
| -1| 222|
| -1| 2|
+---------------+--------------+
The remaining child/parent processes sans leaf nodes:
+---------------+--------------+
|doc_process_pid|doc_parent_pid|
+---------------+--------------+
| 1| -1|
| 2| 1|
| 6| -1|
| 7| 6|
| 8| 7|
| 9| 8|
| 21| -1|
| 22| 21|
| 24| -1|
| 25| 24|
| 26| 25|
| 27| 26|
| 28| 27|
| 29| 28|
| 99| 6|
| 107| 99|
| 108| -1|
| 109| 108|
| 222| 109|
| 1000| 7|
+---------------+--------------+
The output would be:
[[1, 2],
[6, 99, 107],
[6, 99, 7, 1000, 1001],
[6, 7, 1000, 8, 9],
[21, 22],
[24, 25, 26, 27, 28, 29],
[108, 109, 222]]
Thoughts? While this is a bit specific, I want to emphasize the generalized question of performing depth-first searches to generate closures of sequences represented in this DataFrame format.
Thanks in advance for the help!

I don't think PySpark is the best tool for this.
A solution would be to iterate through the tree's node levels, joining the dataframe with itself every time.
Let's create our dataframe; there is no need to split it into leaf and other nodes, we'll just keep the original dataframe:
data = spark.createDataFrame(
    sc.parallelize(
        [[1, -1], [2, 1], [6, -1], [7, 6], [8, 7], [9, 8], [21, -1], [22, 21],
         [24, -1], [25, 24], [26, 25], [27, 26], [28, 27], [29, 28], [99, 6],
         [107, 99], [108, -1], [109, 108], [222, 109], [1000, 7], [1001, 1000],
         [-1, 9], [-1, 22], [-1, 29], [-1, 107], [-1, 1001], [-1, 222], [-1, 2]]
    ),
    ["doc_process_pid", "doc_parent_pid"]
)
We'll now create two dataframes from this tree, one will be our building base and the other one will be our construction bricks:
df1 = data.filter("doc_parent_pid = -1").select(data.doc_process_pid.alias("node"))
df2 = data.select(data.doc_process_pid.alias("son"), data.doc_parent_pid.alias("node")).filter("node != -1")
Let's define a function for step i of the construction:
def add_node(df, i):
    # drop finished branches, join with the child/parent table, keep the matched node as "node<i>" and promote its child ("son") to be the new "node"
    return df.filter("node != -1").join(df2, "node", "inner") \
             .withColumnRenamed("node", "node" + str(i)).withColumnRenamed("son", "node")
Let's define our initial state:
from pyspark.sql.types import *
df = df1
i = 0
df_end = spark.createDataFrame(
    sc.emptyRDD(),
    StructType([StructField("branch", ArrayType(LongType()), True)])
)
When a branch is fully constructed, we take it out of df and put it in df_end:
import pyspark.sql.functions as psf
while df.count() > 0:
    i = i + 1
    df = add_node(df, i)
    df_end = df.filter("node = -1").drop('node').select(
        psf.array(*[c for c in reversed(df.columns) if c != "node"]).alias("branch")
    ).unionAll(df_end)
    df = df.filter("node != -1")
At the end, df is empty and we have
df_end.show(truncate=False)
+------------------------+
|branch |
+------------------------+
|[24, 25, 26, 27, 28, 29]|
|[6, 7, 8, 9] |
|[6, 7, 1000, 1001] |
|[108, 109, 222] |
|[6, 99, 107] |
|[21, 22] |
|[1, 2] |
+------------------------+
The worst case for this algorithm is as many joins as the maximum branch length.
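If the branches are then needed back in the driver as a plain Python list of lists (as in the expected output of the question), a single collect at the very end is enough; a minimal sketch:
# one small action at the end, instead of collecting inside the loop
branches = [row.branch for row in df_end.collect()]
print(branches)
# e.g. [[24, 25, 26, 27, 28, 29], [6, 7, 8, 9], [6, 7, 1000, 1001], ...]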

Related

Window function with PySpark

I have a PySpark Dataframe and my goal is to create a Flag column whose value depends on the value of the Amount column.
Basically, for each Group, I want to know if in any of the first three months there is an amount greater than 0; if that is the case, the value of the Flag column will be 1 for the whole group, otherwise it will be 0.
I will include an example to clarify a bit better.
Initial PySpark Dataframe:
Group | Month | Amount
A     | 1     | 0
A     | 2     | 0
A     | 3     | 35
A     | 4     | 0
A     | 5     | 0
B     | 1     | 0
B     | 2     | 0
C     | 1     | 0
C     | 2     | 0
C     | 3     | 0
C     | 4     | 13
D     | 1     | 0
D     | 2     | 24
D     | 3     | 0
Final PySpark Dataframe:
Group | Month | Amount | Flag
A     | 1     | 0      | 1
A     | 2     | 0      | 1
A     | 3     | 35     | 1
A     | 4     | 0      | 1
A     | 5     | 0      | 1
B     | 1     | 0      | 0
B     | 2     | 0      | 0
C     | 1     | 0      | 0
C     | 2     | 0      | 0
C     | 3     | 0      | 0
C     | 4     | 13     | 0
D     | 1     | 0      | 1
D     | 2     | 24     | 1
D     | 3     | 0      | 1
Basically, what I want is, for each group, to sum the amounts of the first 3 months. If that sum is greater than 0, the flag is 1 for all the elements of the group, otherwise it is 0.
You can create the flag column by applying a Window function. Create a pseudo-column that is 1 when the criterion is met, then sum that pseudo-column over the window; if the sum is greater than 0, at least one row met the criterion, so set the flag to 1.
from pyspark.sql import functions as F
from pyspark.sql import Window as W
data = [("A", 1, 0, ),
("A", 2, 0, ),
("A", 3, 35, ),
("A", 4, 0, ),
("A", 5, 0, ),
("B", 1, 0, ),
("B", 2, 0, ),
("C", 1, 0, ),
("C", 2, 0, ),
("C", 3, 0, ),
("C", 4, 13, ),
("D", 1, 0, ),
("D", 2, 24, ),
("D", 3, 0, ), ]
df = spark.createDataFrame(data, ("Group", "Month", "Amount", ))
ws = W.partitionBy("Group").orderBy("Month").rowsBetween(W.unboundedPreceding, W.unboundedFollowing)
criteria = F.when((F.col("Month") < 4) & (F.col("Amount") > 0), F.lit(1)).otherwise(F.lit(0))
(df.withColumn("flag", F.when(F.sum(criteria).over(ws) > 0, F.lit(1)).otherwise(F.lit(0)))
).show()
"""
+-----+-----+------+----+
|Group|Month|Amount|flag|
+-----+-----+------+----+
| A| 1| 0| 1|
| A| 2| 0| 1|
| A| 3| 35| 1|
| A| 4| 0| 1|
| A| 5| 0| 1|
| B| 1| 0| 0|
| B| 2| 0| 0|
| C| 1| 0| 0|
| C| 2| 0| 0|
| C| 3| 0| 0|
| C| 4| 13| 0|
| D| 1| 0| 1|
| D| 2| 24| 1|
| D| 3| 0| 1|
+-----+-----+------+----+
"""
You can use a Window function with count and when.
from pyspark.sql import functions as F
from pyspark.sql import Window
w = Window.partitionBy('Group')
df = (df.withColumn('Flag', F.count(
          F.when((F.col('Month') < 4) & (F.col('Amount') > 0), True)).over(w))
        .withColumn('Flag', F.when(F.col('Flag') > 0, 1).otherwise(0)))
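A slightly more compact variant of the same idea (just a sketch, not part of either answer above) takes the max of the boolean condition cast to an integer over the same window, which already yields 0 or 1:
w = Window.partitionBy('Group')
df = df.withColumn(
    'Flag',
    F.max(((F.col('Month') < 4) & (F.col('Amount') > 0)).cast('int')).over(w))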

Find top N game for every ID based on total time using spark and scala

Find the top N games for every id, based on total time watched. Here is my input dataframe:
InputDF:
id | Game | Time
1 A 10
2 B 100
1 A 100
2 C 105
1 N 103
2 B 102
1 N 90
2 C 110
And this is the output that I am expecting:
OutputDF:
id | Game | Time|
1 N 193
1 A 110
2 C 215
2 B 202
Here is what I have tried, but it is not working as expected:
val windowDF = Window.partitionBy($"id").orderBy($"Time".desc)
InputDF.withColumn("rank", row_number().over(windowDF))
.filter("rank<=10")
Your top-N ranking applies only to individual time rather than total time per game. A groupBy/sum to compute total time followed by a ranking on the total time will do:
val df = Seq(
(1, "A", 10),
(2, "B", 100),
(1, "A", 100),
(2, "C", 105),
(1, "N", 103),
(2, "B", 102),
(1, "N", 90),
(2, "C", 110)
).toDF("id", "game", "time")
import org.apache.spark.sql.expressions.Window
val win = Window.partitionBy($"id").orderBy($"total_time".desc)
df.
groupBy("id", "game").agg(sum("time").as("total_time")).
withColumn("rank", row_number.over(win)).
where($"rank" <= 10).
show
// +---+----+----------+----+
// | id|game|total_time|rank|
// +---+----+----------+----+
// | 1| N| 193| 1|
// | 1| A| 110| 2|
// | 2| C| 215| 1|
// | 2| B| 202| 2|
// +---+----+----------+----+
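For readers following the PySpark thread above, a rough PySpark equivalent of this approach (a sketch, assuming a SparkSession named spark) could look like:
from pyspark.sql import functions as F, Window as W
df = spark.createDataFrame(
    [(1, "A", 10), (2, "B", 100), (1, "A", 100), (2, "C", 105),
     (1, "N", 103), (2, "B", 102), (1, "N", 90), (2, "C", 110)],
    ["id", "game", "time"])
win = W.partitionBy("id").orderBy(F.col("total_time").desc())
(df.groupBy("id", "game")
   .agg(F.sum("time").alias("total_time"))    # total time per (id, game)
   .withColumn("rank", F.row_number().over(win))
   .where(F.col("rank") <= 10)
   .show())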

Conditional Concatenation in Spark

I have a dataframe with the below structure :
+----------+------+------+----------------+--------+------+
| date|market|metric|aggregator_Value|type |rank |
+----------+------+------+----------------+--------+------+
|2018-08-05| m1| 16 | m1|median | 1 |
|2018-08-03| m1| 5 | m1|median | 2 |
|2018-08-01| m1| 10 | m1|mean | 3 |
|2018-08-05| m2| 35 | m2|mean | 1 |
|2018-08-03| m2| 25 | m2|mean | 2 |
|2018-08-01| m2| 5 | m2|mean | 3 |
+----------+------+------+----------------+--------+------+
In this dataframe the rank column is calculated over the ordering of date within groupings of the market column, like this:
val w_rank = Window.partitionBy("market").orderBy(desc("date"))
val outputDF2=outputDF1.withColumn("rank",rank().over(w_rank))
I want to extract the concatenated value of the metric column in the output dataframe for rank = 1 rows, with the condition that if the type in the rank = 1 row is "median", then all the metric values for that market are concatenated, whereas if the type in the rank = 1 row is "mean", then only the previous 2 metric values are concatenated. Like this:
+----------+------+------+----------------+--------+---------+
| date|market|metric|aggregator_Value|type |result |
+----------+------+------+----------------+--------+---------+
|2018-08-05| m1| 16 | m1|median |10|5|16 |
|2018-08-05| m2| 35 | m1|mean |25|35 |
+----------+------+------+----------------+--------+---------+
How can I achieve this ?
You could nullify column metric according to the specific condition and apply collect_list followed by concat_ws to get the wanted result, as shown below:
val df = Seq(
("2018-08-05", "m1", 16, "m1", "median", 1),
("2018-08-03", "m1", 5, "m1", "median", 2),
("2018-08-01", "m1", 10, "m1", "mean", 3),
("2018-08-05", "m2", 35, "m2", "mean", 1),
("2018-08-03", "m2", 25, "m2", "mean", 2),
("2018-08-01", "m2", 5, "m2", "mean", 3)
).toDF("date", "market", "metric", "aggregator_value", "type", "rank")
val win_desc = Window.partitionBy("market").orderBy(desc("date"))
val win_asc = Window.partitionBy("market").orderBy(asc("date"))
df.
withColumn("rank1_type", first($"type").over(win_desc.rowsBetween(Window.unboundedPreceding, 0))).
withColumn("cond_metric", when($"rank1_type" === "mean" && $"rank" > 2, null).otherwise($"metric")).
withColumn("result", concat_ws("|", collect_list("cond_metric").over(win_asc))).
where($"rank" === 1).
show
// +----------+------+------+----------------+------+----+----------+-----------+-------+
// | date|market|metric|aggregator_value| type|rank|rank1_type|cond_metric| result|
// +----------+------+------+----------------+------+----+----------+-----------+-------+
// |2018-08-05| m1| 16| m1|median| 1| median| 16|10|5|16|
// |2018-08-05| m2| 35| m2| mean| 1| mean| 35| 25|35|
// +----------+------+------+----------------+------+----+----------+-----------+-------+
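A rough PySpark translation of the same window logic (a sketch, assuming the dataframe df already carries the rank column computed as in the question):
from pyspark.sql import functions as F, Window as W
win_desc = W.partitionBy("market").orderBy(F.desc("date"))
win_asc = W.partitionBy("market").orderBy(F.asc("date"))
result = (df
    # propagate the type of the rank-1 (latest date) row to every row of the market
    .withColumn("rank1_type",
                F.first("type").over(win_desc.rowsBetween(W.unboundedPreceding, 0)))
    # nullify metric values that should not be concatenated in the "mean" case
    .withColumn("cond_metric",
                F.when((F.col("rank1_type") == "mean") & (F.col("rank") > 2), F.lit(None))
                 .otherwise(F.col("metric")))
    # running collect_list in date order, then keep only the rank-1 row
    .withColumn("result",
                F.concat_ws("|", F.collect_list("cond_metric").over(win_asc)))
    .where(F.col("rank") == 1))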

Pyspark - Count non zero columns in a spark data frame for each row

I have a dataframe, and I need to count the number of non-zero columns per row in PySpark.
ID COL1 COL2 COL3
1 0 1 -1
2 0 0 0
3 -17 20 15
4 23 1 0
Expected Output:
ID COL1 COL2 COL3 Count
1 0 1 -1 2
2 0 0 0 0
3 -17 20 15 3
4 23 1 0 2
There are various approaches to achieve this; below is one of the simpler approaches:
df = sqlContext.createDataFrame([
[1, 0, 1, -1],
[2, 0, 0, 0],
[3, -17, 20, 15],
[4, 23, 1, 0]],
["ID", "COL1", "COL2", "COL3"]
)
#Check columns list removing ID columns
df.columns[1:]
['COL1', 'COL2', 'COL3']
#import functions
from pyspark.sql import functions as F
#Add a new "count" column: the sum of (1 if column != 0 else 0) over the non-ID columns
df.withColumn(
"count",
sum([
F.when(F.col(cl) != 0, 1).otherwise(0) for cl in df.columns[1:]
])
).show()
+---+----+----+----+-----+
| ID|COL1|COL2|COL3|count|
+---+----+----+----+-----+
| 1| 0| 1| -1| 2|
| 2| 0| 0| 0| 0|
| 3| -17| 20| 15| 3|
| 4| 23| 1| 0| 2|
+---+----+----+----+-----+
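On Spark 2.4+, the same per-row count can also be expressed with the SQL higher-order function filter (a sketch, not part of the original answer; column names assumed as above):
# build an array of the value columns and count the elements that are non-zero
df.withColumn(
    "count",
    F.expr("size(filter(array(COL1, COL2, COL3), x -> x != 0))")
).show()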

Scala Spark - How to reduce a dataframe with many column pairs into a single pair of columns?

I have a dataframe with many (score, count) column pairs.
This situation is not a pivot, but is similar to an unpivot.
Example:
| house_score | house_count | mobile_score | mobile_count | sport_score | sport_count | ....<other column pairs>.....|
|          20 |           2 |           48 |            6 |           6 |          78 |
|          40 |          78 |           47 |           74 |          69 |           6 |
I want a new dataframe with only two columns, score and count. The new dataframe reduces all the column pairs into a single pair of columns.
_________________
| score | count |
| 20 | 2 |
| 40 | 78 |
| 48 | 6 |
| 47 | 74 |
| 6 | 78 |
| 69 | 6 |
|_______________|
What's the best solution (elegant code/performance)?
You can achieve this using a foldLeft over the distinct column prefixes (the part before the _). This is reasonably efficient since all intensive operations are distributed, and the code is fairly clean and concise.
// df from example
val df = sc.parallelize(List((20,2,48,6,6,78), (40,78,47,74,69,6) )).toDF("house_score", "house_count", "mobile_score", "mobile_count", "sport_score", "sport_count")
// grab column names (part before the _)
val cols = df.columns.map(col => col.split("_")(0)).distinct
// fold left over all columns
val result = cols.tail.foldLeft(
// init with cols.head column
df.select(col(s"${cols.head}_score").as("score"), col(s"${cols.head}_count").as("count"))
){case (acc,c) => {
// union current column c
acc.unionAll(df.select(col(s"${c}_score").as("score"), col(s"${c}_count").as("count")))
}}
result.show
Using unionAlls as suggested in another answer will require you to scan the data multiple times, projecting the df down to only 2 columns on each scan. From a performance perspective, scanning the data multiple times should be avoided if you can do the work in one pass, especially if you have large datasets that are not cacheable or you need to do many scans.
You can do it in 1 pass, by generating all the tuples (score, count) and then flat mapping them. I let you decide how elegant it is:
scala> :paste
// Entering paste mode (ctrl-D to finish)
val df = List((20,2,48,6,6,78), (40,78,47,74,69,6))
.toDF("house_score", "house_count", "mobile_score", "mobile_count", "sport_score", "sport_count")
df.show
val result = df
.flatMap(r => Range(0, 5, 2).map(i => (r.getInt(i), r.getInt(i + 1))))
.toDF("score", "count")
result.show
// Exiting paste mode, now interpreting.
+-----------+-----------+------------+------------+-----------+-----------+
|house_score|house_count|mobile_score|mobile_count|sport_score|sport_count|
+-----------+-----------+------------+------------+-----------+-----------+
| 20| 2| 48| 6| 6| 78|
| 40| 78| 47| 74| 69| 6|
+-----------+-----------+------------+------------+-----------+-----------+
+-----+-----+
|score|count|
+-----+-----+
| 20| 2|
| 48| 6|
| 6| 78|
| 40| 78|
| 47| 74|
| 69| 6|
+-----+-----+
df: org.apache.spark.sql.DataFrame = [house_score: int, house_count: int ... 4 more fields]
result: org.apache.spark.sql.DataFrame = [score: int, count: int]
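For completeness, the same single-pass unpivot can also be written in PySpark with the SQL stack generator (a sketch, assuming the same six columns and a SparkSession named spark):
df = spark.createDataFrame(
    [(20, 2, 48, 6, 6, 78), (40, 78, 47, 74, 69, 6)],
    ["house_score", "house_count", "mobile_score", "mobile_count",
     "sport_score", "sport_count"])
# stack(3, ...) splits the six values of each row into 3 rows of (score, count)
result = df.selectExpr(
    "stack(3, house_score, house_count, mobile_score, mobile_count, "
    "sport_score, sport_count) as (score, `count`)")
result.show()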