Find top N games for every ID based on total time using Spark and Scala

I want to find the top N games watched for every id, based on total time. Here is my input dataframe:
InputDF:
id | Game | Time
1  | A    | 10
2  | B    | 100
1  | A    | 100
2  | C    | 105
1  | N    | 103
2  | B    | 102
1  | N    | 90
2  | C    | 110
And this is the output that I am expecting:
OutputDF:
id | Game | Time
1  | N    | 193
1  | A    | 110
2  | C    | 215
2  | B    | 202
Here is what I have tried, but it is not working as expected:
val windowDF = Window.partitionBy($"id").orderBy($"Time".desc)
InputDF.withColumn("rank", row_number().over(windowDF))
.filter("rank<=10")

Your top-N ranking applies only to individual Time values rather than the total time per game. A groupBy/sum to compute the total time per game, followed by ranking on that total, will do:
import org.apache.spark.sql.expressions.Window

val df = Seq(
  (1, "A", 10),
  (2, "B", 100),
  (1, "A", 100),
  (2, "C", 105),
  (1, "N", 103),
  (2, "B", 102),
  (1, "N", 90),
  (2, "C", 110)
).toDF("id", "game", "time")

val win = Window.partitionBy($"id").orderBy($"total_time".desc)

df.groupBy("id", "game").agg(sum("time").as("total_time"))
  .withColumn("rank", row_number().over(win))
  .where($"rank" <= 10)
  .show
// +---+----+----------+----+
// | id|game|total_time|rank|
// +---+----+----------+----+
// | 1| N| 193| 1|
// | 1| A| 110| 2|
// | 2| C| 215| 1|
// | 2| B| 202| 2|
// +---+----+----------+----+
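If you want the result to line up exactly with the expected OutputDF above (no rank column, and Time instead of total_time), one possible follow-up, just a sketch reusing df and win from the snippet above, is to drop the helper column and rename the aggregate:
df.groupBy("id", "game").agg(sum("time").as("total_time"))
  .withColumn("rank", row_number().over(win))
  .where($"rank" <= 10)                      // keep the top N totals per id (N = 10 here)
  .drop("rank")                              // the helper column is no longer needed
  .withColumnRenamed("total_time", "Time")
  .show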


Creating a new column using info from another df

I'm trying to create a new column based on information from another data table.
df1
Loc Time Wage
1 192 1
3 192 2
1 193 3
5 193 3
7 193 5
2 194 7
df2
Loc City
1 NYC
2 Miami
3 LA
4 Chicago
5 Houston
6 SF
7 DC
desired output:
Loc Time Wage City
1 192 1 NYC
3 192 2 LA
1 193 3 NYC
5 193 3 Houston
7 193 5 DC
2 194 7 Miami
The actual dataframes vary quite a lot in the number of rows, but it's something along those lines. I think this might be achievable through .map, but I haven't found much documentation for that online. join doesn't really seem to fit this situation.
join is exactly what you need. Try running this in the spark-shell:
import spark.implicits._
val col1 = Seq("loc", "time", "wage")
val data1 = Seq((1, 192, 1), (3, 192, 2), (1, 193, 3), (5, 193, 3), (7, 193, 5), (2, 194, 7))
val col2 = Seq("loc", "city")
val data2 = Seq((1, "NYC"), (2, "Miami"), (3, "LA"), (4, "Chicago"), (5, "Houston"), (6, "SF"), (7, "DC"))
val df1 = spark.sparkContext.parallelize(data1).toDF(col1: _*)
val df2 = spark.sparkContext.parallelize(data2).toDF(col2: _*)
val outputDf = df1.join(df2, Seq("loc")) // join on the column "loc"
outputDf.show()
This will output
+---+----+----+-------+
|loc|time|wage| city|
+---+----+----+-------+
| 1| 192| 1| NYC|
| 1| 193| 3| NYC|
| 2| 194| 7| Miami|
|  3| 192|   2|     LA|
| 5| 193| 3|Houston|
| 7| 193| 5| DC|
+---+----+----+-------+
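Note that this is an inner join, so any row of df1 whose loc has no match in df2 is dropped. If you need to keep such rows (an assumption on my part, not something the question asks for), a left join does that:
// keep every row of df1; "city" is null when there is no matching "loc" in df2
val leftJoined = df1.join(df2, Seq("loc"), "left")
leftJoined.show()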

How to sum(case when ... then ...) in a SparkSQL DataFrame just like in SQL?

I'm new to SparkSQL, and I want to calculate the percentage of every status in my data.
Here is my data:
A B
11 1
11 3
12 1
13 3
12 2
13 1
11 1
12 2
So, I can do it in SQL like this:
select (C.oneTotal / C.total) as onePercentage,
       (C.twoTotal / C.total) as twotPercentage,
       (C.threeTotal / C.total) as threPercentage
from (select count(*) as total,
             A,
             sum(case when B = '1' then 1 else 0 end) as oneTotal,
             sum(case when B = '2' then 1 else 0 end) as twoTotal,
             sum(case when B = '3' then 1 else 0 end) as threeTotal
      from test
      group by A) as C;
But with the SparkSQL DataFrame API, I first calculate the total count for every status, like below:
// wrong code
val cc = transDF.select("transData.*").groupBy("A")
.agg(count("transData.*").alias("total"),
sum(when(col("B") === "1", 1)).otherwise(0)).alias("oneTotal")
sum(when(col("B") === "2", 1).otherwise(0)).alias("twoTotal")
The problem is that the sum(when)'s result is zero. Am I using it incorrectly?
How can I implement this in SparkSQL just like my SQL above, and then calculate the proportion of every status?
Thank you for your help. In the end, I solved it with sum(when); below is my current code.
val cc = transDF.select("transData.*").groupBy("A")
  .agg(count("transData.*").alias("total"),
       sum(when(col("B") === "1", 1).otherwise(0)).alias("oneTotal"),
       sum(when(col("B") === "2", 1).otherwise(0)).alias("twoTotal"))
  .select(col("total"),
          col("A"),
          (col("oneTotal") / col("total")).alias("oneRate"),
          (col("twoTotal") / col("total")).alias("twoRate"))
Thanks again.
You can use sum(when(...)) or also count(when(...)), the second option being shorter to write:
val df = Seq(
  (11, 1),
  (11, 3),
  (12, 1),
  (13, 3),
  (12, 2),
  (13, 1),
  (11, 1),
  (12, 2)
).toDF("A", "B")

df
  .groupBy($"A")
  .agg(
    count("*").as("total"),
    count(when($"B" === "1", $"A")).as("oneTotal"),
    count(when($"B" === "2", $"A")).as("twoTotal"),
    count(when($"B" === "3", $"A")).as("threeTotal")
  )
  .select(
    $"A",
    ($"oneTotal" / $"total").as("onePercentage"),
    ($"twoTotal" / $"total").as("twoPercentage"),
    ($"threeTotal" / $"total").as("threePercentage")
  )
  .show()
gives
+---+------------------+------------------+------------------+
| A| onePercentage| twoPercentage| threePercentage|
+---+------------------+------------------+------------------+
| 12|0.3333333333333333|0.6666666666666666| 0.0|
| 13| 0.5| 0.0| 0.5|
| 11|0.6666666666666666| 0.0|0.3333333333333333|
+---+------------------+------------------+------------------+
Alternatively, you could produce a "long" table with window functions:
import org.apache.spark.sql.expressions.Window

df
  .groupBy($"A", $"B").count()
  .withColumn("total", sum($"count").over(Window.partitionBy($"A")))
  .select(
    $"A",
    $"B",
    ($"count" / $"total").as("percentage")
  )
  .orderBy($"A", $"B")
  .show()
+---+---+------------------+
| A| B| percentage|
+---+---+------------------+
| 11| 1|0.6666666666666666|
| 11| 3|0.3333333333333333|
| 12| 1|0.3333333333333333|
| 12| 2|0.6666666666666666|
| 13| 1| 0.5|
| 13| 3| 0.5|
+---+---+------------------+
As far as I understood, you want to implement the same logic as the SQL shown in the question.
One way is the example below:
package examples

import org.apache.log4j.Level
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object AggTest extends App {

  val logger = org.apache.log4j.Logger.getLogger("org")
  logger.setLevel(Level.WARN)

  val spark = SparkSession.builder.appName(getClass.getName)
    .master("local[*]").getOrCreate

  import spark.implicits._

  val df = Seq(
    (11, 1),
    (11, 3),
    (12, 1),
    (13, 3),
    (12, 2),
    (13, 1),
    (11, 1),
    (12, 2)
  ).toDF("A", "B")

  df.show(false)
  df.createOrReplaceTempView("test")

  spark.sql(
    """
      |select (C.oneTotal / C.total) as onePercentage,
      |       (C.twoTotal / C.total) as twotPercentage,
      |       (C.threeTotal / C.total) as threPercentage
      |from (select count(*) as total,
      |             A,
      |             sum(case when B = '1' then 1 else 0 end) as oneTotal,
      |             sum(case when B = '2' then 1 else 0 end) as twoTotal,
      |             sum(case when B = '3' then 1 else 0 end) as threeTotal
      |      from test
      |      group by A) as C
    """.stripMargin).show
}
Result:
+---+---+
|A |B |
+---+---+
|11 |1 |
|11 |3 |
|12 |1 |
|13 |3 |
|12 |2 |
|13 |1 |
|11 |1 |
|12 |2 |
+---+---+
+------------------+------------------+------------------+
| onePercentage| twotPercentage| threPercentage|
+------------------+------------------+------------------+
|0.3333333333333333|0.6666666666666666| 0.0|
| 0.5| 0.0| 0.5|
|0.6666666666666666| 0.0|0.3333333333333333|
+------------------+------------------+------------------+
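Note that the outer select only returns the percentages, so you can't tell which row belongs to which A. A small variation (my addition, not part of the original answer) is to project C.A in the outer query as well:
spark.sql(
  """
    |select C.A,
    |       (C.oneTotal / C.total) as onePercentage,
    |       (C.twoTotal / C.total) as twoPercentage,
    |       (C.threeTotal / C.total) as threePercentage
    |from (select A,
    |             count(*) as total,
    |             sum(case when B = '1' then 1 else 0 end) as oneTotal,
    |             sum(case when B = '2' then 1 else 0 end) as twoTotal,
    |             sum(case when B = '3' then 1 else 0 end) as threeTotal
    |      from test
    |      group by A) as C
  """.stripMargin).show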

Aggregating similar records in Spark 1.6.2

I have a data set where, after some transformation using Spark SQL (1.6.2) in Scala, I got the following data (part of the data).
home away count
a b 90
b a 70
c d 50
e f 45
f e 30
Now I want to get the final result by aggregating similar home and away pairs, i.e. a and b appear twice. Similar home and away pairs may not always come in consecutive rows:
home away count
a b 160
c d 50
e f 75
Can someone help me out with this?
You can create a temporary column using array and sort_array, which you can then groupBy on to solve this. Here I assumed there can be at most two rows with the same pair of values in the home/away columns, and that which value is in home and which is in away doesn't matter:
val df = Seq(("a", "b", 90),
("b", "a", 70),
("c", "d", 50),
("e", "f", 45),
("f", "e", 30)).toDF("home", "away", "count")
val df2 = df.withColumn("home_away", sort_array(array($"home", $"away")))
.groupBy("home_away")
.agg(sum("count").as("count"))
.select($"home_away"(0).as("home"), $"home_away"(1).as("away"), $"count")
.drop("home_away")
Will give:
+----+----+-----+
|home|away|count|
+----+----+-----+
| e| f| 75|
| c| d| 50|
| a| b| 160|
+----+----+-----+

Pyspark - Depth-First Search on Dataframe

I have the following pyspark application that generates sequences of child/parent processes from a csv of child/parent process ids. Considering the problem as a tree, I'm using an iterative depth-first search starting at leaf nodes (processes that have no children) and iterating through my file to create these closures, where process 1 is the parent of process 2, which is the parent of process 3, and so on.
In other words, given a csv as shown below, is it possible to implement a depth-first search (iteratively or recursively) using pyspark dataframes and appropriate pyspark-isms to generate said closures without having to use the .collect() function (which is incredibly expensive)?
from pyspark.sql.functions import monotonically_increasing_id
import copy
from pyspark.sql import SQLContext
from pyspark import SparkContext


class Test():
    def __init__(self):
        self.process_list = []


def main():
    test = Test()
    sc = SparkContext.getOrCreate()
    sqlContext = SQLContext(sc)

    df = sc.textFile("<path to csv>")
    df = df.map(lambda line: line.split(","))
    header = df.first()
    data = df.filter(lambda row: row != header)
    data = data.toDF(header)
    data.createOrReplaceTempView("flat")

    data = sqlContext.sql("""select doc_process_pid, doc_parent_pid from flat
                             where doc_parent_pid is not null AND
                                   doc_process_pid is not null""")
    data = data.select(monotonically_increasing_id().alias("rowId"), "*")
    data.createOrReplaceTempView("data")

    leaf_df = sqlContext.sql("""select doc_process_pid, doc_parent_pid from data
                                where doc_parent_pid != -1 AND
                                      doc_process_pid == -1""")
    leaf_df = leaf_df.rdd.collect()

    data = sqlContext.sql("""select doc_process_pid, doc_parent_pid from data
                             where doc_process_pid != -1""")
    data.createOrReplaceTempView("data")

    for row in leaf_df:
        path = []
        rowID = row[0]
        data = data.filter(data['rowId'] != rowID)
        parentID = row[4]
        path.append(parentID)
        while (True):
            next_df = sqlContext.sql(
                "select doc_process_pid, doc_parent_pid from data "
                "where doc_process_pid == " + str(parentID))
            next_df_rdd = next_df.collect()
            print("parent: ", next_df_rdd[0][1])
            parentID = next_df_rdd[0][1]
            if (int(parentID) != -1):
                path.append(next_df_rdd[0][1])
            else:
                test.process_list.append(copy.deepcopy(path))
                break

    print("final: ", test.process_list)


main()
Here is my csv:
doc_process_pid doc_parent_pid
1 -1
2 1
6 -1
7 6
8 7
9 8
21 -1
22 21
24 -1
25 24
26 25
27 26
28 27
29 28
99 6
107 99
108 -1
109 108
222 109
1000 7
1001 1000
-1 9
-1 22
-1 29
-1 107
-1 1001
-1 222
-1 2
It represents child/parent process relationships. If we consider this as a tree, then leaf nodes are the rows where doc_process_pid == -1 and root nodes are the processes where doc_parent_pid == -1.
The code above generates two data frames:
Leaf Nodes:
+---------------+--------------+
|doc_process_pid|doc_parent_pid|
+---------------+--------------+
| -1| 9|
| -1| 22|
| -1| 29|
| -1| 107|
| -1| 1001|
| -1| 222|
| -1| 2|
+---------------+--------------+
The remaining child/parent processes sans leaf nodes:
+---------------+--------------+
|doc_process_pid|doc_parent_pid|
+---------------+--------------+
| 1| -1|
| 2| 1|
| 6| -1|
| 7| 6|
| 8| 7|
| 9| 8|
| 21| -1|
| 22| 21|
| 24| -1|
| 25| 24|
| 26| 25|
| 27| 26|
| 28| 27|
| 29| 28|
| 99| 6|
| 107| 99|
| 108| -1|
| 109| 108|
| 222| 109|
| 1000| 7|
+---------------+--------------+
The output would be:
[[1, 2],
[6, 99, 107],
[6, 99, 7, 1000, 1001],
[6, 7, 1000, 8, 9],
[21, 22],
[24, 25, 26, 27, 28, 29],
[108, 109, 222]]
Thoughts? While this is a bit specific, I want to emphasize the generalized question of performing depth-first searches to generate closures of sequences represented in this DataFrame format.
Thanks in advance for the help!
I don't think pyspark is the best language to do this.
A solution would be to iterate through the tree levels, joining the dataframe with itself each time.
Let's create our dataframe; there is no need to split it into leaf and other nodes, we'll just keep the original dataframe:
data = spark.createDataFrame(
    sc.parallelize(
        [[1, -1], [2, 1], [6, -1], [7, 6], [8, 7], [9, 8], [21, -1], [22, 21],
         [24, -1], [25, 24], [26, 25], [27, 26], [28, 27], [29, 28], [99, 6],
         [107, 99], [108, -1], [109, 108], [222, 109], [1000, 7], [1001, 1000],
         [-1, 9], [-1, 22], [-1, 29], [-1, 107], [-1, 1001], [-1, 222], [-1, 2]]
    ),
    ["doc_process_pid", "doc_parent_pid"]
)
We'll now create two dataframes from this tree, one will be our building base and the other one will be our construction bricks:
df1 = data.filter("doc_parent_pid = -1").select(data.doc_process_pid.alias("node"))
df2 = data.select(data.doc_process_pid.alias("son"), data.doc_parent_pid.alias("node")).filter("node != -1")
Let's define a function for step i of the construction:
def add_node(df, i):
    return df.filter("node != -1").join(df2, "node", "inner") \
        .withColumnRenamed("node", "node" + str(i)) \
        .withColumnRenamed("son", "node")
Let's define our initial state:
from pyspark.sql.types import *

df = df1
i = 0
df_end = spark.createDataFrame(
    sc.emptyRDD(),
    StructType([StructField("branch", ArrayType(LongType()), True)])
)
When a branch is fully constructed, we take it out of df and put it in df_end:
import pyspark.sql.functions as psf

while df.count() > 0:
    i = i + 1
    df = add_node(df, i)
    df_end = df.filter("node = -1").drop('node') \
        .select(psf.array(*[c for c in reversed(df.columns) if c != "node"]).alias("branch")) \
        .unionAll(df_end)
    df = df.filter("node != -1")
At the end, df is empty and we have
df_end.show(truncate=False)
+------------------------+
|branch |
+------------------------+
|[24, 25, 26, 27, 28, 29]|
|[6, 7, 8, 9] |
|[6, 7, 1000, 1001] |
|[108, 109, 222] |
|[6, 99, 107] |
|[21, 22] |
|[1, 2] |
+------------------------+
In the worst case, this algorithm performs as many joins as the maximum branch length.

spark aggregateByKey with tuple

I'm new to the RDD API of Spark. Thanks to Spark migrate sql window function to RDD for better performance, I managed to generate the following table:
+-----------------+---+
| _1| _2|
+-----------------+---+
| [col3TooMany,C]| 0|
| [col1,A]| 0|
| [col2,B]| 0|
| [col3TooMany,C]| 1|
| [col1,A]| 1|
| [col2,B]| 1|
|[col3TooMany,jkl]| 0|
| [col1,d]| 0|
| [col2,a]| 0|
| [col3TooMany,C]| 0|
| [col1,d]| 0|
| [col2,g]| 0|
| [col3TooMany,t]| 1|
| [col1,A]| 1|
| [col2,d]| 1|
| [col3TooMany,C]| 1|
| [col1,d]| 1|
| [col2,c]| 1|
| [col3TooMany,C]| 1|
| [col1,c]| 1|
+-----------------+---+
with an initial input of
import org.apache.spark.sql.functions._
import org.apache.spark.util.StatCounter

val df = Seq(
  (0, "A", "B", "C", "D"),
  (1, "A", "B", "C", "D"),
  (0, "d", "a", "jkl", "d"),
  (0, "d", "g", "C", "D"),
  (1, "A", "d", "t", "k"),
  (1, "d", "c", "C", "D"),
  (1, "c", "B", "C", "D")
).toDF("TARGET", "col1", "col2", "col3TooMany", "col4")

val columnsToDrop = Seq("col3TooMany")
val columnsToCode = Seq("col1", "col2")
val target = "TARGET"

val exploded = explode(array(
  (columnsToDrop ++ columnsToCode).map(c =>
    struct(lit(c).alias("k"), col(c).alias("v"))): _*
)).alias("level")

val long = df.select(exploded, $"TARGET")
then
long.as[((String, String), Int)].rdd.aggregateByKey(StatCounter())(_ merge _, _ merge _).collect.head
res71: ((String, String), org.apache.spark.util.StatCounter) = ((col2,B),(count: 3, mean: 0,666667, stdev: 0,471405, max: 1,000000, min: 0,000000))
This aggregates statistics over all the unique values of each column.
How can I add to the count (which is 3 for B in col2) a second count (maybe as a tuple) representing the number of B in col2 where TARGET == 1? In this case, it should be 2.
You shouldn't need an additional aggregate here. With a binary target column, the mean is just the empirical probability of the target being equal to 1:
number of 1s = count * mean
number of 0s = count * (1 - mean)
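For completeness, a minimal sketch (reusing long and StatCounter from above; withTargetCounts is just an illustrative name) that derives both counts from the existing aggregate instead of running a second one:
val stats = long.as[((String, String), Int)].rdd
  .aggregateByKey(StatCounter())(_ merge _, _ merge _)

// count * mean is the number of rows with TARGET == 1; the rest have TARGET == 0
val withTargetCounts = stats.mapValues { s =>
  val ones  = math.round(s.count * s.mean)
  val zeros = s.count - ones
  (s, ones, zeros)
}

withTargetCounts.collect.foreach(println)
// for the ((col2,B)) key above this should give ones = 2 and zeros = 1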