Calculate the sum over a 24-hour time frame in a Spark DataFrame - Scala

I want to calculate the sum from one date to the next (a 24-hour window), filtering the rows based on the hour.
1, 2018-05-01 02:12:00,1
1, 2018-05-01 03:16:10,2
1, 2018-05-01 09:12:00,4
1, 2018-05-01 14:18:00,3
1, 2018-05-01 18:32:00,1
1, 2018-05-01 20:12:00,1
1, 2018-05-02 01:22:00,1
1, 2018-05-02 02:12:00,1
1, 2018-05-02 08:30:00,1
1, 2018-05-02 10:12:00,1
1, 2018-05-02 11:32:00,1
1, 2018-05-02 18:12:00,1
1, 2018-05-03 03:12:00,1
1, 2018-05-03 08:22:00,1
Here, for example, I have filtered the rows from 9 AM to 9 AM (next day).
Output
1, 2018-05-01,12
1, 2018-05-02,5
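For the Spark answers below, here is a minimal Scala sketch that reproduces this sample data (the column names id, date and cnt are an assumption; they match the Spark-Scala answer further down, while the PySpark answer calls the timestamp column dtime):
import org.apache.spark.sql.functions.to_timestamp

// Assumes spark.implicits._ is in scope (e.g. in spark-shell).
val df = Seq(
  (1, "2018-05-01 02:12:00", 1), (1, "2018-05-01 03:16:10", 2),
  (1, "2018-05-01 09:12:00", 4), (1, "2018-05-01 14:18:00", 3),
  (1, "2018-05-01 18:32:00", 1), (1, "2018-05-01 20:12:00", 1),
  (1, "2018-05-02 01:22:00", 1), (1, "2018-05-02 02:12:00", 1),
  (1, "2018-05-02 08:30:00", 1), (1, "2018-05-02 10:12:00", 1),
  (1, "2018-05-02 11:32:00", 1), (1, "2018-05-02 18:12:00", 1),
  (1, "2018-05-03 03:12:00", 1), (1, "2018-05-03 08:22:00", 1)
).toDF("id", "date", "cnt")
  .withColumn("date", to_timestamp($"date"))   // parse the string column as a timestamp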

First define df for reproducibility:
import pandas as pd
import io
data=\
"""
1, 2018-05-01 02:12:00,1
1, 2018-05-01 03:16:10,2
1, 2018-05-01 09:12:00,4
1, 2018-05-01 14:18:00,3
1, 2018-05-01 18:32:00,1
1, 2018-05-01 20:12:00,1
1, 2018-05-02 01:22:00,1
1, 2018-05-02 02:12:00,1
1, 2018-05-02 08:30:00,1
1, 2018-05-02 10:12:00,1
1, 2018-05-02 11:32:00,1
1, 2018-05-02 18:12:00,1
1, 2018-05-03 03:12:00,1
1, 2018-05-03 08:22:00,1
"""
df = pd.read_csv(io.StringIO(data), sep = ',', names = ['id','t', 'n'], parse_dates =['t'])
Then use pd.Grouper with the frequency set to 24h and the base parameter set to 9, which makes each period begin at 9 a.m. (note that in newer pandas versions base is deprecated in favour of the offset parameter, e.g. offset='9h'):
df.groupby(pd.Grouper(key='t', freq='24h', base=9)).n.sum()
result:
t
2018-04-30 09:00:00 3
2018-05-01 09:00:00 12
2018-05-02 09:00:00 5
Freq: 24H, Name: n, dtype: int64

Just shift your timestamp column back by 9 hours and then group by the date of the adjusted column:
from pyspark.sql.functions import expr, sum as fsum
df
# DataFrame[id: int, dtime: timestamp, cnt: int]
df.groupby("id", expr("date(dtime - interval 9 hours) as ddate")) \
.agg(fsum("cnt").alias("cnt")) \
.show()
+---+----------+---+
| id| ddate|cnt|
+---+----------+---+
| 1|2018-05-01| 12|
| 1|2018-05-02| 5|
| 1|2018-04-30| 3|
+---+----------+---+
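Since the question is tagged Scala, here is the same shift-and-group idea as a Scala sketch, assuming the DataFrame[id, dtime, cnt] schema shown just above:
import org.apache.spark.sql.functions.{col, expr, sum}

// Shift each timestamp back 9 hours, take its date, and sum cnt per (id, shifted date).
df.groupBy(col("id"), expr("date(dtime - interval 9 hours)").as("ddate"))
  .agg(sum("cnt").alias("cnt"))
  .show()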

Use the date_format(), date_add() and to_date() Spark built-in functions, then groupBy and aggregate.
Example:
Spark-Scala:
df.show()
//+---+-------------------+---+
//| id| date|cnt|
//+---+-------------------+---+
//| 1|2018-05-01 02:12:00| 1|
//| 1|2018-05-01 03:16:10| 2|
//| 1|2018-05-01 09:12:00| 4|
//| 1|2018-05-01 14:18:00| 3|
//| 1|2018-05-01 18:32:00| 1|
//| 1|2018-05-01 20:12:00| 1|
//| 1|2018-05-02 01:22:00| 1|
//| 1|2018-05-02 02:12:00| 1|
//| 1|2018-05-02 08:30:00| 1|
//| 1|2018-05-02 10:12:00| 1|
//| 1|2018-05-02 11:32:00| 1|
//| 1|2018-05-02 18:12:00| 1|
//| 1|2018-05-03 03:12:00| 1|
//| 1|2018-05-03 08:22:00| 1|
//+---+-------------------+---+
df.withColumn("hour",when(date_format(col("date"),"HH").cast("int") >= 9,to_date(col("date"))).otherwise(date_add(to_date(col("date")),-1))).
groupBy("id","hour").
agg(sum("cnt").cast("int").alias("sum")).
show()
//+---+----------+---+
//| id| hour|sum|
//+---+----------+---+
//| 1|2018-05-01| 12|
//| 1|2018-05-02| 5|
//| 1|2018-04-30| 3|
//+---+----------+---+
Pyspark:
from pyspark.sql.functions import *
from pyspark.sql.types import *
df.withColumn("hour",when(date_format(col("date"),"HH").cast("int") >= 9,to_date(col("date"))).otherwise(date_add(to_date(col("date")),-1))).\
groupBy("id","hour").\
agg(sum("cnt").cast("int").alias("sum")).\
show()
#+---+----------+---+
#| id| hour|sum|
#+---+----------+---+
#| 1|2018-05-01| 12|
#| 1|2018-05-02| 5|
#| 1|2018-04-30| 3|
#+---+----------+---+
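A slightly more direct way to express the 9 AM cutoff in the Scala version is the built-in hour() function instead of date_format(...).cast("int"); a sketch of the same logic:
import org.apache.spark.sql.functions._

// Rows at or after 09:00 keep their own date, earlier rows fall into the previous day.
df.withColumn("day",
    when(hour(col("date")) >= 9, to_date(col("date")))
      .otherwise(date_add(to_date(col("date")), -1)))
  .groupBy("id", "day")
  .agg(sum("cnt").cast("int").alias("sum"))
  .show()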

Related

How to do cumulative sum based on conditions in spark scala

I have the data below, and final_column is the exact output I am trying to get. I want a cumulative sum of flag that resets: whenever flag is 0, the value should go back to 0, as shown below.
cola date flag final_column
a 2021-10-01 0 0
a 2021-10-02 1 1
a 2021-10-03 1 2
a 2021-10-04 0 0
a 2021-10-05 0 0
a 2021-10-06 0 0
a 2021-10-07 1 1
a 2021-10-08 1 2
a 2021-10-09 1 3
a 2021-10-10 0 0
b 2021-10-01 0 0
b 2021-10-02 1 1
b 2021-10-03 1 2
b 2021-10-04 0 0
b 2021-10-05 0 0
b 2021-10-06 1 1
b 2021-10-07 1 2
b 2021-10-08 1 3
b 2021-10-09 1 4
b 2021-10-10 0 0
I have tried:
import org.apache.spark.sql.functions._
df.withColumn("final_column",expr("sum(flag) over(partition by cola order date asc)"))
I have also tried adding a condition like case when flag = 0 then 0 else 1 end inside the sum function, but it is not working.
You can define a group column using a conditional sum on flag; then row_number over a Window partitioned by cola and group gives the result you want:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val result = df.withColumn(
  "group",
  sum(when(col("flag") === 0, 1).otherwise(0)).over(Window.partitionBy("cola").orderBy("date"))
).withColumn(
  "final_column",
  row_number().over(Window.partitionBy("cola", "group").orderBy("date")) - 1
).drop("group")
result.show
//+----+-----+----+------------+
//|cola| date|flag|final_column|
//+----+-----+----+------------+
//| b|44201| 0| 0|
//| b|44202| 1| 1|
//| b|44203| 1| 2|
//| b|44204| 0| 0|
//| b|44205| 0| 0|
//| b|44206| 1| 1|
//| b|44207| 1| 2|
//| b|44208| 1| 3|
//| b|44209| 1| 4|
//| b|44210| 0| 0|
//| a|44201| 0| 0|
//| a|44202| 1| 1|
//| a|44203| 1| 2|
//| a|44204| 0| 0|
//| a|44205| 0| 0|
//| a|44206| 0| 0|
//| a|44207| 1| 1|
//| a|44208| 1| 2|
//| a|44209| 1| 3|
//| a|44210| 0| 0|
//+----+-----+----+------------+
row_number() - 1 in this case is just equivalent to sum(col("flag")) as flag values are always 0 or 1. So the above final_column can also be written as:
.withColumn(
  "final_column",
  sum(col("flag")).over(Window.partitionBy("cola", "group").orderBy("date"))
)
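Putting the pieces together, a sketch of that variant in full (the same group helper column as above, with a cumulative sum of flag inside each group):
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, sum, when}

// group increments every time flag is 0, so each reset starts a new partition.
val byCola  = Window.partitionBy("cola").orderBy("date")
val byGroup = Window.partitionBy("cola", "group").orderBy("date")

val result = df
  .withColumn("group", sum(when(col("flag") === 0, 1).otherwise(0)).over(byCola))
  .withColumn("final_column", sum(col("flag")).over(byGroup))
  .drop("group")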

Spark Scala create a new column which contains addition of previous balance amount for each cid

Initial DF:
cid transAmt transDate
1 10 2-Aug
1 20 3-Aug
1 30 3-Aug
2 40 2-Aug
2 50 3-Aug
3 60 4-Aug
Output DF:
cid transAmt transDate sumAmt
1 10 2-Aug **10**
1 20 3-Aug **30**
1 30 3-Aug **60**
2 40 2-Aug **40**
2 50 3-Aug **90**
3 60 4-Aug **60**
I need a new column sumAmt which holds the running total for each cid.
Use the window sum function to get the cumulative sum.
Example:
df.show()
//+---+------+----------+
//|cid|Amount|transnDate|
//+---+------+----------+
//| 1| 10| 2-Aug|
//| 1| 20| 3-Aug|
//| 2| 40| 2-Aug|
//| 2| 50| 3-Aug|
//| 3| 60| 4-Aug|
//+---+------+----------+
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.expressions._
val w= Window.partitionBy("cid").orderBy("Amount","transnDate")
df.withColumn("sumAmt",sum(col("Amount")).over(w)).show()
//+---+------+----------+------+
//|cid|Amount|transnDate|sumAmt|
//+---+------+----------+------+
//| 1| 10| 2-Aug| 10|
//| 1| 20| 3-Aug| 30|
//| 3| 60| 4-Aug| 60|
//| 2| 40| 2-Aug| 40|
//| 2| 50| 3-Aug| 90|
//+---+------+----------+------+
Just use a simple window specifying the frame with rowsBetween:
Window.unboundedPreceding means no lower bound
Window.currentRow means the current row (pretty obvious)
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
val cidCategory = Window.partitionBy("cid")
.orderBy("transDate")
.rowsBetween(Window.unboundedPreceding, Window.currentRow)
val result = df.withColumn("sumAmt", sum($"transAmt").over(cidCategory))
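A side note on frames: when a window has an orderBy but no explicit frame (as in the first answer above), Spark defaults to a RANGE frame, so rows that tie on the ordering columns all receive the same cumulative value; the rowsBetween frame in the second answer gives a strict row-by-row running total instead. A sketch comparing the two on the question's columns (cid, transAmt, transDate):
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, sum}

// Default RANGE frame: the two 3-Aug rows for cid 1 would both show 60.
val rangeW = Window.partitionBy("cid").orderBy("transDate")

// Explicit ROWS frame: a strict row-by-row running total, even across tied dates.
val rowsW = Window.partitionBy("cid").orderBy("transDate")
  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

df.withColumn("sumAmt_range", sum(col("transAmt")).over(rangeW))
  .withColumn("sumAmt_rows", sum(col("transAmt")).over(rowsW))
  .show()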

How to apply conditional counts (with reset) to grouped data in PySpark?

I have PySpark code that effectively groups up rows numerically, and increments when a certain condition is met. I'm having trouble figuring out how to transform this code, efficiently, into one that can be applied to groups.
Take this sample dataframe df
df = sqlContext.createDataFrame(
[
(33, [], '2017-01-01'),
(33, ['apple', 'orange'], '2017-01-02'),
(33, [], '2017-01-03'),
(33, ['banana'], '2017-01-04')
],
('ID', 'X', 'date')
)
This code achieves what I want for this sample df, which is to order by date and to create groups ('grp') that increment when the size column goes back to 0.
from pyspark.sql import Window
from pyspark.sql.functions import col, size, sum

df \
    .withColumn('size', size(col('X'))) \
    .withColumn(
        "grp",
        sum((col('size') == 0).cast("int")).over(Window.orderBy('date'))
    ).show()
This is partly based on Pyspark - Cumulative sum with reset condition
Now what I am trying to do is apply the same approach to a dataframe that has multiple IDs - achieving a result that looks like
df2 = sqlContext.createDataFrame(
[
(33, [], '2017-01-01', 0, 1),
(33, ['apple', 'orange'], '2017-01-02', 2, 1),
(33, [], '2017-01-03', 0, 2),
(33, ['banana'], '2017-01-04', 1, 2),
(55, ['coffee'], '2017-01-01', 1, 1),
(55, [], '2017-01-03', 0, 2)
],
('ID', 'X', 'date', 'size', 'group')
)
edit for clarity
1) For the first date of each ID - the group should be 1 - regardless of what shows up in any other column.
2) However, for each subsequent date, I need to check the size column. If the size column is 0, then I increment the group number. If it is any non-zero, positive integer, then I continue the previous group number.
I've seen a few ways to handle this in pandas, but I'm having difficulty understanding how they apply in PySpark and how grouped data differs between pandas and Spark (e.g. do I need to use something called UDAFs?)
Create a column zero_or_first by checking whether the size is zero or the row is the first row of its ID. Then take a cumulative sum over the same window.
df2 = sqlContext.createDataFrame(
[
(33, [], '2017-01-01', 0, 1),
(33, ['apple', 'orange'], '2017-01-02', 2, 1),
(33, [], '2017-01-03', 0, 2),
(33, ['banana'], '2017-01-04', 1, 2),
(55, ['coffee'], '2017-01-01', 1, 1),
(55, [], '2017-01-03', 0, 2),
(55, ['banana'], '2017-01-01', 1, 1)
],
('ID', 'X', 'date', 'size', 'group')
)
from pyspark.sql import Window
import pyspark.sql.functions as F

w = Window.partitionBy('ID').orderBy('date')
df2 = df2.withColumn('row', F.row_number().over(w))
df2 = df2.withColumn('zero_or_first', F.when((F.col('size')==0)|(F.col('row')==1), 1).otherwise(0))
df2 = df2.withColumn('grp', F.sum('zero_or_first').over(w))
df2.orderBy('ID').show()
Here's the output. You can see that column group == grp, where group is the expected result.
+---+---------------+----------+----+-----+---+-------------+---+
| ID| X| date|size|group|row|zero_or_first|grp|
+---+---------------+----------+----+-----+---+-------------+---+
| 33| []|2017-01-01| 0| 1| 1| 1| 1|
| 33| [banana]|2017-01-04| 1| 2| 4| 0| 2|
| 33|[apple, orange]|2017-01-02| 2| 1| 2| 0| 1|
| 33| []|2017-01-03| 0| 2| 3| 1| 2|
| 55| [coffee]|2017-01-01| 1| 1| 1| 1| 1|
| 55| [banana]|2017-01-01| 1| 1| 2| 0| 1|
| 55| []|2017-01-03| 0| 2| 3| 1| 2|
+---+---------------+----------+----+-----+---+-------------+---+
I added a window function, and created an index within each ID. Then I expanded the conditional statement to also reference that index. The following seems to produce my desired output dataframe - but I am interested in knowing if there is a more efficient way to do this.
from pyspark.sql import Window
from pyspark.sql.functions import col, rank, size, sum

window = Window.partitionBy('ID').orderBy('date')
df \
    .withColumn('size', size(col('X'))) \
    .withColumn('index', rank().over(window).alias('index')) \
    .withColumn(
        "grp",
        sum(((col('size') == 0) | (col('index') == 1)).cast("int")).over(window)
    ).show()
which yields
+---+---------------+----------+----+-----+---+
| ID| X| date|size|index|grp|
+---+---------------+----------+----+-----+---+
| 33| []|2017-01-01| 0| 1| 1|
| 33|[apple, orange]|2017-01-02| 2| 2| 1|
| 33| []|2017-01-03| 0| 3| 2|
| 33| [banana]|2017-01-04| 1| 4| 2|
| 55| [coffee]|2017-01-01| 1| 1| 1|
| 55| []|2017-01-03| 0| 2| 2|
+---+---------------+----------+----+-----+---+

How to calculate connections of the node in Spark 2

I have the following DataFrame df:
val df = Seq(
(1, 0, 1, 0, 0), (1, 4, 1, 0, 4), (2, 2, 1, 2, 2),
(4, 3, 1, 4, 4), (4, 5, 1, 4, 4)
).toDF("from", "to", "attr", "type_from", "type_to")
+----+---+----+---------+-------+
|from| to|attr|type_from|type_to|
+----+---+----+---------+-------+
|   1|  0|   1|        0|      0|
|   1|  4|   1|        0|      4|
|   2|  2|   1|        2|      2|
|   4|  3|   1|        4|      4|
|   4|  5|   1|        4|      4|
+----+---+----+---------+-------+
I want to count the number of ingoing and outgoing links for each node only when the type of from node is the same as the type of to node (i.e. the values of type_from and type_to).
The cases when to and from are equal should be excluded.
This is how I calculate the number of outgoing links based on this answer proposed by user8371915.
val outLinks = df
  .where($"type_from" === $"type_to" && $"from" =!= $"to")
  .groupBy($"from" as "nodeId", $"type_from" as "type")
  .agg(count("*") as "numOutLinks")
  .na.fill(0)

outLinks.show()
Of course, I can repeat the same calculation for the incoming links and then join the results. But is there any shorter solution?
val inLinks = df
  .where($"type_from" === $"type_to" && $"from" =!= $"to")
  .groupBy($"to" as "nodeId", $"type_to" as "type")
  .agg(count("*") as "numInLinks")
  .na.fill(0)

inLinks.show()
val df_result = outLinks.join(inLinks, Seq("nodeId", "type"), "rightouter")
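One possible shorter alternative (a sketch, not taken from the original thread): project both endpoints of each qualifying edge into a single (nodeId, type) pair tagged with its direction, union the two projections, and aggregate once:
import org.apache.spark.sql.functions.{lit, sum}

val filtered = df.where($"type_from" === $"type_to" && $"from" =!= $"to")

// Each edge contributes one "out" row for its source and one "in" row for its target.
val endpoints = filtered
  .select($"from" as "nodeId", $"type_from" as "type", lit(1) as "out", lit(0) as "in")
  .union(filtered
    .select($"to" as "nodeId", $"type_to" as "type", lit(0) as "out", lit(1) as "in"))

endpoints
  .groupBy("nodeId", "type")
  .agg(sum("out") as "numOutLinks", sum("in") as "numInLinks")
  .show()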

scala - Spark : how to get the resultSet with condition in a groupedData

Is there a way to group the DataFrame using its own schema?
This produces data in the format:
Country | Class | Name | age
US, 1,'aaa',21
US, 1,'bbb',20
BR, 2,'ccc',30
AU, 3,'ddd',20
....
I would like to do something like
Country | Class 1 Students | Class 2 Students
US , 2, 0
BR , 0, 1
....
Condition 1: group by country.
Condition 2: get only the class 1 and class 2 values.
This is the source code:
val df = Seq(("US", 1, "AAA",19),("US", 1, "BBB",20),("KR", 2, "CCC",29),
("AU", 3, "DDD",18)).toDF("country", "class", "name","age")
df.groupBy("country").agg(count($"name") as "Cnt")
You should use the pivot function.
val df = Seq(("US", 1, "AAA",19),("US", 1, "BBB",20),("KR", 2, "CCC",29),
("AU", 3, "DDD",18)).toDF("country", "class", "name","age")
df.groupBy("country").pivot("class").agg(count($"name") as "Cnt").show
+-------+---+---+---+
|country| 1| 2| 3|
+-------+---+---+---+
| AU| 0| 0| 1|
| US| 2| 0| 0|
| KR| 0| 1| 0|
+-------+---+---+---+
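If only the class 1 and class 2 columns are wanted (as in the desired output), the pivot values can be passed explicitly; this also saves Spark a pass over the data to discover the distinct class values:
// Restrict the pivot to classes 1 and 2 only.
df.groupBy("country").pivot("class", Seq(1, 2)).agg(count($"name") as "Cnt").show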