I am trying to get the time difference "time_d" in seconds between consecutive values of "timestamplast" within each "name" group in PySpark.
+-------------------+----+
| timestamplast|name|
+-------------------+----+
|2019-08-01 00:00:00| 1|
|2019-08-01 00:01:00| 1|
|2019-08-01 00:01:15| 1|
|2019-08-01 03:00:00| 2|
|2019-08-01 04:00:00| 2|
|2019-08-01 00:15:00| 3|
+-------------------+----+
Output should look like:
+-------------------+----+--------+
| timestamplast|name| time_d |
+-------------------+----+--------+
|2019-08-01 00:00:00| 1| 0 |
|2019-08-01 00:01:00| 1| 60 |
|2019-08-01 00:01:15| 1| 15 |
|2019-08-01 03:00:00| 2| 0 |
|2019-08-01 04:00:00| 2| 3600 |
|2019-08-01 00:15:00| 3| 0 |
+-------------------+----+--------+
In Pandas this would be:
df['time_d'] = df.groupby("name")['timestamplast'].diff().fillna(pd.Timedelta(0)).dt.total_seconds()
How would this be done in Pyspark?
You can use a lag window function (partitioned by name) and then compute the difference using the timestamp in seconds (unix_timestamp).
from pyspark.sql import functions as F
from pyspark.sql.window import Window
w = Window.partitionBy("name").orderBy(F.col("timestamplast"))

df.withColumn("time_d", F.lag(F.unix_timestamp("timestamplast")).over(w))\
  .withColumn("time_d", F.when(F.col("time_d").isNotNull(),
                               F.unix_timestamp("timestamplast") - F.col("time_d"))
                         .otherwise(F.lit(0)))\
  .orderBy("name", "timestamplast").show()
#+-------------------+----+------+
#| timestamplast|name|time_d|
#+-------------------+----+------+
#|2019-08-01 00:00:00| 1| 0|
#|2019-08-01 00:01:00| 1| 60|
#|2019-08-01 00:01:15| 1| 15|
#|2019-08-01 03:00:00| 2| 0|
#|2019-08-01 04:00:00| 2| 3600|
#|2019-08-01 00:15:00| 3| 0|
#+-------------------+----+------+
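As a side note, the same result can be written a bit more compactly by letting coalesce supply the 0 for the first row of each partition; this is just a sketch that reuses the df and w defined above:
# Sketch: coalesce turns the null produced by lag on the first row of each partition into 0.
df.withColumn(
    "time_d",
    F.coalesce(
        F.unix_timestamp("timestamplast") - F.lag(F.unix_timestamp("timestamplast")).over(w),
        F.lit(0)
    )
).orderBy("name", "timestamplast").show()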
I have a pyspark dataframe that looks like this
import pandas as pd
spark.createDataFrame(
    pd.DataFrame({
        'ch_id': [1, 1, 1, 1, 1, 2, 2, 2, 2],
        'e_id':  [0, 0, 1, 2, 2, 0, 0, 1, 1],
        'seg':   ['h', 's', 's', 'a', 's', 'h', 's', 's', 'h'],
    })
).show()
+-----+----+---+
|ch_id|e_id|seg|
+-----+----+---+
| 1| 0| h|
| 1| 0| s|
| 1| 1| s|
| 1| 2| a|
| 1| 2| s|
| 2| 0| h|
| 2| 0| s|
| 2| 1| s|
| 2| 1| h|
+-----+----+---+
I would like, for every ch_id, to get:
the % of e_id values for which there is only one unique value of seg
The output would look like this:
+-----+-------+
|ch_id|%_major|
+-----+-------+
|    1|   66.6|
|    2|    0.0|
+-----+-------+
How could I achieve that in PySpark?
In a PySpark DataFrame, consider a column like [1,2,3,4,1,2,1,1,2,3,1,2,1,1,2].
I want to create a new column that increments whenever the value resets to 1.
The expected output is [1,1,1,1,2,2,3,4,4,4,5,5,6,7,7].
I am a bit new to PySpark; if anyone can help me, it would be great.
I have written the logic like below:
def sequence(row_num):
    results = [1, ]
    flag = 1
    for col in range(0, len(row_num) - 1):
        if row_num[col][0] >= row_num[col + 1][0]:
            flag += 1
        results.append(flag)
    return results
but I am not able to pass a column through a UDF. Please help me with this.
Your Dataframe:
df = spark.createDataFrame(
[
('1','a'),
('2','b'),
('3','c'),
('4','d'),
('1','e'),
('2','f'),
('1','g'),
('1','h'),
('2','i'),
('3','j'),
('1','k'),
('2','l'),
('1','m'),
('1','n'),
('2','o')
], ['group','label']
)
+-----+-----+
|group|label|
+-----+-----+
| 1| a|
| 2| b|
| 3| c|
| 4| d|
| 1| e|
| 2| f|
| 1| g|
| 1| h|
| 2| i|
| 3| j|
| 1| k|
| 2| l|
| 1| m|
| 1| n|
| 2| o|
+-----+-----+
You can create a flag and use a window function to calculate the cumulative sum. There is no need to use a UDF:
from pyspark.sql import Window as W
from pyspark.sql import functions as F
w = W.partitionBy().orderBy('label').rowsBetween(W.unboundedPreceding, 0)
df\
    .withColumn('Flag', F.when(F.col('group') == 1, 1).otherwise(0))\
    .withColumn('Output', F.sum('Flag').over(w))\
    .show()
+-----+-----+----+------+
|group|label|Flag|Output|
+-----+-----+----+------+
| 1| a| 1| 1|
| 2| b| 0| 1|
| 3| c| 0| 1|
| 4| d| 0| 1|
| 1| e| 1| 2|
| 2| f| 0| 2|
| 1| g| 1| 3|
| 1| h| 1| 4|
| 2| i| 0| 4|
| 3| j| 0| 4|
| 1| k| 1| 5|
| 2| l| 0| 5|
| 1| m| 1| 6|
| 1| n| 1| 7|
| 2| o| 0| 7|
+-----+-----+----+------+
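One caveat: the window above orders by label, which only gives the expected output because the labels happen to follow the original row order. Spark DataFrames have no inherent row order, so in general you would capture an explicit ordering column first. A minimal sketch, assuming the DataFrame is created in a way that preserves insertion order (as in the example above):
from pyspark.sql import Window as W
from pyspark.sql import functions as F

# Sketch: materialize the incoming order as a column before windowing.
df_ordered = df.withColumn('rid', F.monotonically_increasing_id())

w = W.partitionBy().orderBy('rid').rowsBetween(W.unboundedPreceding, 0)
df_ordered\
    .withColumn('Flag', F.when(F.col('group') == 1, 1).otherwise(0))\
    .withColumn('Output', F.sum('Flag').over(w))\
    .show()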
I will explain my problem using the initial DataFrame and the one I want to obtain:
val df_997 = Seq[(Int, Int, Int, Int)](
    (1,1,7,10), (1,10,4,300), (1,3,14,50), (1,20,24,70), (1,30,12,90),
    (2,10,4,900), (2,25,30,40), (2,15,21,60), (2,5,10,80)
  ).toDF("policyId","FECMVTO","aux","IND_DEF").
  orderBy(asc("policyId"), asc("FECMVTO"))
df_997.show
+--------+-------+---+-------+
|policyId|FECMVTO|aux|IND_DEF|
+--------+-------+---+-------+
| 1| 1| 7| 10|
| 1| 3| 14| 50|
| 1| 10| 4| 300|
| 1| 20| 24| 70|
| 1| 30| 12| 90|
| 2| 5| 10| 80|
| 2| 10| 4| 900|
| 2| 15| 21| 60|
| 2| 25| 30| 40|
+--------+-------+---+-------+
Imagine I have partitioned this DF by the column policyId and created the column row_num based on it to better see the Windows:
val win = Window.partitionBy("policyId").orderBy("FECMVTO")
val df_998 = df_997.withColumn("row_num",row_number().over(win))
df_998.show
+--------+-------+---+-------+-------+
|policyId|FECMVTO|aux|IND_DEF|row_num|
+--------+-------+---+-------+-------+
| 1| 1| 7| 10| 1|
| 1| 3| 14| 50| 2|
| 1| 10| 4| 300| 3|
| 1| 20| 24| 70| 4|
| 1| 30| 12| 90| 5|
| 2| 5| 10| 80| 1|
| 2| 10| 4| 900| 2|
| 2| 15| 21| 60| 3|
| 2| 25| 30| 40| 4|
+--------+-------+---+-------+-------+
Now, for each window, when the value of aux is 4, I want to set the FECMVTO column to the value of IND_DEF of that row, for that row and all the following rows until the end of the window.
The resulting DF would be:
+--------+-------+---+-------+-------+
|policyId|FECMVTO|aux|IND_DEF|row_num|
+--------+-------+---+-------+-------+
| 1| 1| 7| 10| 1|
| 1| 3| 14| 50| 2|
| 1| 300| 4| 300| 3|
| 1| 300| 24| 70| 4|
| 1| 300| 12| 90| 5|
| 2| 5| 10| 80| 1|
| 2| 900| 4| 900| 2|
| 2| 900| 21| 60| 3|
| 2| 900| 30| 40| 4|
+--------+-------+---+-------+-------+
Thanks for your suggestions, as I am very stuck here...
Here's one approach: first left-join the DataFrame with its aux == 4 filtered version, then apply window function first (with ignoreNulls) to fill the nulls forward with the wanted IND_DEF values per partition, and finally conditionally recreate column FECMVTO:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._
val df = Seq(
(1,1,7,10), (1,10,4,300), (1,3,14,50), (1,20,24,70), (1,30,12,90),
(2,10,4,900), (2,25,30,40), (2,15,21,60), (2,5,10,80)
).toDF("policyId","FECMVTO","aux","IND_DEF")
val win = Window.partitionBy("policyId").orderBy("FECMVTO").
rowsBetween(Window.unboundedPreceding, 0)
val df2 = df.
select($"policyId", $"aux", $"IND_DEF".as("IND_DEF2")).
where($"aux" === 4)
df.join(df2, Seq("policyId", "aux"), "left_outer").
withColumn("IND_DEF3", first($"IND_DEF2", ignoreNulls=true).over(win)).
withColumn("FECMVTO", coalesce($"IND_DEF3", $"FECMVTO")).
show
// +--------+---+-------+-------+--------+--------+
// |policyId|aux|FECMVTO|IND_DEF|IND_DEF2|IND_DEF3|
// +--------+---+-------+-------+--------+--------+
// | 1| 7| 1| 10| null| null|
// | 1| 14| 3| 50| null| null|
// | 1| 4| 300| 300| 300| 300|
// | 1| 24| 300| 70| null| 300|
// | 1| 12| 300| 90| null| 300|
// | 2| 10| 5| 80| null| null|
// | 2| 4| 900| 900| 900| 900|
// | 2| 21| 900| 60| null| 900|
// | 2| 30| 900| 40| null| 900|
// +--------+---+-------+-------+--------+--------+
Columns IND_DEF2, IND_DEF3 are kept only for illustration (and can certainly be dropped).
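For clarity: first($"IND_DEF2", ignoreNulls=true) evaluated over a frame that runs from the start of the partition to the current row returns null for every row preceding the aux === 4 match, and returns that match's IND_DEF from there on, which is exactly the fill-forward behaviour the question asks for.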
I believe the below can be a solution for your issue.
Considering input_df is your input DataFrame:
//Step#1 - Filter rows with aux = 4 from input_df
val only_FECMVTO_4_df1 = input_df.filter($"aux" === 4)
//Step#2 - Overwriting FECMVTO with the IND_DEF value for the above result
val only_FECMVTO_4_df2 = only_FECMVTO_4_df1.withColumn("FECMVTO", $"IND_DEF")
//Step#3 - Removing all the records from Step#1 from input_df
val input_df_without_FECMVTO_4 = input_df.except(only_FECMVTO_4_df1)
//Step#4 - Combining Step#2 output with the output of Step#3
val final_df = input_df_without_FECMVTO_4.union(only_FECMVTO_4_df2)
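Note that union resolves columns by position, not by name, which is why Step#2 overwrites FECMVTO in place rather than dropping and re-adding it: the column order of only_FECMVTO_4_df2 stays identical to input_df.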
I have a dataframe that looks like
key | value | time | status
x | 10 | 0 | running
x | 15 | 1 | running
x | 30 | 2 | running
x | 15 | 3 | running
x | 0 | 4 | stop
x | 40 | 5 | running
x | 10 | 6 | running
y | 10 | 0 | running
y | 15 | 1 | running
y | 30 | 2 | running
y | 15 | 3 | running
y | 0 | 4 | stop
y | 40 | 5 | running
y | 10 | 6 | running
...
I want to end up with a table that looks like
key | start | end | status | max value
x | 0 | 3 | running| 30
x | 4 | 4 | stop | 0
x | 5 | 6 | running| 40
y | 0 | 3 | running| 30
y | 4 | 4 | stop | 0
y | 5 | 6 | running| 40
...
In other words, I want to partition by key, sort by time, split the data into windows of consecutive rows with the same status, keep the first and last time of each window, and do a calculation over that window, e.g. the max of value.
Ideally using PySpark.
Here is one approach you can take.
First create a column to determine if the status has changed for a given key:
from pyspark.sql.functions import col, lag
from pyspark.sql import Window
w = Window.partitionBy("key").orderBy("time")
df = df.withColumn(
    "status_change",
    (col("status") != lag("status").over(w)).cast("int")
)
df.show()
#+---+-----+----+-------+-------------+
#|key|value|time| status|status_change|
#+---+-----+----+-------+-------------+
#| x| 10| 0|running| null|
#| x| 15| 1|running| 0|
#| x| 30| 2|running| 0|
#| x| 15| 3|running| 0|
#| x| 0| 4| stop| 1|
#| x| 40| 5|running| 1|
#| x| 10| 6|running| 0|
#| y| 10| 0|running| null|
#| y| 15| 1|running| 0|
#| y| 30| 2|running| 0|
#| y| 15| 3|running| 0|
#| y| 0| 4| stop| 1|
#| y| 40| 5|running| 1|
#| y| 10| 6|running| 0|
#+---+-----+----+-------+-------------+
Next fill the nulls with 0 and take the cumulative sum of the status_change column, per key:
from pyspark.sql.functions import sum as sum_ # avoid shadowing builtin
df = df.fillna(0).withColumn(
    "status_group",
    sum_("status_change").over(w)
)
df.show()
#+---+-----+----+-------+-------------+------------+
#|key|value|time| status|status_change|status_group|
#+---+-----+----+-------+-------------+------------+
#| x| 10| 0|running| 0| 0|
#| x| 15| 1|running| 0| 0|
#| x| 30| 2|running| 0| 0|
#| x| 15| 3|running| 0| 0|
#| x| 0| 4| stop| 1| 1|
#| x| 40| 5|running| 1| 2|
#| x| 10| 6|running| 0| 2|
#| y| 10| 0|running| 0| 0|
#| y| 15| 1|running| 0| 0|
#| y| 30| 2|running| 0| 0|
#| y| 15| 3|running| 0| 0|
#| y| 0| 4| stop| 1| 1|
#| y| 40| 5|running| 1| 2|
#| y| 10| 6|running| 0| 2|
#+---+-----+----+-------+-------------+------------+
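(The running sum works because a window with an orderBy and no explicit frame defaults to the range from the start of the partition up to the current row.)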
Now you can aggregate over the key and status_group. You can also include status in the groupBy since it will be the same for each status_group. Finally select only the columns you want in your output.
from pyspark.sql.functions import min as min_, max as max_
df_agg = df.groupBy("key", "status", "status_group")\
    .agg(
        min_("time").alias("start"),
        max_("time").alias("end"),
        max_("value").alias("max_value")
    )\
    .select("key", "start", "end", "status", "max_value")\
    .sort("key", "start")
df_agg.show()
#+---+-----+---+-------+---------+
#|key|start|end| status|max_value|
#+---+-----+---+-------+---------+
#| x| 0| 3|running| 30|
#| x| 4| 4| stop| 0|
#| x| 5| 6|running| 40|
#| y| 0| 3|running| 30|
#| y| 4| 4| stop| 0|
#| y| 5| 6|running| 40|
#+---+-----+---+-------+---------+
I have a dataframe (renderDF) like this:
+------+---+-------+
| uid|sid|renders|
+------+---+-------+
| david| 0| 0|
|rachel| 1| 0|
|rachel| 3| 0|
|rachel| 2| 0|
| pep| 2| 0|
| pep| 0| 1|
| pep| 1| 1|
|rachel| 0| 1|
| rick| 1| 1|
| ross| 0| 3|
| rick| 0| 3|
+------+---+-------+
I want to use a window function to achieve this result
+------+---+-------+-----------+
| uid|sid|renders|row_number |
+------+---+-------+-----------+
| david| 0| 0| 1 |
|rachel| 1| 0| 2 |
|rachel| 3| 0| 3 |
|rachel| 2| 0| 4 |
| pep| 2| 0| 5 |
| pep| 0| 1| 6 |
| pep| 1| 1| 7 |
|rachel| 0| 1| 8 |
| rick| 1| 1| 9 |
| ross| 0| 3| 10 |
| rick| 0| 3| 11 |
+------+---+-------+-----------+
I tried:
val windowRender = Window.partitionBy('sid).orderBy('Renders)
renderDF.withColumn("row_number", row_number() over windowRender)
But it doesn't do what I need.
Is the partition my problem?
try this:
val dfWithRownumber = renderDF.withColumn(
  "row_number",
  row_number().over(Window.partitionBy(lit(1)).orderBy("renders"))
)
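Note that partitioning the window by a constant (lit(1)) forces all rows into a single partition when computing the global row numbers, so this approach is only practical for reasonably small DataFrames.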