I have a dataframe as below:
RankNumber  Value  DeptNumber
5 200 5
4 200 5
3 205 5
2 198 5
1 197 5
5 200 6
4 202 6
3 205 6
2 198 6
1 194 6
I would like to update some of the cells in the Value column of the dataframe. If the current Value is greater than the previous value, it should be updated to the previous value. If the Value is the same as or less than the previous value, it should be left unchanged. The data is grouped by dept number.
I am trying to do this in PySpark, but can't find a way to accomplish it. Can someone please help?
The expected results are as below:
RankNumber  Value  DeptNumber
5 200 5
4 200 5
3 200 5 (record updated)
2 198 5
1 197 5
5 200 6
4 200 6 (record updated)
3 200 6 (record updated)
2 198 6
1 194 6
I believe your 8th row would get updated as '3 202 6 (record updated)' instead of '3 200 6 (record updated)', since its previous value was '202' and the current value '205' is greater than the previous '202'.
from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.functions import desc, when

w = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber"))
df = df.withColumn('previous_value', F.coalesce(F.lag(df['Value'], 1).over(w), df['Value']))
The code below keeps the previous value whenever Value is greater than the previous value.
newdf = df.select(df.RankNumber, df.DeptNumber, df.Value, df.previous_value,
                  when(df.Value <= df.previous_value, df.Value)
                  .otherwise(df.previous_value).alias('newValue'))
>>> newdf.show()
+----------+----------+-----+--------------+--------+
|RankNumber|DeptNumber|Value|previous_value|newValue|
+----------+----------+-----+--------------+--------+
| 5| 6| 200| 200| 200|
| 4| 6| 202| 200| 200|
| 3| 6| 205| 202| 202|
| 2| 6| 198| 205| 198|
| 1| 6| 194| 198| 194|
| 5| 5| 200| 200| 200|
| 4| 5| 200| 200| 200|
| 3| 5| 205| 200| 200|
| 2| 5| 198| 205| 198|
| 1| 5| 197| 198| 197|
+----------+----------+-----+--------------+--------+
The code below takes the minimum of the previous values as the new value.
from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.functions import desc, when

w = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber"))
df = df.withColumn('previous_value', F.coalesce(F.lag(df['Value'], 1).over(w), df['Value']))
newdf = df.select(df.RankNumber, df.DeptNumber, df.Value, df.previous_value,
                  when(df.Value <= df.previous_value, df.Value)
                  .when(F.lag(df['previous_value'], 1).over(w) <= df.previous_value,
                        F.first(df.previous_value).over(w))
                  .otherwise(df.previous_value).alias('newValue'))
>>> newdf.show()
+----------+----------+-----+--------------+--------+
|RankNumber|DeptNumber|Value|previous_value|newValue|
+----------+----------+-----+--------------+--------+
| 5| 6| 200| 200| 200|
| 4| 6| 202| 200| 200|
| 3| 6| 205| 202| 200|
| 2| 6| 198| 205| 198|
| 1| 6| 194| 198| 194|
| 5| 5| 200| 200| 200|
| 4| 5| 200| 200| 200|
| 3| 5| 205| 200| 200|
| 2| 5| 198| 205| 198|
| 1| 5| 197| 198| 197|
+----------+----------+-----+--------------+--------+
If you are looking for the lowest value that is just above the previous value of that group, then you need to change the code like this:
newdf = df.select(df.RankNumber, df.DeptNumber, df.Value, df.previous_value,
                  when(df.Value <= df.previous_value, df.Value)
                  .when(F.lag(df['previous_value'], 1).over(w) <= df.previous_value,
                        F.lag(df['previous_value'], 1).over(w))
                  .otherwise(df.previous_value).alias('newValue'))
This results in:
>>> newdf.show()
+----------+----------+-----+--------------+--------+
|RankNumber|DeptNumber|Value|previous_value|newValue|
+----------+----------+-----+--------------+--------+
| 5| Dept2| 100| 100| 100|
| 4| Dept2| 102| 100| 100|
| 3| Dept2| 105| 102| 100|
| 2| Dept2| 198| 105| 102|
| 1| Dept2| 194| 198| 194|
| 5| Dept1| 200| 200| 200|
| 4| Dept1| 202| 200| 200|
| 3| Dept1| 205| 202| 200|
| 2| Dept1| 198| 205| 198|
| 1| Dept1| 194| 198| 194|
+----------+----------+-----+--------------+--------+
Update:
Now creating a new dataframe as mentioned in the comment section below:
listOfTuples = [(5, 200, "Dept1"), (4, 202, "Dept1"), (3, 205, "Dept1"), (2, 198, "Dept1"), (1, 194, "Dept1"),
                (5, 100, "Dept2"), (4, 102, "Dept2"), (3, 105, "Dept2"), (2, 198, "Dept2"), (1, 194, "Dept2")]
df = spark.createDataFrame(listOfTuples, ["RankNumber", "Value", "DeptNumber"])
>>> df.show()
+----------+-----+----------+
|RankNumber|Value|DeptNumber|
+----------+-----+----------+
| 5| 200| Dept1|
| 4| 202| Dept1|
| 3| 205| Dept1|
| 2| 198| Dept1|
| 1| 194| Dept1|
| 5| 100| Dept2|
| 4| 102| Dept2|
| 3| 105| Dept2|
| 2| 198| Dept2|
| 1| 194| Dept2|
+----------+-----+----------+
I believe your intention is to look at the range between the current row and the preceding rows and pick the lowest value when the first condition is satisfied, i.e. the value is greater than the previous value.
w1 = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber"))
w2 = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber")).rowsBetween(Window.unboundedPreceding, Window.currentRow)
df = df.withColumn('previous_value', F.coalesce(F.lag(df['Value'], 1).over(w1), df['Value']))
Here's the code:
newdf = df.select(df.RankNumber, df.DeptNumber, df.Value, df.previous_value,
                  when(df.Value <= df.previous_value, df.Value)
                  .otherwise(F.min(df.previous_value).over(w2)).alias('newValue'))
>>> newdf.show()
+----------+----------+-----+--------------+--------+
|RankNumber|DeptNumber|Value|previous_value|newValue|
+----------+----------+-----+--------------+--------+
| 5| Dept2| 100| 100| 100|
| 4| Dept2| 102| 100| 100|
| 3| Dept2| 105| 102| 100|
| 2| Dept2| 198| 105| 100|
| 1| Dept2| 194| 198| 194|
| 5| Dept1| 200| 200| 200|
| 4| Dept1| 202| 200| 200|
| 3| Dept1| 205| 202| 200|
| 2| Dept1| 198| 205| 198|
| 1| Dept1| 194| 198| 194|
+----------+----------+-----+--------------+--------+
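As a side note: if the comparison is meant to be against the already adjusted previous value (which is what the expected output in the question implies), the whole rule reduces to a running minimum of Value within each DeptNumber group. A minimal sketch, reusing the column names above (w3 is just a new window name to avoid clashing with the ones already defined):

from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.functions import desc

# Running minimum of Value from the start of each DeptNumber group
# (highest RankNumber first) down to the current row.
w3 = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber")) \
           .rowsBetween(Window.unboundedPreceding, Window.currentRow)
newdf = df.withColumn("newValue", F.min("Value").over(w3))

For the sample data in the question this gives 200, 200, 200, 198, 197 for dept 5 and 200, 200, 200, 198, 194 for dept 6, which matches the expected results table.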
Related
I have a df that looks like this:
+-----+-------+-----+
|docId|vocabId|count|
+-----+-------+-----+
| 3| 3| 600|
| 2| 3| 702|
| 1| 2| 120|
| 2| 5| 200|
| 2| 2| 500|
| 3| 1| 100|
| 3| 5| 2000|
| 3| 4| 122|
| 1| 3| 1200|
| 1| 1| 1000|
+-----+-------+-----+
I want to output the max count for each vocabId and the docId it belongs to. I did this:
val wordCounts = docwords.groupBy("vocabId").agg(max($"count") as ("count"))
and got this:
+-------+----------+
|vocabId| count |
+-------+----------+
| 1| 1000|
| 3| 1200|
| 5| 2000|
| 4| 122|
| 2| 500|
+-------+----------+
How do I add the docId at the front?
It should look something like this (the order is not important):
+-----+-------+-----+
|docId|vocabId|count|
+-----+-------+-----+
| 2| 2| 500|
| 3| 5| 2000|
| 3| 4| 122|
| 1| 3| 1200|
| 1| 1| 1000|
+-----+-------+-----+
You can do a self join with docwords on count and vocabId, something like below:
val wordCounts = docwords
  .groupBy("vocabId")
  .agg(max($"count").as("count"))
  .join(docwords, Seq("vocabId", "count"))
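Note that if two docIds share the same max count for a vocabId, the self join keeps all of them. If you only want a single row per vocabId, a window function is an alternative; here is a rough PySpark sketch of the same idea, assuming a DataFrame with the same column names:

from pyspark.sql import Window
import pyspark.sql.functions as F

w = Window.partitionBy("vocabId").orderBy(F.col("count").desc())
wordCounts = (docwords
              .withColumn("rn", F.row_number().over(w))   # rank rows by count within each vocabId
              .where(F.col("rn") == 1)                     # keep one top-count row per vocabId
              .drop("rn"))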
I have a data frame like this. How can I take the sum of the column Sales where the Rank is greater than 3, per 'M'?
+---+-----+----+
| M|Sales|Rank|
+---+-----+----+
| M1| 200| 1|
| M1| 175| 2|
| M1| 150| 3|
| M1| 125| 4|
| M1| 90| 5|
| M1| 85| 6|
| M2| 1001| 1|
| M2| 500| 2|
| M2| 456| 3|
| M2| 345| 4|
| M2| 231| 5|
| M2| 123| 6|
+---+-----+----+
Expected Output --
+---+-----+----+---------------+
| M|Sales|Rank|SumGreaterThan3|
+---+-----+----+---------------+
| M1| 200| 1| 300|
| M1| 175| 2| 300|
| M1| 150| 3| 300|
| M1| 125| 4| 300|
| M1| 90| 5| 300|
| M1| 85| 6| 300|
| M2| 1001| 1| 699|
| M2| 500| 2| 699|
| M2| 456| 3| 699|
| M2| 345| 4| 699|
| M2| 231| 5| 699|
| M2| 123| 6| 699|
+---+-----+----+---------------+
I have done the sum over a window like this:
df.withColumn("SumGreaterThan3", sum("Sales").over(Window.partitionBy(col("M")))) // But this gives the total sum of Sales.
To replicate the same DF:
val df = Seq(
("M1",200,1),
("M1",175,2),
("M1",150,3),
("M1",125,4),
("M1",90,5),
("M1",85,6),
("M2",1001,1),
("M2",500,2),
("M2",456,3),
("M2",345,4),
("M2",231,5),
("M2",123,6)
).toDF("M","Sales","Rank")
Well, the partition alone is enough to define the window. Of course, you also have to use conditional summation by mixing sum and when.
import org.apache.spark.sql.expressions.Window
val w = Window.partitionBy("M")
df.withColumn("SumGreaterThan3", sum(when('Rank > 3, 'Sales).otherwise(0)).over(w)).show
This will give you the expected results.
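In case it helps, a PySpark sketch of the same conditional sum, assuming the same column names:

from pyspark.sql import Window
import pyspark.sql.functions as F

w = Window.partitionBy("M")
df = df.withColumn(
    "SumGreaterThan3",
    F.sum(F.when(F.col("Rank") > 3, F.col("Sales")).otherwise(0)).over(w)
)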
I will present my problem using the initial dataframe and the one I want to achieve:
val df_997 = Seq[(Int, Int, Int, Int)](
  (1,1,7,10), (1,10,4,300), (1,3,14,50), (1,20,24,70), (1,30,12,90),
  (2,10,4,900), (2,25,30,40), (2,15,21,60), (2,5,10,80)
).toDF("policyId","FECMVTO","aux","IND_DEF").orderBy(asc("policyId"), asc("FECMVTO"))
df_997.show
+--------+-------+---+-------+
|policyId|FECMVTO|aux|IND_DEF|
+--------+-------+---+-------+
| 1| 1| 7| 10|
| 1| 3| 14| 50|
| 1| 10| 4| 300|
| 1| 20| 24| 70|
| 1| 30| 12| 90|
| 2| 5| 10| 80|
| 2| 10| 4| 900|
| 2| 15| 21| 60|
| 2| 25| 30| 40|
+--------+-------+---+-------+
Imagine I have partitioned this DF by the column policyId and created the column row_num based on it to better see the Windows:
val win = Window.partitionBy("policyId").orderBy("FECMVTO")
val df_998 = df_997.withColumn("row_num",row_number().over(win))
df_998.show
+--------+-------+---+-------+-------+
|policyId|FECMVTO|aux|IND_DEF|row_num|
+--------+-------+---+-------+-------+
| 1| 1| 7| 10| 1|
| 1| 3| 14| 50| 2|
| 1| 10| 4| 300| 3|
| 1| 20| 24| 70| 4|
| 1| 30| 12| 90| 5|
| 2| 5| 10| 80| 1|
| 2| 10| 4| 900| 2|
| 2| 15| 21| 60| 3|
| 2| 25| 30| 40| 4|
+--------+-------+---+-------+-------+
Now, for each window, if the value of aux is 4, I want to set the FECMVTO column for that record to the value of IND_DEF for that record, and carry that value on until the end of the window.
The resulting DF would be:
+--------+-------+---+-------+-------+
|policyId|FECMVTO|aux|IND_DEF|row_num|
+--------+-------+---+-------+-------+
| 1| 1| 7| 10| 1|
| 1| 3| 14| 50| 2|
| 1| 300| 4| 300| 3|
| 1| 300| 24| 70| 4|
| 1| 300| 12| 90| 5|
| 2| 5| 10| 80| 1|
| 2| 900| 4| 900| 2|
| 2| 900| 21| 60| 3|
| 2| 900| 30| 40| 4|
+--------+-------+---+-------+-------+
Thanks for your suggestions, as I am very stuck here...
Here's one approach: first left-join the DataFrame with its aux == 4 filtered version, then apply the window function first to fill the nulls with the wanted IND_DEF values per partition, and finally conditionally recreate column FECMVTO:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._
val df = Seq(
(1,1,7,10), (1,10,4,300), (1,3,14,50), (1,20,24,70), (1,30,12,90),
(2,10,4,900), (2,25,30,40), (2,15,21,60), (2,5,10,80)
).toDF("policyId","FECMVTO","aux","IND_DEF")
val win = Window.partitionBy("policyId").orderBy("FECMVTO").
rowsBetween(Window.unboundedPreceding, 0)
val df2 = df.
select($"policyId", $"aux", $"IND_DEF".as("IND_DEF2")).
where($"aux" === 4)
df.join(df2, Seq("policyId", "aux"), "left_outer").
withColumn("IND_DEF3", first($"IND_DEF2", ignoreNulls=true).over(win)).
withColumn("FECMVTO", coalesce($"IND_DEF3", $"FECMVTO")).
show
// +--------+---+-------+-------+--------+--------+
// |policyId|aux|FECMVTO|IND_DEF|IND_DEF2|IND_DEF3|
// +--------+---+-------+-------+--------+--------+
// | 1| 7| 1| 10| null| null|
// | 1| 14| 3| 50| null| null|
// | 1| 4| 300| 300| 300| 300|
// | 1| 24| 300| 70| null| 300|
// | 1| 12| 300| 90| null| 300|
// | 2| 10| 5| 80| null| null|
// | 2| 4| 900| 900| 900| 900|
// | 2| 21| 900| 60| null| 900|
// | 2| 30| 900| 40| null| 900|
// +--------+---+-------+-------+--------+--------+
Columns IND_DEF2, IND_DEF3 are kept only for illustration (and can certainly be dropped).
I believe the below can be a solution for your issue.
Considering input_df is your input dataframe:
//Step#1 - Filter rows with aux = 4 from input_df
val only_FECMVTO_4_df1 = input_df.filter($"aux" === 4)

//Step#2 - Fill FECMVTO with the IND_DEF value for the above result
val only_FECMVTO_4_df2 = only_FECMVTO_4_df1.withColumn("FECMVTO", $"IND_DEF")

//Step#3 - Remove all the records from Step#1 from input_df
val input_df_without_FECMVTO_4 = input_df.except(only_FECMVTO_4_df1)

//Combine the Step#2 output with the output of Step#3
val final_df = input_df_without_FECMVTO_4.union(only_FECMVTO_4_df2)
I have a dataframe that looks like
key | value | time | status
x | 10 | 0 | running
x | 15 | 1 | running
x | 30 | 2 | running
x | 15 | 3 | running
x | 0 | 4 | stop
x | 40 | 5 | running
x | 10 | 6 | running
y | 10 | 0 | running
y | 15 | 1 | running
y | 30 | 2 | running
y | 15 | 3 | running
y | 0 | 4 | stop
y | 40 | 5 | running
y | 10 | 6 | running
...
I want to end up with a table that looks like
key | start | end | status | max value
x | 0 | 3 | running| 30
x | 4 | 4 | stop | 0
x | 5 | 6 | running| 40
y | 0 | 3 | running| 30
y | 4 | 4 | stop | 0
y | 5 | 6 | running| 40
...
In other words, I want to split by key, sort by time, break the data into windows that have the same status, keep the first and last time, and do a calculation over that window, i.e. the max of value.
Ideally using PySpark.
Here is one approach you can take.
First create a column to determine if the status has changed for a given key:
from pyspark.sql.functions import col, lag
from pyspark.sql import Window
w = Window.partitionBy("key").orderBy("time")
df = df.withColumn(
    "status_change",
    (col("status") != lag("status").over(w)).cast("int")
)
df.show()
#+---+-----+----+-------+-------------+
#|key|value|time| status|status_change|
#+---+-----+----+-------+-------------+
#| x| 10| 0|running| null|
#| x| 15| 1|running| 0|
#| x| 30| 2|running| 0|
#| x| 15| 3|running| 0|
#| x| 0| 4| stop| 1|
#| x| 40| 5|running| 1|
#| x| 10| 6|running| 0|
#| y| 10| 0|running| null|
#| y| 15| 1|running| 0|
#| y| 30| 2|running| 0|
#| y| 15| 3|running| 0|
#| y| 0| 4| stop| 1|
#| y| 40| 5|running| 1|
#| y| 10| 6|running| 0|
#+---+-----+----+-------+-------------+
Next fill the nulls with 0 and take the cumulative sum of the status_change column, per key:
from pyspark.sql.functions import sum as sum_ # avoid shadowing builtin
df = df.fillna(0).withColumn(
    "status_group",
    sum_("status_change").over(w)
)
df.show()
#+---+-----+----+-------+-------------+------------+
#|key|value|time| status|status_change|status_group|
#+---+-----+----+-------+-------------+------------+
#| x| 10| 0|running| 0| 0|
#| x| 15| 1|running| 0| 0|
#| x| 30| 2|running| 0| 0|
#| x| 15| 3|running| 0| 0|
#| x| 0| 4| stop| 1| 1|
#| x| 40| 5|running| 1| 2|
#| x| 10| 6|running| 0| 2|
#| y| 10| 0|running| 0| 0|
#| y| 15| 1|running| 0| 0|
#| y| 30| 2|running| 0| 0|
#| y| 15| 3|running| 0| 0|
#| y| 0| 4| stop| 1| 1|
#| y| 40| 5|running| 1| 2|
#| y| 10| 6|running| 0| 2|
#+---+-----+----+-------+-------------+------------+
Now you can aggregate over the key and status_group. You can also include status in the groupBy since it will be the same for each status_group. Finally select only the columns you want in your output.
from pyspark.sql.functions import min as min_, max as max_
df_agg = df.groupBy("key", "status", "status_group")\
    .agg(
        min_("time").alias("start"),
        max_("time").alias("end"),
        max_("value").alias("max_value")
    )\
    .select("key", "start", "end", "status", "max_value")\
    .sort("key", "start")
df_agg.show()
#+---+-----+---+-------+---------+
#|key|start|end| status|max_value|
#+---+-----+---+-------+---------+
#| x| 0| 3|running| 30|
#| x| 4| 4| stop| 0|
#| x| 5| 6|running| 40|
#| y| 0| 3|running| 30|
#| y| 4| 4| stop| 0|
#| y| 5| 6|running| 40|
#+---+-----+---+-------+---------+
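Putting the three steps together, the whole pipeline can also be chained into a single expression (same logic as above, just condensed):

from pyspark.sql import Window
import pyspark.sql.functions as F

w = Window.partitionBy("key").orderBy("time")

df_agg = (
    df.withColumn("status_change",
                  (F.col("status") != F.lag("status").over(w)).cast("int"))
      .fillna(0, subset=["status_change"])
      .withColumn("status_group", F.sum("status_change").over(w))
      .groupBy("key", "status", "status_group")
      .agg(F.min("time").alias("start"),
           F.max("time").alias("end"),
           F.max("value").alias("max_value"))
      .select("key", "start", "end", "status", "max_value")
      .sort("key", "start")
)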
I have a DataFrame with a column "Speed". Can I efficiently add a column with, for each row, the number of rows in the DataFrame whose "Speed" is within +/-2 of that row's "Speed"?
results = spark.createDataFrame([[1], [2], [3], [4], [5],
                                 [4], [5], [4], [5], [6],
                                 [5], [6], [1], [3], [8],
                                 [2], [5], [6], [10], [12]],
                                ['Speed'])
results.show()
+-----+
|Speed|
+-----+
| 1|
| 2|
| 3|
| 4|
| 5|
| 4|
| 5|
| 4|
| 5|
| 6|
| 5|
| 6|
| 1|
| 3|
| 8|
| 2|
| 5|
| 6|
| 10|
| 12|
+-----+
You could use a window function:
from pyspark.sql import Window
import pyspark.sql.functions as F

# Order the window by Speed, and look at the range [0; +2]
w = Window.orderBy('Speed').rangeBetween(0, 2)

# Define a column counting the rows whose Speed falls in [Speed, Speed + 2]
results = results.withColumn('count+2', F.count('Speed').over(w)).orderBy('Speed')
results.show()
+-----+-------+
|Speed|count+2|
+-----+-------+
|    1|      6|
|    1|      6|
|    2|      7|
|    2|      7|
|    3|     10|
|    3|     10|
|    4|     11|
|    4|     11|
|    4|     11|
|    5|      8|
|    5|      8|
|    5|      8|
|    5|      8|
|    5|      8|
|    6|      4|
|    6|      4|
|    6|      4|
|    8|      2|
|   10|      2|
|   12|      1|
+-----+-------+
Note: the window function counts the studied row itself. You could correct this by subtracting 1 from the count:
results = results.withColumn('count+2',F.count('Speed').over(w)-1).orderBy('Speed')
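Also note that the window above only looks at the range [Speed; Speed + 2]. If "within +/-2" in the question really means both directions, the range can be widened accordingly; a sketch (the names w_pm2 and count_pm2 are just examples), again subtracting 1 to exclude the row itself:

w_pm2 = Window.orderBy('Speed').rangeBetween(-2, 2)
results = results.withColumn('count_pm2', F.count('Speed').over(w_pm2) - 1).orderBy('Speed')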