In PySpark, I am trying to use dense_rank() to group rows into the same group based on the userId and the time value.
Here is my initial dataframe:
+--------------------+--------------------+--------------------+
| userId| BeginTime| EndTime|
+--------------------+--------------------+--------------------+
| A|2021-02-09 15:56:...|2021-02-09 15:56:...|
| A|2021-02-09 15:57:...|2021-02-09 15:57:...|
| A|2021-02-09 15:58:...|2021-02-09 15:58:...|
| B|2021-02-05 13:16:...|2021-02-05 13:16:...|
| B|2021-02-05 13:16:...|2021-02-05 13:16:...|
| B|2021-02-05 18:27:...|2021-02-05 18:37:...|
+--------------------+--------------------+--------------------+
One row represents one action made by one user and gives the BeginTime and the EndTime of each action. I want to gather actions that were made in succession: if the duration between two actions is more than 1 hour, I consider that these two actions were not made in succession.
So here is what I expect :
+--------------------+--------------------+--------------------+---------+
| userId| BeginTime| EndTime| sequence|
+--------------------+--------------------+--------------------+---------+
| A|2021-02-09 15:56:...|2021-02-09 15:56:...| 1|
| A|2021-02-09 15:57:...|2021-02-09 15:57:...| 1|
| A|2021-02-09 15:58:...|2021-02-09 15:58:...| 1|
| B|2021-02-05 13:16:...|2021-02-05 13:16:...| 1|
| B|2021-02-05 13:16:...|2021-02-05 13:16:...| 1|
| B|2021-02-05 18:27:...|2021-02-05 18:37:...| 2|
+--------------------+--------------------+--------------------+---------+
I tried to use dense_rank() and rangeBetween in my Window like this:
from pyspark.sql import Window
from pyspark.sql.functions import col, dense_rank

w_rank = (Window
    .partitionBy("userId")
    .orderBy(col("BeginTime").cast("timestamp").cast("long"))
    .rangeBetween(0, 3600))
df = df.withColumn('sequence', dense_rank().over(w_rank))
But I get this error:
AnalysisException : Window Frame specifiedwindowframe(RangeFrame, currentrow$(), 3600) must match the required frame specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$());
I am quite new to PySpark, so if anyone could help me on this one I'll be very grateful. Thanks in advance!
So I managed to find something that works for my case; I post my answer here in case it helps someone:
import sys
from pyspark.sql import Window
from pyspark.sql.functions import col, lag, dense_rank, when, last

w = Window.partitionBy('userId').orderBy(col("BeginTime"))
df = (df.withColumn('duration_between_series', col('BeginTime').cast('long') - lag(col('EndTime').cast('long')).over(w))
    .withColumn('rank', dense_rank().over(w))
    .withColumn('sequence_temp', when(col('rank') == 1, 1).when(col('duration_between_series') > 3600, col('rank')).otherwise(None))
    .withColumn('sequence', last('sequence_temp', True).over(w.rowsBetween(-sys.maxsize, 0)))
    .drop('sequence_temp', 'duration_between_series'))
Output:
+--------------------+--------------------+--------------------+---------+---------+
| userId| BeginTime| EndTime| rank| sequence|
+--------------------+--------------------+--------------------+---------+---------+
| A|2021-02-09 15:56:...|2021-02-09 15:56:...| 1| 1|
| A|2021-02-09 15:57:...|2021-02-09 15:57:...| 2| 1|
| A|2021-02-09 15:58:...|2021-02-09 15:58:...| 3| 1|
| B|2021-02-05 13:16:...|2021-02-05 13:16:...| 1| 1|
| B|2021-02-05 13:16:...|2021-02-05 13:16:...| 2| 1|
| B|2021-02-05 18:27:...|2021-02-05 18:37:...| 3| 3|
+--------------------+--------------------+--------------------+---------+---------+
The sequence column is not exactly what I expected, but as long as I get a different value for each group, I am fine with that :)
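For reference, if strictly consecutive sequence numbers per user are wanted (as in the expected output above), a common alternative sketch, assuming the same df with BeginTime/EndTime columns and the same 1-hour threshold, is to flag the rows that start a new group and take a running sum of that flag:
from pyspark.sql import Window
from pyspark.sql.functions import col, lag, when, sum as spark_sum

w = Window.partitionBy('userId').orderBy('BeginTime')

# Flag rows whose gap to the previous action's EndTime exceeds 1 hour (or that have no previous row).
gap = col('BeginTime').cast('long') - lag(col('EndTime').cast('long')).over(w)
df2 = df.withColumn('new_group', when(gap.isNull() | (gap > 3600), 1).otherwise(0))

# A running sum of the flag yields consecutive group numbers 1, 2, 3, ... per user.
df2 = df2.withColumn('sequence', spark_sum('new_group').over(w)).drop('new_group')
With the sample data this gives sequence 1 for all of user A's rows and 1, 1, 2 for user B's rows.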
I have a dataset that has column userid and index values.
+---------+--------+
| userid | index|
+---------+--------+
| user1| 1|
| user2| 2|
| user3| 3|
| user4| 4|
| user5| 5|
| user6| 6|
| user7| 7|
| user8| 8|
| user9| 9|
| user10| 10|
+---------+--------+
I want to append a new dataframe to it and add an index to the newly added rows.
The userid is unique, and the existing dataframe will not contain the Dataframe 2 user ids.
+----------+
| userid |
+----------+
| user11|
| user21|
| user41|
| user51|
| user64|
+----------+
The expected output with the newly added userid and index values:
+---------+--------+
| userid | index|
+---------+--------+
| user1| 1|
| user2| 2|
| user3| 3|
| user4| 4|
| user5| 5|
| user6| 6|
| user7| 7|
| user8| 8|
| user9| 9|
| user10| 10|
| user11| 11|
| user21| 12|
| user41| 13|
| user51| 14|
| user64| 15|
+---------+--------+
Is it possible to achieve this by passing the max index value of the first dataframe and starting the index of the second dataframe from that value?
If the userid has some ordering, then you can use the row_number function. Even if it does not, you can add an id using monotonically_increasing_id(). For now I assume that userid can be ordered. Then you can do this:
from pyspark.sql import functions as F
from pyspark.sql.window import Window
df_merge = df1.select('userid').union(df2.select('userid'))
w = Window.orderBy('userid')
df_result = df_merge.withColumn('indexid', F.row_number().over(w))
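The monotonically_increasing_id() route mentioned above is not spelled out; a rough sketch of it could look like the following, with the caveat that the generated ids follow the physical row order of the union, which usually keeps the existing rows first but is not a strict guarantee:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Generate an ordering column without relying on userid being sortable.
df_merge = df1.select('userid').union(df2.select('userid')) \
    .withColumn('mono_id', F.monotonically_increasing_id())

# monotonically_increasing_id() is increasing but not consecutive, so convert it to a dense index.
w = Window.orderBy('mono_id')
df_result = df_merge.withColumn('indexid', F.row_number().over(w)).drop('mono_id')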
EDIT: After discussions in the comments.
#%% Test data and imports
import pyspark.sql.functions as F
from pyspark.sql import Window
df = sqlContext.createDataFrame([('a',100),('ab',50),('ba',300),('ced',60),('d',500)],schema=['userid','index'])
df1 = sqlContext.createDataFrame([('fgh',100),('ff',50),('fe',300),('er',60),('fi',500)],schema=['userid','dummy'])
#%%
#%% Merge the two dataframes, with a null column as the index
df1=df1.withColumn('index', F.lit(None))
df_merge = df.select(df.columns).union(df1.select(df.columns))
#%% Define a window that puts the newly added rows last and orders them by userid
#%% The user ids, even though random strings, can be ordered
w= Window.orderBy(F.col('index').asc_nulls_last(),F.col('userid'))# if possible add a partition column here, otherwise all your data will come in one partition, consider salting
#%% For the newly added rows, define index as the maximum value + increment of number of rows in main dataframe
df_final = df_merge.withColumn("index_new",F.when(~F.col('index').isNull(),F.col('index')).otherwise((F.last(F.col('index'),ignorenulls=True).over(w))+F.sum(F.lit(1)).over(w)))
#%% If number of rows in main dataframe is huge, then add an offset in the above line
df_final.show()
+------+-----+---------+
|userid|index|index_new|
+------+-----+---------+
| ab| 50| 50|
| ced| 60| 60|
| a| 100| 100|
| ba| 300| 300|
| d| 500| 500|
| er| null| 506|
| fe| null| 507|
| ff| null| 508|
| fgh| null| 509|
| fi| null| 510|
+------+-----+---------+
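As a side note, the offset-based approach the question originally asked about (take the max existing index and number the new rows after it) can be sketched roughly as follows, using the same df/df1 test data as above; this is a variant for illustration, not part of the answer above:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Highest index already used in the main dataframe.
max_index = df.agg(F.max('index')).collect()[0][0]

# Number the new rows starting right after max_index, then stack them under the original rows.
w_new = Window.orderBy('userid')  # no partition column: acceptable for a small batch of new rows
df1_indexed = df1.select('userid').withColumn('index', F.row_number().over(w_new) + F.lit(max_index))
df_final = df.select('userid', 'index').union(df1_indexed)
Unlike the window version above, the new rows here get max_index + 1, + 2, ... (501-505 for this test data) rather than values offset by the running row count.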
I'm comparing 2 dataframes.
I chose to compare them column by column.
I created 2 smaller dataframes from the parent dataframes, based on the join columns and the comparison columns.
Created 1st dataframe:
val df1_subset = df1.select(subset_cols.head, subset_cols.tail: _*)
+----------+---------+-------------+
|first_name|last_name|loyalty_score|
+----------+---------+-------------+
| tom | cruise| 66|
| blake | lively| 66|
| eva| green| 44|
| brad| pitt| 99|
| jason| momoa| 34|
| george | clooney| 67|
| ed| sheeran| 88|
| lionel| messi| 88|
| ryan| reynolds| 45|
| will | smith| 67|
| null| null| |
+----------+---------+-------------+
Created 2nd Dataframe:
val df1_1_subset = df1_1.select(subset_cols.head, subset_cols.tail: _*)
+----------+---------+-------------+
|first_name|last_name|loyalty_score|
+----------+---------+-------------+
| tom | cruise| 34|
| brad| pitt| 78|
| eva| green| 56|
| tom | cruise| 99|
| jason| momoa| 34|
| george | clooney| 67|
| george | clooney| 88|
| lionel| messi| 88|
| ryan| reynolds| 45|
| will | smith| 67|
| kyle| jenner| 56|
| celena| gomez| 2|
+----------+---------+-------------+
Then I joined the two subsets as a full outer join to get the following:
val df_subset_joined = df1_subset.join(df1_1_subset, joinColsArray, "full_outer")
Joined Subset
+----------+---------+-------------+-------------+
|first_name|last_name|loyalty_score|loyalty_score|
+----------+---------+-------------+-------------+
| will | smith| 67| 67|
| george | clooney| 67| 67|
| george | clooney| 67| 88|
| blake | lively| 66| null|
| celena| gomez| null| 2|
| eva| green| 44| 56|
| null| null| | null|
| jason| momoa| 34| 34|
| ed| sheeran| 88| null|
| lionel| messi| 88| 88|
| kyle| jenner| null| 56|
| tom | cruise| 66| 34|
| tom | cruise| 66| 99|
| brad| pitt| 99| 78|
| ryan| reynolds| 45| 45|
+----------+---------+-------------+-------------+
Then I tried to filter out the rows that have the same value in both comparison columns (loyalty_score in this example) by using column positions:
df_subset_joined.filter(_c2 != _c3).show
But that didn't work. I'm getting the following error:
Error:(174, 33) not found: value _c2
df_subset_joined.filter(_c2 != _c3).show
What is the most efficient way for me to get a joined dataframe where I only see the rows that do not match in the comparison columns?
I would like to keep this dynamic so hard coding column names is not an option.
Thank you for helping me understand this.
You need to work with aliases and make use of the null-safe comparison operator (https://spark.apache.org/docs/latest/api/sql/index.html#_9); see also https://stackoverflow.com/a/54067477/1138523
val df_subset_joined = df1_subset.as("a").join(df1_1_subset.as("b"), joinColsArray, "full_outer")
df_subset_joined.filter(!($"a.loyalty_score" <=> $"b.loyalty_score")).show
EDIT: for dynamic column names, you can use string interpolation
import org.apache.spark.sql.functions.col
val xxx : String = ???
df_subset_joined.filter(!(col(s"a.$xxx") <=> col(s"b.$xxx"))).show
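For readers doing the same comparison in PySpark rather than Scala, the null-safe comparison is exposed as Column.eqNullSafe; a rough PySpark sketch, assuming the same df1_subset, df1_1_subset and joinColsArray names, would be:
from pyspark.sql import functions as F

df_subset_joined = df1_subset.alias("a").join(df1_1_subset.alias("b"), joinColsArray, "full_outer")
xxx = "loyalty_score"  # in practice this column name is determined dynamically
df_diff = df_subset_joined.filter(~F.col(f"a.{xxx}").eqNullSafe(F.col(f"b.{xxx}")))
eqNullSafe returns true when both sides are null, so negating it keeps only rows whose comparison values genuinely differ.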
I have a simple dataset as shown under.
| id| name| country| languages|
|1 | Bob| USA| Spanish|
|2 | Angelina| France| null|
|3 | Carl| Brazil| null|
|4 | John| Australia| English|
|5 | Anne| Nepal| null|
I am trying to impute the null values in languages with the last non-null value, using pyspark.sql.Window to create a window over certain rows, but nothing is happening. The column which is supposed to have its null values filled, temp_filled_spark, remains unchanged, i.e. it is a copy of the original languages column.
import sys
from pyspark.sql import Window
from pyspark.sql.functions import last
window = Window.partitionBy('name').orderBy('country').rowsBetween(-sys.maxsize, 0)
filled_column = last(df['languages'], ignorenulls=True).over(window)
df = df.withColumn('temp_filled_spark', filled_column)
df.orderBy('name', 'country').show(100)
I expect the output column to be:
|temp_filled_spark|
| Spanish|
| Spanish|
| Spanish|
| English|
| English|
Could anybody help pointing out the mistake?
Your window partitions by name; since each name appears only once, every partition contains a single row and last() has nothing earlier to fall back on, so the column comes back unchanged. Instead, we can create a window that treats the entire dataframe as one partition, as follows:
import sys
from pyspark.sql import Window
from pyspark.sql import functions as F
>>> df1.show()
+---+--------+---------+---------+
| id| name| country|languages|
+---+--------+---------+---------+
| 1| Bob| USA| Spanish|
| 2|Angelina| France| null|
| 3| Carl| Brazil| null|
| 4| John|Australia| English|
| 5| Anne| Nepal| null|
+---+--------+---------+---------+
>>> w = Window.partitionBy(F.lit(1)).orderBy(F.lit(1)).rowsBetween(-sys.maxsize, 0)
>>> df1.select("*",F.last('languages',True).over(w).alias('newcol')).show()
+---+--------+---------+---------+-------+
| id| name| country|languages| newcol|
+---+--------+---------+---------+-------+
| 1| Bob| USA| Spanish|Spanish|
| 2|Angelina| France| null|Spanish|
| 3| Carl| Brazil| null|Spanish|
| 4| John|Australia| English|English|
| 5| Anne| Nepal| null|English|
+---+--------+---------+---------+-------+
Hope this helps!
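For reference, a small variant of the same fill-forward that orders by id (as in the question's expected output) and uses Window.unboundedPreceding instead of -sys.maxsize; note that a window without a real partition column still pulls all rows into a single partition:
from pyspark.sql import Window
from pyspark.sql import functions as F

# Carry the last non-null language forward, ordering by id over all preceding rows.
w = Window.orderBy('id').rowsBetween(Window.unboundedPreceding, Window.currentRow)
df_filled = df1.withColumn('temp_filled_spark', F.last('languages', ignorenulls=True).over(w))
With the sample data this gives Spanish for ids 1-3 and English for ids 4-5, matching the expected column.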
I have the below table:
+-------+---------+---------+
|movieId|movieName| genre|
+-------+---------+---------+
| 1| example1| action|
| 1| example1| thriller|
| 1| example1| romance|
| 2| example2|fantastic|
| 2| example2| action|
+-------+---------+---------+
What I am trying to achieve is to concatenate the genre values together where the id and name are the same, like this:
+-------+---------+---------------------------+
|movieId|movieName| genre |
+-------+---------+---------------------------+
| 1| example1| action|thriller|romance |
| 2| example2| action|fantastic |
+-------+---------+---------------------------+
Use groupBy and collect_list to get a list of all genres with the same movie name. Then combine these into a string using concat_ws (if the order is important, first use sort_array). Small example with the given sample dataframe:
val df2 = df.groupBy("movieId", "movieName")
.agg(collect_list($"genre").as("genre"))
.withColumn("genre", concat_ws("|", sort_array($"genre")))
Gives the result:
+-------+---------+-----------------------+
|movieId|movieName|genre |
+-------+---------+-----------------------+
|1 |example1 |action|thriller|romance|
|2 |example2 |action|fantastic |
+-------+---------+-----------------------+
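For anyone doing the same in PySpark rather than Scala, a rough equivalent sketch (assuming the same df) is:
from pyspark.sql import functions as F

# Collect all genres per movie, sort them, and join them with "|".
df2 = (df.groupBy("movieId", "movieName")
       .agg(F.collect_list("genre").alias("genre"))
       .withColumn("genre", F.concat_ws("|", F.sort_array("genre"))))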
I would like to create a new column with the value from the previous date (the nearest date less than the current date) for each group of ids in the below dataframe:
+---+----------+-----+
| id| date|value|
+---+----------+-----+
| a|2015-04-11| 300|
| a|2015-04-12| 400|
| a|2015-04-12| 200|
| a|2015-04-12| 100|
| a|2015-04-11| 700|
| b|2015-04-02| 100|
| b|2015-04-12| 100|
| c|2015-04-12| 400|
+---+----------+-----+
I have tried with the lead window function.
val df1=Seq(("a","2015-04-11",300),("a","2015-04-12",400),("a","2015-04-12",200),("a","2015-04-12",100),("a","2015-04-11",700),("b","2015-04-02",100),("b","2015-04-12",100),("c","2015-04-12",400)).toDF("id","date","value")
var w1 = Window.partitionBy("id").orderBy($"date".desc)
var leadc1=lead(df1("value"),1).over(w1)
val df2=df1.withColumn("nvalue",leadc1)
+---+----------+-----+------+
| id| date|value|nvalue|
+---+----------+-----+------+
| a|2015-04-12| 400| 200|
| a|2015-04-12| 200| 100|
| a|2015-04-12| 100| 300|
| a|2015-04-11| 300| 700|
| a|2015-04-11| 700| null|
| b|2015-04-12| 100| 100|
| b|2015-04-02| 100| null|
| c|2015-04-12| 400| null|
+---+----------+-----+------+
But as we can see, when I have the same date within id "a" I am getting the wrong result. The result should be like:
+---+----------+-----+------+
| id| date|value|nvalue|
+---+----------+-----+------+
| a|2015-04-12| 400| 300|
| a|2015-04-12| 200| 300|
| a|2015-04-12| 100| 300|
| a|2015-04-11| 300| null|
| a|2015-04-11| 700| null|
| b|2015-04-12| 100| 100|
| b|2015-04-02| 100| null|
| c|2015-04-12| 400| null|
+---+----------+-----+------+
I already have a solution using a join, but I am looking for a solution using window functions.
Thanks
The issue is that you have multiple rows with the same date. lead takes the value from the next row in the result set, not the next date. So when you sort the rows by date in descending order, the next row can have the same date.
How do you identify the correct value to use for a particular date? For example, why do you take 300 from (id=a, date=2015-04-11) and not 700?
To do this with window functions you may need to do multiple passes: the sketch below takes the last nvalue and applies it to all rows in the same id/date grouping, but I'm not sure how your rows are initially ordered.
val df1 = Seq(("a","2015-04-11",300),("a","2015-04-12",400),("a","2015-04-12",200),("a","2015-04-12",100),("a","2015-04-11",700),("b","2015-04-02",100),("b","2015-04-12",100),("c","2015-04-12",400)).toDF("id","date","value")
var w1 = Window.partitionBy("id").orderBy($"date".desc)
var leadc1 = lead(df1("value"),1).over(w1)
val df2 = df1.withColumn("nvalue",leadc1)
// span the whole id/date group so the last nvalue is applied to every row in it
val w2 = Window.partitionBy("id", "date").orderBy("??? some way to distinguish row ordering")
  .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
val df3 = df2.withColumn("nvalue2", last("nvalue").over(w2))