I have the following data:
+-----+----+-----+
|event|t |type |
+-----+----+-----+
| A |20 | 1 |
| A |40 | 1 |
| B |10 | 1 |
| B |20 | 1 |
| B |120 | 1 |
| B |140 | 1 |
| B |320 | 1 |
| B |340 | 1 |
| B |360 | 7 |
| B |380 | 1 |
+-----+----+-----+
And what I want is something like this:
+-----+----+----+
|event|t |grp |
+-----+----+----+
| A |20 |1 |
| A |40 |1 |
| B |10 |2 |
| B |20 |2 |
| B |120 |3 |
| B |140 |3 |
| B |320 |4 |
| B |340 |4 |
| B |380 |5 |
+-----+----+----+
Rules:
Group together all values (column t) that belong to the same event and are less than 50 ms apart; values at least 50 ms apart go into separate groups.
When a row of type 7 appears, cut there as well and remove that row (see the type 7 row near the end of the input).
I can achieve the first rule with the answer from this thread:
Code:
val windowSpec = Window.partitionBy("event").orderBy("t")
val newSession = (coalesce(
  ($"t" - lag($"t", 1).over(windowSpec)),
  lit(0)
) > 50).cast("bigint")
val sessionized = df.withColumn("session", sum(newSession).over(windowSpec))
I have to say I can't figure out how it works, and I don't know how to modify it so that rule 2 also works...
I hope someone can give me some useful hints.
What I tried:
val newSession = (coalesce(
($"t" - lag($"t", 1).over(windowSpec)),
lit(0)
) > 50 || lead($"type",1).over(windowSpec) =!= 7 ).cast("bigint")
But I only get an error: "Must follow method; cannot follow org.apache.spark.sql.Column" at val grp = (coalesce(
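For reference, here is a minimal sketch (not from the original post) that builds the sample DataFrame, assuming the column names shown above and a SparkSession named spark:

// Hypothetical data preparation for the example above.
import spark.implicits._

val df = Seq(
  ("A", 20, 1), ("A", 40, 1),
  ("B", 10, 1), ("B", 20, 1), ("B", 120, 1), ("B", 140, 1),
  ("B", 320, 1), ("B", 340, 1), ("B", 360, 7), ("B", 380, 1)
).toDF("event", "t", "type")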
This should do the trick:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val win = Window.partitionBy("event").orderBy("t")

// Flag the rows that start a new group: either the gap to the previous t
// within the same event is more than 50, or the row itself is of type 7.
val newSession = (coalesce(
  $"t" - lag($"t", 1).over(win),
  lit(0)
) > 50 or $"type" === 7) // also start a new group in this case
  .cast("bigint")

// A cumulative sum over the 0/1 flags turns them into a group id per event.
df.withColumn("session", sum(newSession).over(win))
  .where($"type" =!= 7) // remove these rows
  .orderBy($"event", $"t")
  .show
gives:
+-----+---+----+-------+
|event| t|type|session|
+-----+---+----+-------+
| A| 20| 1| 0|
| A| 40| 1| 0|
| B| 10| 1| 0|
| B| 20| 1| 0|
| B|120| 1| 1|
| B|140| 1| 1|
| B|320| 1| 2|
| B|340| 1| 2|
| B|380| 1| 3|
+-----+---+----+-------+
I have one dataframe with games and three ratings for every game from different reviews; every rating is translated in another dataframe, as you can see:
Df_reviews
+--------+-------+-------+--------+
|Game | rev_1 | rev_2 | rev_3 |
+--------+-------+-------+--------+
|CA |XX+ | K2 | L1 |
|FT |Z- | K1+ | L3 |
Df_rev1
+----------+-------------+
| review_1 | Equivalence |
+----------+-------------+
|XX+ | 9 |
|Y | 6 |
|Z- | 3 |
Df_rev2
+----------+-------------+
| review_2 | Equivalence |
+----------+-------------+
|K2 | 7 |
|K1+ | 6 |
|K3 | 10 |
Df_rev3
+----------+-------------+
| review_3 | Equivalence |
+----------+-------------+
|L3 | 10 |
|L2 | 9 |
|L1 | 8 |
I have to translate it into a new dataframe with the ratings converted, and add a column with the second best rating; for this example it would be:
Df_output
+--------+---------+---------+----------+-------------+
|Game | rev_1_t | rev_2_t | rev_3_t | second_best |
+--------+---------+---------+----------+-------------+
|CA | 9 | 7 | 8 | 8 |
|FT | 3 | 6 | 10 | 6 |
To translate it, I am trying a left join, but I am completely lost. How can I deal with this?
####### Second Part ######
How can I translate several columns of one dataframe using a single lookup dataframe, i.e. joining multiple columns against one? For example:
Df_reviews
+--------+-------+-------+--------+
|Game | rev_1 | rev_2 | rev_3 |
+--------+-------+-------+--------+
|CA |XX+ | K2 | L1 |
|FT |Z- | K1+ | L3 |
Df_equiv
+--------+-------+
|Valorat | num |
+--------+-------+
|X |3 |
|XX+ |5 |
|Z |7 |
|Z- |6 |
|K1+ |6 |
|K2 |4 |
|L1 |5 |
|L2 |6 |
|L3 |7 |
Output
+--------+-------+-------+--------+
|Game | rev_1 | rev_2 | rev_3 |
+--------+-------+-------+--------+
|CA |5 | 4 | 5 |
|FT |6 | 6 | 7 |
I am doing this as you can see:
val joined = df_reviews
.join(df_equiv, df_reviews("rev_1") === df_equiv("num") && df_reviews("rev_2") === df_equiv("num")
&& df_reviews("rev_3") === df_equiv("num"), "left")
.select(df_reviews("Game"),
df_equiv("num").as("rev_1_t"),
df_equiv("num").as("rev_2_t"),
df_equiv("num").as("rev_3_t")
)
Thanks in advance!
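For reference, here is a minimal sketch (not part of the original question) that builds the sample DataFrames used in the answer below; names and values are taken from the tables above, and a SparkSession named spark is assumed:

// Hypothetical data preparation for both parts of the question.
import spark.implicits._

val df_reviews = Seq(("CA", "XX+", "K2", "L1"), ("FT", "Z-", "K1+", "L3"))
  .toDF("Game", "rev_1", "rev_2", "rev_3")

val df_rev1 = Seq(("XX+", 9), ("Y", 6), ("Z-", 3)).toDF("review_1", "Equivalence")
val df_rev2 = Seq(("K2", 7), ("K1+", 6), ("K3", 10)).toDF("review_2", "Equivalence")
val df_rev3 = Seq(("L3", 10), ("L2", 9), ("L1", 8)).toDF("review_3", "Equivalence")

// Lookup table for the second part.
val df_equiv = Seq(("X", 3), ("XX+", 5), ("Z", 7), ("Z-", 6), ("K1+", 6),
  ("K2", 4), ("L1", 5), ("L2", 6), ("L3", 7)).toDF("Valorat", "num")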
You can do some left joins and get the second highest value using sort_array:
val joined = df_reviews
.join(df_rev1, df_reviews("rev_1") === df_rev1("review_1"), "left")
.join(df_rev2, df_reviews("rev_2") === df_rev2("review_2"), "left")
.join(df_rev3, df_reviews("rev_3") === df_rev3("review_3"), "left")
.select(df_reviews("Game"),
df_rev1("Equivalence").as("rev_1_t"),
df_rev2("Equivalence").as("rev_2_t"),
df_rev3("Equivalence").as("rev_3_t")
)
val result = joined.withColumn(
  "second_best",
  coalesce(
    // Sort the three translated ratings in descending order (nulls are placed
    // last) and take the second element.
    sort_array(
      array(col("rev_1_t").cast("int"), col("rev_2_t").cast("int"), col("rev_3_t").cast("int")),
      asc = false
    )(1),
    // Fallback when fewer than two ratings are present.
    greatest(col("rev_1_t").cast("int"), col("rev_2_t").cast("int"), col("rev_3_t").cast("int"))
  )
)
result.show
+----+-------+-------+-------+-----------+
|Game|rev_1_t|rev_2_t|rev_3_t|second_best|
+----+-------+-------+-------+-----------+
| CA| 9| 7| 8| 8|
| FT| 3| 6| 10| 6|
+----+-------+-------+-------+-----------+
For your second question:
val joined = df_reviews.as("r1")
.join(df_equiv.as("e1"), expr("r1.rev_1 = e1.Valorat"), "left")
.selectExpr("Game", "e1.num as rev_1", "rev_2", "rev_3")
.as("r2")
.join(df_equiv.as("e2"), expr("r2.rev_2 = e2.Valorat"), "left")
.selectExpr("Game", "rev_1", "e2.num as rev_2", "rev_3")
.as("r3")
.join(df_equiv.as("e3"), expr("r3.rev_3 = e3.Valorat"), "left")
.selectExpr("Game", "rev_1", "rev_2", "e3.num as rev_3")
joined.show
+----+-----+-----+-----+
|Game|rev_1|rev_2|rev_3|
+----+-----+-----+-----+
| CA| 5| 4| 5|
| FT| 6| 6| 7|
+----+-----+-----+-----+
I have two dataframes, and I want to add the rows of one of them to every id of the other one.
My dataframes are like:
id | name | rate
1 | a | 3
1 | b | 4
1 | c | 1
2 | a | 2
2 | d | 4
name
a
b
c
d
e
And I want a result like this:
id | name | rate
1 | a | 3
1 | b | 4
1 | c | 1
1 | d | null
1 | e | null
2 | a | 2
2 | b | null
2 | c | null
2 | d | 4
2 | e | null
How can I do this?
It seems it's more than a simple join.
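For reference, a minimal sketch (DataFrame names df1 and df2 assumed, matching the answer below; a SparkSession named spark is also assumed) that builds the two input DataFrames:

// Hypothetical data preparation for the two tables shown above.
import spark.implicits._

val df1 = Seq((1, "a", 3), (1, "b", 4), (1, "c", 1), (2, "a", 2), (2, "d", 4))
  .toDF("id", "name", "rate")
val df2 = Seq("a", "b", "c", "d", "e").toDF("name")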
// Cross join the distinct ids with every name, then left join the original
// rows back in to pick up the rate (null where the id/name pair is missing).
val df = df1.select("id").distinct().crossJoin(df2).join(
  df1,
  Seq("name", "id"),
  "left"
).orderBy("id", "name")
df.show
+----+---+----+
|name| id|rate|
+----+---+----+
| a| 1| 3|
| b| 1| 4|
| c| 1| 1|
| d| 1|null|
| e| 1|null|
| a| 2| 2|
| b| 2|null|
| c| 2|null|
| d| 2| 4|
| e| 2|null|
+----+---+----+
I created a dataframe in Spark with the following schema:
root
 |-- user_id: string (nullable = true)
 |-- rate: decimal(32,16) (nullable = true)
 |-- date: timestamp (nullable = true)
 |-- type: string (nullable = true)
The data in my dataframe looks like this:
+----------+----------+-------------+---------+
| user_id| rate |date | type |
+----------+----------+-------------+---------+
| XO_121 | 10 |2020-04-20 | A |
| XO_121 | 10 |2020-04-21 | A |
| XO_121 | 30 |2020-04-22 | A |
| XO_121 |0 |2020-04-23 | A |
| XO_121 |0 |2020-04-24 | A |
| XO_121 |0 |2020-04-25 | A |
| XO_121 |0 |2020-04-26 | A |
| XO_121 |5 |2020-04-27 | A |
| XO_121 |0 |2020-04-28 | A |
| XO_121 |0 |2020-04-29 | A |
| XO_121 |1 |2020-04-30 | A |
I want to save space, so I want to skip rows whose rate is zero, keeping only the first occurrence of each run of zeros. Other duplicate rates are allowed (see the case of 10), and the date order needs to be preserved. After applying the filter, my data should look like this:
+----------+----------+-------------+---------+
| user_id| rate |date | type |
+----------+----------+-------------+---------+
| XO_121 | 10 |2020-04-20 | A |
| XO_121 | 10 |2020-04-21 | A |
| XO_121 | 30 |2020-04-22 | A |
| XO_121 |0 |2020-04-23 | A |
| XO_121 |5 |2020-04-27 | A |
| XO_121 |0 |2020-04-28 | A |
| XO_121 |1 |2020-04-30 | A |
I'm new to Spark, so I just want to find a way to filter this. I tried the rank concept, but that doesn't work. Can anybody provide a solution to this problem?
Data Preparation:
val df = Seq(
  ("XO_121", "10", "2020-04-20"), ("XO_121", "10", "2020-04-21"), ("XO_121", "30", "2020-04-22"),
  ("XO_121", "0", "2020-04-23"), ("XO_121", "0", "2020-04-24"), ("XO_121", "0", "2020-04-25"),
  ("XO_121", "0", "2020-04-26"), ("XO_121", "5", "2020-04-27"), ("XO_121", "0", "2020-04-28"),
  ("XO_121", "0", "2020-04-29"), ("XO_121", "1", "2020-04-30")
).toDF("user_id", "rate", "date")
Get the previous value of rate with lag, and for each record check whether "rate" === "0" && "previous_rate" === "0"; such rows are filtered out:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val winSpec = Window.partitionBy("user_id").orderBy("date")
val finalDf = df.withColumn("previous_rate", lag("rate", 1).over(winSpec))
  .filter(!($"rate" === "0" && $"previous_rate" === "0"))
  .drop("previous_rate")
Output:
scala> finalDf.show
+-------+----+----------+
|user_id|rate| date|
+-------+----+----------+
| XO_121| 10|2020-04-20|
| XO_121| 10|2020-04-21|
| XO_121| 30|2020-04-22|
| XO_121| 0|2020-04-23|
| XO_121| 5|2020-04-27|
| XO_121| 0|2020-04-28|
| XO_121| 1|2020-04-30|
+-------+----+----------+
Now you can apply orderBy($"date") or orderBy($"user_id", $"date"), whichever is applicable for you.
You can use row_number() instead of rank, as below (PySpark):
from pyspark.sql import functions as F
from pyspark.sql.window import Window as W

_w = W.partitionBy("col2").orderBy("col1")
df = df.withColumn("rnk", F.row_number().over(_w))
df = df.filter(F.col("rnk") == F.lit("1"))
df.show()
+------+----+---+
| col1|col2|rnk|
+------+----+---+
|XO_121| 0| 1|
|XO_121| 10| 1|
|XO_121| 30| 1|
|XO_121| 20| 1|
|XO_121| 40| 1|
+------+----+---+
Also, you can use first() in case you know that only the value 0 is repeated:
df = df.groupBy("col1","col2").agg(F.first("col2").alias("col2")).orderBy("col2")
df.show()
+------+----+----+
| col1|col2|col2|
+------+----+----+
|XO_121| 0| 0|
|XO_121| 10| 10|
|XO_121| 20| 20|
|XO_121| 30| 30|
|XO_121| 50| 50|
+------+----+----+
I am trying to find, for each id, the unique row that has the maximum total length of its values in a Spark dataframe. Every column holds a string.
The dataframe is like:
+---+----+----+----+----+
| id|   A|   B|   C|   D|
+---+----+----+----+----+
|  1|toto|tata|titi|    |
|  1|toto|tata|titi|tutu|
|  2| bla| blo|    |    |
|  3|   b|   c|    |   d|
|  3|   b|   c|   a|   d|
+---+----+----+----+----+
The expectation is:
+---+----+----+----+----+
| id|   A|   B|   C|   D|
+---+----+----+----+----+
|  1|toto|tata|titi|tutu|
|  2| bla| blo|    |    |
|  3|   b|   c|   a|   d|
+---+----+----+----+----+
I can't figure out how to do this easily in Spark...
Thanks in advance
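For reference, a minimal sketch (not part of the original question) that builds the sample data; the empty cells are assumed to be empty strings, the variable is named input to match the answer below, and a SparkSession named spark is assumed:

// Hypothetical data preparation; blanks are modelled as empty strings.
import spark.implicits._

val input = Seq(
  ("1", "toto", "tata", "titi", ""),
  ("1", "toto", "tata", "titi", "tutu"),
  ("2", "bla", "blo", "", ""),
  ("3", "b", "c", "", "d"),
  ("3", "b", "c", "a", "d")
).toDF("id", "A", "B", "C", "D")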
Note: This approach takes care of any addition/deletion of columns in the DataFrame without requiring code changes.
It can be done by first computing the length of all columns concatenated together (except the first column), and then keeping, per id, only the rows with the maximum length.
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._
val output = input.withColumn("rowLength", length(concat(input.columns.toList.drop(1).map(col): _*)))
.withColumn("maxLength", max($"rowLength").over(Window.partitionBy($"id")))
.filter($"rowLength" === $"maxLength")
.drop("rowLength", "maxLength")
scala> df.show
+---+----+----+----+----+
| id| A| B| C| D|
+---+----+----+----+----+
| 1|toto|tata|titi| |
| 1|toto|tata|titi|tutu|
| 2| bla| blo| | |
| 3| b| c| | d|
| 3| b| c| a| d|
+---+----+----+----+----+
scala> df.groupBy("id").agg(
     |   concat_ws("", collect_set(col("A"))).alias("A"),
     |   concat_ws("", collect_set(col("B"))).alias("B"),
     |   concat_ws("", collect_set(col("C"))).alias("C"),
     |   concat_ws("", collect_set(col("D"))).alias("D")
     | ).show
+---+----+----+----+----+
| id| A| B| C| D|
+---+----+----+----+----+
| 1|toto|tata|titi|tutu|
| 2| bla| blo| | |
| 3| b| c| a| d|
+---+----+----+----+----+
This question already has answers here: How to melt Spark DataFrame? (6 answers). Closed 4 years ago.
How do I create a vertical table in Spark 2 SQL?
I am building an ETL using Spark 2 / SQL / Scala. I have data in a normal table structure like this:
Input Table:
| ID | A | B | C | D |
| 1 | A1 | B1 | C1 | D1 |
| 2 | A2 | B2 | C2 | D2 |
Output Table:
| ID | Key | Val |
| 1 | A | A1 |
| 1 | B | B1 |
| 1 | C | C1 |
| 1 | D | D1 |
| 2 | A | A2 |
| 2 | B | B2 |
| 2 | C | C2 |
| 2 | D | D2 |
This could do the trick as well:
Input Data:
+---+---+---+---+---+
|ID |A |B |C |D |
+---+---+---+---+---+
|1 |A1 |B1 |C1 |D1 |
|2 |A2 |B2 |C2 |D2 |
|3 |A3 |B3 |C3 |D3 |
+---+---+---+---+---+
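Assuming the input above was built along these lines (a sketch, not from the original answer; all values are strings because the code below reads them with getString, and a SparkSession named spark is assumed):

// Hypothetical data preparation matching the table above.
import spark.implicits._

val df = Seq(
  ("1", "A1", "B1", "C1", "D1"),
  ("2", "A2", "B2", "C2", "D2"),
  ("3", "A3", "B3", "C3", "D3")
).toDF("ID", "A", "B", "C", "D")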
Zip the column headers with their positions (the columns to be included):
val cols = Seq("A", "B", "C", "D") zip Range(0, 4, 1)

df.flatMap(r => cols.map(i => (r.getString(0), i._1, r.getString(i._2 + 1))))
  .toDF("ID", "KEY", "VALUE")
  .show()
Result should look like this:
+---+---+-----+
| ID|KEY|VALUE|
+---+---+-----+
| 1| A| A1|
| 1| B| B1|
| 1| C| C1|
| 1| D| D1|
| 2| A| A2|
| 2| B| B2|
| 2| C| C2|
| 2| D| D2|
| 3| A| A3|
| 3| B| B3|
| 3| C| C3|
| 3| D| D3|
+---+---+-----+
Good Luck!!