I have a table with the following format (id is the primary key):
id|timestamps |year|month|day|groups_ids|status |SCHEDULED |uid|
--|-------------------|----|-----|---|----------|-------|-------------------|---|
1|2021-02-04 17:18:24|2020| 8| 9| 1|OK |2020-08-09 00:00:00| 1|
2|2021-02-04 17:18:09|2020| 9| 9| 1|OK |2020-09-09 00:00:00| 1|
3|2021-02-04 17:19:51|2020| 10| 9| 1|HOLD |2020-10-09 00:00:00| 1|
4|2021-02-04 17:19:04|2020| 10| 10| 2|HOLD |2020-10-09 00:00:00| 1|
5|2021-02-04 17:18:30|2020| 10| 11| 2|HOLD |2020-10-09 00:00:00| 1|
6|2021-02-04 17:18:57|2020| 10| 12| 2|OK |2020-10-09 00:00:00| 1|
7|2021-02-04 17:18:24|2020| 8| 9| 1|HOLD |2020-08-09 00:00:00| 2|
8|2021-02-04 17:18:09|2020| 9| 9| 2|HOLD |2020-09-09 00:00:00| 2|
9|2021-02-04 17:19:51|2020| 10| 9| 2|HOLD |2020-10-09 00:00:00| 2|
10|2021-02-04 17:19:04|2020| 10| 10| 2|HOLD |2020-10-09 00:00:00| 2|
11|2021-02-04 17:18:30|2020| 10| 11| 2|HOLD |2020-10-09 00:00:00| 2|
12|2021-02-04 17:18:57|2020| 10| 12| 2|HOLD |2020-10-09 00:00:00| 2|
The job is: I want to extract every groups_ids value for each uid where the status is OK, ordered by SCHEDULED ascending; if no OK is found in the records for a uid, it should take the latest HOLD based on year, month and day. After that I want to compute a weighted score for each group_ids:
group_ids > score
1 > 100
2 > 80
3 > 60
4 > 50
5 > 10
6 > 50
7 > 0
So [1,1,2] would be turned into (100 + 100 + 80) = 280.
The result should look like this:
ids|uid|pattern|score|
---|---|-------|-----|
1| 1|[1,1,2]| 280|
2| 2|[2] | 80|
It's pretty hard, since I cannot find anything like Python's for loop and append operations in PostgreSQL.
Step-by-step demo: db<>fiddle
SELECT
s.uid, s.values,
sum(v.value) as score
FROM (
SELECT DISTINCT ON (uid)
uid,
CASE
WHEN cardinality(ok_count) > 0 THEN ok_count
ELSE ARRAY[last_value]
END as values
FROM (
SELECT
*,
ARRAY_AGG(groups_ids) FILTER (WHERE status = 'OK') OVER (PARTITION BY uid ORDER BY scheduled) as ok_count,
first_value(groups_ids) OVER (PARTITION BY uid ORDER BY year DESC, month DESC, day DESC) as last_value
FROM mytable
) s
ORDER BY uid, scheduled DESC
) s,
unnest(s.values) as u_group_id
JOIN (VALUES
(1, 100), (2, 80), (3, 60), (4, 50), (5,10), (6, 50), (7, 0)
) v(group_id, value) ON v.group_id = u_group_id
GROUP BY s.uid, s.values
Phew... quite complex. Let's have a look at the steps:
a)
SELECT
*,
-- 1:
ARRAY_AGG(groups_ids) FILTER (WHERE status = 'OK') OVER (PARTITION BY uid ORDER BY scheduled) as ok_count,
-- 2:
first_value(groups_ids) OVER (PARTITION BY uid ORDER BY year DESC, month DESC, day DESC) as last_value
FROM mytable
Using the array_agg() window function to create an array of group_ids without losing the other data, as we would with a simple GROUP BY. The FILTER clause puts only the status = 'OK' records into the array.
Find the last group_id of a group (partition) using the first_value() window function. With a descending order it returns the most recent value.
b)
SELECT DISTINCT ON (uid) -- 2
uid,
CASE -- 1
WHEN cardinality(ok_count) > 0 THEN ok_count
ELSE ARRAY[last_value]
END as values
FROM (
...
) s
ORDER BY uid, scheduled DESC -- 2
The CASE clause either takes the previously created array (from step a1) or, if there is none, takes the last value (from step a2) and wraps it into a one-element array.
The DISTINCT ON clause returns only the first element of an ordered group. The group is your uid and the order is given by the column scheduled. Since you don't want the first but the last record within the group, you have to order it DESC to make the most recent one the topmost record, which is then taken by the DISTINCT ON.
c)
SELECT
uid,
group_id
FROM (
...
) s,
unnest(s.values) as group_id -- 1
The arrays are extracted into one record per element, which makes it easy to join the weighted values later.
d)
SELECT
s.uid, s.values,
sum(v.weighted_value) as score -- 2
FROM (
...
) s,
unnest(s.values) as u_group_id
JOIN (VALUES
(1, 100), (2, 80), ...
) v(group_id, weighted_value) ON v.group_id = u_group_id -- 1
GROUP BY s.uid, s.values -- 2
Join your weighted values onto the array elements. Naturally, this can come from a table, a query or anything else.
Regroup by uid to calculate the SUM() of the weighted_values.
Additional note:
You should avoid storing duplicate data. You don't need to store the date parts year, month and day if you also store the complete date; you can always derive them from it.
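For example, a minimal sketch (assuming the date parts come from the scheduled column of mytable) that derives them on the fly:
SELECT
  id,
  EXTRACT(YEAR FROM scheduled) AS year,
  EXTRACT(MONTH FROM scheduled) AS month,
  EXTRACT(DAY FROM scheduled) AS day
FROM mytable;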
I have a table "products" with a column called "store_id".
This table has a lot of products from many stores.
I need to select 4 random products from 4 specific stores (id: 1, 34, 45, 100).
How can I do that?
I've tried this:
SELECT * FROM products WHERE store_id IN (1, 34, 45, 100)
But that query returns duplicated records (by store_id).
I need the following result:
store_id
title
1
title a
34
title b
45
title c
100
title d
To get a truly random pick of the products, use the row_number window function with a random order.
This query shows all the data with a random product index for each store:
select products.*,
row_number() over (partition by store_id order by random()) rn
from products
where store_id in (1,34)
store_id|product_id|title|rn|
--------+----------+-----+--+
1| 1|a | 1|
1| 3|c | 2|
1| 2|b | 3|
34| 6|f | 1|
34| 7|g | 2|
34| 8|h | 3|
34| 5|e | 4|
34| 4|d | 5|
To get only one product per store, simply filter with rn = 1:
with prod as (
select products.*,
row_number() over (partition by store_id order by random()) rn
from products
where store_id in (1,34)
)
select store_id, title from prod
where rn = 1
;
store_id|title|
--------+-----+
1|a |
34|e |
Note that this query will produce a different result on each run. If you need stable results, you must call setseed before each execution, e.g.
SELECT setseed(1)
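A minimal sketch of a reproducible run (the seed value here is arbitrary; setseed and the query have to run in the same session):
SELECT setseed(0.42);

WITH prod AS (
  SELECT products.*,
         row_number() OVER (PARTITION BY store_id ORDER BY random()) AS rn
  FROM products
  WHERE store_id IN (1, 34)
)
SELECT store_id, title FROM prod WHERE rn = 1;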
Use the DISTINCT ON construct to get one record per store_id:
SELECT distinct on (store_id) store_id, title FROM products WHERE store_id IN (1, 34, 45, 100);
Demo in sqldaddy.io
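Note that DISTINCT ON without an additional sort key picks an arbitrary product per store, not a random one. If a random pick is required, the ORDER BY can be extended — a sketch against the same products table:
SELECT DISTINCT ON (store_id) store_id, title
FROM products
WHERE store_id IN (1, 34, 45, 100)
ORDER BY store_id, random();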
I have a table with unique tickets, customer IDs and ticket price. For each ticket, I want to see the number of tickets and total revenue from a customer 3 months after the date of the ticket.
I tried to use the partition by function with the date condition set in the on clause, but it just evaluates all tickets of the customer rather than the 3 month period I want.
select distinct on (at2.ticket_number)
at2.customer_id
,at2.ticket_id
,at2.ticket_number
,at2.initial_sale_date
,ata.tix "a_tix"
,ata.aov "a_aov"
,ata.rev "a_rev"
from reports.agg_tickets at2
left join (select at2.customer_id, at2.final_fare_value, at2.initial_sale_date, count(at2.customer_id) OVER (PARTITION BY at2.customer_id) AS tix,
avg(at2.final_fare_value) over (partition by at2.customer_id) as aov,
sum(at2.final_fare_value) over (partition by at2.customer_id) as rev
from reports.agg_tickets at2
) ata
on (ata.customer_id = at2.customer_id
and ata.initial_sale_date > at2.initial_sale_date
and ata.initial_sale_date < at2.initial_sale_date + interval '3 months')
I could use a left join lateral, but it takes far too long. Slightly confused with how to achieve what I want, so any help would be greatly appreciated.
Many thanks
Edit:
Here is a sample of the data (picture of the data table).
The table is unique on ticket number, but not on customer.
No need to use a join at all; as you observed, that yields problematic performance.
What you need is a plain window function with a frame_clause that considers the next 3 months for each ticket.
Example (self-explanatory):
count(*) over (partition by customer_id order by initial_sale_date
range between current row and '3 months' following) ticket_cnt
Here is a full query with simplified sample data and the result:
with dt as (
select * from (values
(1, 1, date'2020-01-01', 10),
(1, 2, date'2020-02-01', 15),
(1, 3, date'2020-03-01', 20),
(1, 4, date'2020-04-01', 25),
(1, 5, date'2020-05-01', 30),
(2, 6, date'2020-01-01', 15),
(2, 7, date'2020-02-01', 20),
(2, 7, date'2021-01-01', 25)
) tab (customer_id, ticket_id, initial_sale_date,final_fare_value)
)
select
customer_id, ticket_id, initial_sale_date, final_fare_value,
count(*) over (partition by customer_id order by initial_sale_date range between current row and '3 months' following) ticket_cnt,
sum(final_fare_value) over (partition by customer_id order by initial_sale_date range between current row and '3 months' following) ticket_sum
from dt;
customer_id|ticket_id|initial_sale_date|final_fare_value|ticket_cnt|ticket_sum|
-----------+---------+-----------------+----------------+----------+----------+
1| 1| 2020-01-01| 10| 4| 70|
1| 2| 2020-02-01| 15| 4| 90|
1| 3| 2020-03-01| 20| 3| 75|
1| 4| 2020-04-01| 25| 2| 55|
1| 5| 2020-05-01| 30| 1| 30|
2| 6| 2020-01-01| 15| 2| 35|
2| 7| 2020-02-01| 20| 1| 20|
2| 7| 2021-01-01| 25| 1| 25|
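Applied back to the table from the question, a sketch could look like this (assuming PostgreSQL 11+, which allows interval offsets in RANGE frames, and the column names used in the question):
SELECT
  customer_id,
  ticket_number,
  initial_sale_date,
  count(*)              OVER w AS tix,
  avg(final_fare_value) OVER w AS aov,
  sum(final_fare_value) OVER w AS rev
FROM reports.agg_tickets
WINDOW w AS (
  PARTITION BY customer_id
  ORDER BY initial_sale_date
  RANGE BETWEEN CURRENT ROW AND '3 months' FOLLOWING
);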
Heyo StackOverflow,
Currently trying to find an elegant way to do a specific transformation.
So I have a dataframe of actions that looks like this:
+---------+----------+----------+---------+
|timestamp| user_id| action| value|
+---------+----------+----------+---------+
| 100| 1| click| null|
| 101| 2| click| null|
| 103| 1| drag| AAA|
| 101| 1| click| null|
| 108| 1| click| null|
| 100| 2| click| null|
| 106| 1| drag| BBB|
+---------+----------+----------+---------+
Context:
Users can perform actions: clicks and drags. Clicks don't have a value, drags do. A drag implies there was a click but not the other way around. Let's also assume that the drag event can be recorded after or before the click event.
So for each drag I have a corresponding click action. What I would like to do is merge the drag and click actions into one, i.e. delete the drag action after giving its value to the click action.
To know which click corresponds to which drag, I have to take the click whose timestamp is closest to the drag's timestamp. I also want to make sure that a drag cannot be linked to a click if their timestamp difference is over 5 (which means some drags might not be linked, and that's fine). Of course, I don't want the drag of user 1 to correspond to the click of user 2.
Here, the result would look like this:
+---------+----------+----------+---------+
|timestamp| user_id| action| value|
+---------+----------+----------+---------+
| 100| 1| click| null|
| 101| 2| click| null|
| 101| 1| click| AAA|
| 108| 1| click| BBB|
| 100| 2| click| null|
+---------+----------+----------+---------+
The drag with AAA (timestamp = 103) was linked to the click that happened at 101 because it's the closest to 103. Same logic for BBB.
So I would like to perform these operations, in a smooth/efficient way. So far, I have something like this:
val window = Window partitionBy ($"user_id") orderBy $"timestamp".asc
myDF
.withColumn("previous_value", lag("value", 1, null) over window)
.withColumn("previous_timestamp", lag("timestamp", 1, null) over window)
.withColumn("next_value", lead("value", 1, null) over window)
.withColumn("next_timestamp", lead("timestamp", 1, null) over window)
.withColumn("value",
when(
$"previous_value".isNotNull and
// If there is more than 5 sec. difference, it shouldn't be joined
$"timestamp" - $"previous_timestamp" < 5 and
(
$"next_timestamp".isNull or
$"next_timestamp" - $"timestamp" > $"timestamp" - $"previous_timestamp"
), $"previous_value")
.otherwise(
when($"next_timestamp" - $"timestamp" < 5, $"next_value")
.otherwise(null)
)
)
.filter($"action" === "click")
.drop("previous_value")
.drop("previous_timestamp")
.drop("next_value")
.drop("next_timestamp")
But I feel this is rather inefficient. Is there a better way to do this? (Something that can be done without having to create 4 temporary columns...)
Is there a way to manipulate both the row with offset -1 and +1 in the same expression for example ?
Thanks in advance!
Here's my attempt using Spark-SQL rather than DataFrame APIs, but it should be possible to convert:
myDF.createOrReplaceTempView("mydf")
spark.sql("""
with
clicks_table as (select * from mydf where action='click')
,drags_table as (select * from mydf where action='drag' )
,one_click_many_drags as (
select
c.timestamp as c_timestamp
, d.timestamp as d_timestamp
, c.user_id as c_user_id
, d.user_id as d_user_id
, c.action as c_action
, d.action as d_action
, c.value as c_value
, d.value as d_value
from clicks_table c
inner join drags_table d
on c.user_id = d.user_id
and abs(c.timestamp - d.timestamp) <= 5 --a drag cannot be linked to a click if their timestamp difference is over 5
)
,one_click_one_drag as (
select c_timestamp as timestamp, c_user_id as user_id, c_action as action, d_value as value
from (
select *, row_number() over (
partition by d_user_id, d_timestamp --for each drag timestamp with multiple possible click timestamps, we rank the click timestamps by nearness
order by
abs(c_timestamp - d_timestamp) asc --prefer nearest
, c_timestamp asc --prefer next_value if tied
) as rn
from one_click_many_drags
)
where rn=1 --take only the best match for each drag timestamp
)
--now we start from the clicks_table and add in the desired drag values!
select c.timestamp, c.user_id, c.action, m.value
from clicks_table c
left join one_click_one_drag m
on c.user_id = m.user_id
and c.timestamp = m.timestamp
""")
Tested to produce your desired output.
I have a dataframe in Spark with a name column and dates, and I would like to find all continuous sequences of constantly increasing dates (day after day) for each name and calculate their durations. The output should contain the name, the start date of the sequence and the duration of that period (number of days).
How can I do this with Spark functions?
A consecutive sequence of dates example:
2019-03-12
2019-03-13
2019-03-14
2019-03-15
I have come up with the following solution, but it calculates the overall number of days per name and does not split it into sequences:
val result = allDataDf
.groupBy($"name")
.agg(count($"date").as("timePeriod"))
.orderBy($"timePeriod".desc)
.head()
Also, I tried with ranks, but the count column contains only 1s, for some reason:
val names = Window
.partitionBy($"name")
.orderBy($"date")
val result = allDataDf
.select($"name", $"date", rank over names as "rank")
.groupBy($"name", $"date", $"rank")
.agg(count($"*") as "count")
The output looks like this:
+-----------+----------+----+-----+
|stationName| date|rank|count|
+-----------+----------+----+-----+
| NAME|2019-03-24| 1| 1|
| NAME|2019-03-25| 2| 1|
| NAME|2019-03-27| 3| 1|
| NAME|2019-03-28| 4| 1|
| NAME|2019-01-29| 5| 1|
| NAME|2019-03-30| 6| 1|
| NAME|2019-03-31| 7| 1|
| NAME|2019-04-02| 8| 1|
| NAME|2019-04-05| 9| 1|
| NAME|2019-04-07| 10| 1|
+-----------+----------+----+-----+
Finding consecutive dates is fairly easy in SQL; it is a classic gaps-and-islands problem: subtracting the row number from each date yields the same value for every date within a consecutive run, and that value can be used as a grouping key (the discriminator below). You could do it with a query like:
WITH s AS (
SELECT
stationName,
date,
date_add(date, -(row_number() over (partition by stationName order by date))) as discriminator
FROM stations
)
SELECT
stationName,
MIN(date) as start,
COUNT(1) AS duration
FROM s GROUP BY stationName, discriminator
Fortunately, we can use SQL in Spark. Let's check if it works (I used different dates):
val df = Seq(
("NAME1", "2019-03-22"),
("NAME1", "2019-03-23"),
("NAME1", "2019-03-24"),
("NAME1", "2019-03-25"),
("NAME1", "2019-03-27"),
("NAME1", "2019-03-28"),
("NAME2", "2019-03-27"),
("NAME2", "2019-03-28"),
("NAME2", "2019-03-30"),
("NAME2", "2019-03-31"),
("NAME2", "2019-04-04"),
("NAME2", "2019-04-05"),
("NAME2", "2019-04-06")
).toDF("stationName", "date")
.withColumn("date", date_format(col("date"), "yyyy-MM-dd"))
df.createTempView("stations");
val result = spark.sql(
"""
|WITH s AS (
| SELECT
| stationName,
| date,
| date_add(date, -(row_number() over (partition by stationName order by date)) + 1) as discriminator
| FROM stations
|)
|SELECT
| stationName,
| MIN(date) as start,
| COUNT(1) AS duration
|FROM s GROUP BY stationName, discriminator
""".stripMargin)
result.show()
It seems to output the correct dataset:
+-----------+----------+--------+
|stationName| start|duration|
+-----------+----------+--------+
| NAME1|2019-03-22| 4|
| NAME1|2019-03-27| 2|
| NAME2|2019-03-27| 2|
| NAME2|2019-03-30| 2|
| NAME2|2019-04-04| 3|
+-----------+----------+--------+
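If only the longest run per name is needed (which the .head() in the first attempt suggests), the grouped result can be reduced further — a sketch of the SQL, which can be passed to spark.sql in the same way, assuming the stations view from above:
WITH s AS (
  SELECT
    stationName,
    date,
    date_add(date, -(row_number() over (partition by stationName order by date)) + 1) as discriminator
  FROM stations
),
runs AS (
  SELECT stationName, MIN(date) AS start, COUNT(1) AS duration
  FROM s
  GROUP BY stationName, discriminator
)
SELECT stationName, start, duration
FROM (
  SELECT runs.*, row_number() OVER (PARTITION BY stationName ORDER BY duration DESC, start) AS rn
  FROM runs
) ranked
WHERE rn = 1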
I have tried numerous approaches to turn the following:
Gender, Age, Value
1, 20, 21
2, 23, 22
1, 26, 23
2, 29, 24
into
Male_Age, Male_Value, Female_Age, Female_Value
20, 21, 23, 22
26, 23, 29, 24
What I need to do is group by gender and, instead of using an aggregate like sum, count or avg, create a List[age] and a List[value]. This should be possible because I am using a Dataset, which allows functional operations.
If the number of rows for males and females are not the same, the columns should be filled with nulls.
One approach I tried was to make a new dataframe using the columns of other dataframes, like so:
df
.select(male.select("sex").where('sex === 1).col("sex"),
female.select("sex").where('sex === 2).col("sex"))
However, this bizarrely produces output like so:
sex, sex,
1, 1
2, 2
1, 1
2, 2
I can't see how that is possible.
I also tried using pivot, but it forces me to aggregate after the group by:
df.withColumn("sex2", df.col("sex"))
.groupBy("sex")
.pivot("sex2")
.agg(
sum('value').as("mean"),
stddev('value).as("std. dev") )
.show()
|sex| 1.0_mean| 1.0_std. dev| 2.0_mean| 2.0_std. dev|
|1.0|0.4926065526| 1.8110632697| | |
|2.0| | |0.951250372|1.75060275400785|
The following code does what I need in Oracle SQL, so it should be possible in Spark SQL too, I reckon...
drop table mytable;
CREATE TABLE mytable
( gender number(10) NOT NULL,
age number(10) NOT NULL,
value number(10) );
insert into mytable values (1,20,21);
insert into mytable values(2,23,22);
insert into mytable values (1,26,23);
insert into mytable values (2,29,24);
insert into mytable values (1,30,25);
select * from mytable;
SELECT A.VALUE AS MALE,
B.VALUE AS FEMALE
FROM
(select value, rownum RN from mytable where gender = 1) A
FULL OUTER JOIN
(select value, rownum RN from mytable where gender = 2) B
ON A.RN = B.RN
The following should give you the result.
val df = Seq(
(1, 20, 21),
(2, 23, 22),
(1, 26, 23),
(2, 29, 24)
).toDF("Gender", "Age", "Value")
scala> df.show
+------+---+-----+
|Gender|Age|Value|
+------+---+-----+
| 1| 20| 21|
| 2| 23| 22|
| 1| 26| 23|
| 2| 29| 24|
+------+---+-----+
// Gender 1 = Male
// Gender 2 = Female
import org.apache.spark.sql.expressions.Window
// Note: ordering by the partition key gives an arbitrary row order within each gender.
// Order by a meaningful column (e.g. "age") if a stable, reproducible pairing is needed.
val byGender = Window.partitionBy("gender").orderBy("gender")
val males = df
.filter("gender = 1")
.select($"age" as "male_age",
$"value" as "male_value",
row_number() over byGender as "RN")
scala> males.show
+--------+----------+---+
|male_age|male_value| RN|
+--------+----------+---+
| 20| 21| 1|
| 26| 23| 2|
+--------+----------+---+
val females = df
.filter("gender = 2")
.select($"age" as "female_age",
$"value" as "female_value",
row_number() over byGender as "RN")
scala> females.show
+----------+------------+---+
|female_age|female_value| RN|
+----------+------------+---+
| 23| 22| 1|
| 29| 24| 2|
+----------+------------+---+
scala> males.join(females, Seq("RN"), "outer").show
+---+--------+----------+----------+------------+
| RN|male_age|male_value|female_age|female_value|
+---+--------+----------+----------+------------+
| 1| 20| 21| 23| 22|
| 2| 26| 23| 29| 24|
+---+--------+----------+----------+------------+
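For reference, the same row_number plus full outer join idea can also be written in Spark SQL, mirroring the Oracle query from the question — a sketch, assuming the DataFrame has been registered as a temp view named mytable:
SELECT a.male_age, a.male_value, b.female_age, b.female_value
FROM (
  SELECT age AS male_age, value AS male_value,
         row_number() OVER (ORDER BY age) AS rn
  FROM mytable WHERE gender = 1
) a
FULL OUTER JOIN (
  SELECT age AS female_age, value AS female_value,
         row_number() OVER (ORDER BY age) AS rn
  FROM mytable WHERE gender = 2
) b
ON a.rn = b.rn
The full outer join fills the missing side with nulls when the male and female row counts differ, which matches the requirement in the question.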
Given a DataFrame called df with columns gender, age, and value, you can do this:
df.groupBy($"gender")
.agg(collect_list($"age"), collect_list($"value")).rdd.map { row =>
val ages: Seq[Int] = row.getSeq(1)
val values: Seq[Int] = row.getSeq(2)
(row.getInt(0), ages.head, ages.last, values.head, values.last)
}.toDF("gender", "male_age", "female_age", "male_value", "female_value")
This uses the collect_list aggregating function from the very helpful Spark functions library to aggregate the values you want. (As you can see, there is also a collect_set.)
After that, I don't know of any higher-level DataFrame functions to expand those columnar arrays into individual columns of their own, so I fall back to the lower-level RDD API our ancestors used. I simply expand everything into a tuple and then turn it back into a DataFrame. The commenters above mention corner cases I have not addressed; functions like headOption and lastOption might be useful there. But this should be enough to get you moving.