How to select specific items with IN in SQL - PostgreSQL

I have a table "products" with a column called "store_id".
This table has a lot of products from many stores.
I need to select 4 random products from 4 specific stores (id: 1, 34, 45, 100).
How can I do that?
I've tried something like this:
SELECT * FROM products WHERE store_id IN (1, 34, 45, 100)
But that query returns duplicated records (by store_id).
I need the following result:
store_id|title  |
--------+-------+
       1|title a|
      34|title b|
      45|title c|
     100|title d|

To get a true random pick of the products, use the row_number() window function with a random order.
This query shows all the data, with a random row number for the products of each store:
select products.*,
row_number() over (partition by store_id order by random()) rn
from products
where store_id in (1,34)
store_id|product_id|title|rn|
--------+----------+-----+--+
       1|         1|a    | 1|
       1|         3|c    | 2|
       1|         2|b    | 3|
      34|         6|f    | 1|
      34|         7|g    | 2|
      34|         8|h    | 3|
      34|         5|e    | 4|
      34|         4|d    | 5|
To get only one product per store, simply filter with rn = 1:
with prod as (
select products.*,
row_number() over (partition by store_id order by random()) rn
from products
where store_id in (1,34)
)
select store_id, title from prod
where rn = 1
;
store_id|title|
--------+-----+
       1|a    |
      34|e    |
Note this query will produce a different result on each run. If you need stability, you must call setseed before each execution. E.g.
SELECT setseed(1)
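A full reproducible run could look like this (a minimal sketch; the seed 0.42 is an arbitrary illustration, any value between -1 and 1 works):
-- seed the session's random number generator so random() produces
-- the same sequence, and the query below returns the same rows, every run
SELECT setseed(0.42);
with prod as (
select products.*,
row_number() over (partition by store_id order by random()) rn
from products
where store_id in (1, 34, 45, 100)
)
select store_id, title from prod
where rn = 1
;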

Use the DISTINCT ON construct to get one record per value of the desired column:
SELECT distinct on (store_id) store_id, title FROM products WHERE store_id IN (1, 34, 45, 100);
Demo in sqldaddy.io
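Note that without an ORDER BY, the row DISTINCT ON picks per store_id is arbitrary. If the pick should be explicitly random, as the question asks, a variant along these lines should work (a sketch, not part of the original answer):
SELECT DISTINCT ON (store_id) store_id, title
FROM products
WHERE store_id IN (1, 34, 45, 100)
-- random() decides which row wins within each store_id group
ORDER BY store_id, random();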

How to calculate total revenue within 3 months in postgresql

I have a table with unique tickets, customer IDs and ticket price. For each ticket, I want to see the number of tickets and total revenue from a customer 3 months after the date of the ticket.
I tried to use PARTITION BY with the date condition set in the ON clause, but it just evaluates all of the customer's tickets rather than the 3-month period I want.
select distinct on (at2.ticket_number)
at2.customer_id
,at2.ticket_id
,at2.ticket_number
,at2.initial_sale_date
,ata.tix "a_tix"
,ata.aov "a_aov"
,ata.rev "a_rev"
from reports.agg_tickets at2
left join (
    select at2.customer_id, at2.final_fare_value, at2.initial_sale_date,
           count(at2.customer_id) OVER (PARTITION BY at2.customer_id) AS tix,
           avg(at2.final_fare_value) over (partition by at2.customer_id) as aov,
           sum(at2.final_fare_value) over (partition by at2.customer_id) as rev
    from reports.agg_tickets at2
) ata
on (ata.customer_id = at2.customer_id
and ata.initial_sale_date > at2.initial_sale_date
and ata.initial_sale_date < at2.initial_sale_date + interval '3 months')
I could use a LEFT JOIN LATERAL, but it takes far too long. I'm slightly confused about how to achieve what I want, so any help would be greatly appreciated.
Many thanks
Edit:
Here is a sample of the data (see the picture of the data table).
The table is unique on ticket number, but not on customer.
There is no need to use a join at all; as you observe, this yields problematic performance.
What you need is a plain window function with a frame_clause that considers the next 3 months for each ticket.
Example (self-explanatory):
count(*) over (partition by customer_id order by initial_sale_date
range between current row and '3 months' following) ticket_cnt
Here is a full query with simplified sample data and the result:
with dt as (
select * from (values
(1, 1, date'2020-01-01', 10),
(1, 2, date'2020-02-01', 15),
(1, 3, date'2020-03-01', 20),
(1, 4, date'2020-04-01', 25),
(1, 5, date'2020-05-01', 30),
(2, 6, date'2020-01-01', 15),
(2, 7, date'2020-02-01', 20),
(2, 7, date'2021-01-01', 25)
) tab (customer_id, ticket_id, initial_sale_date,final_fare_value)
)
select
customer_id, ticket_id, initial_sale_date, final_fare_value,
count(*) over (partition by customer_id order by initial_sale_date range between current row and '3 months' following) ticket_cnt,
sum(final_fare_value) over (partition by customer_id order by initial_sale_date range between current row and '3 months' following) ticket_sum
from dt;
customer_id|ticket_id|initial_sale_date|final_fare_value|ticket_cnt|ticket_sum|
-----------+---------+-----------------+----------------+----------+----------+
1| 1| 2020-01-01| 10| 4| 70|
1| 2| 2020-02-01| 15| 4| 90|
1| 3| 2020-03-01| 20| 3| 75|
1| 4| 2020-04-01| 25| 2| 55|
1| 5| 2020-05-01| 30| 1| 30|
2| 6| 2020-01-01| 15| 2| 35|
2| 7| 2020-02-01| 20| 1| 20|
2| 7| 2021-01-01| 25| 1| 25|

Concatenate a column value for several rows based on a condition

I have a table with the following format (id is the PK):
id|timestamps |year|month|day|groups_ids|status |SCHEDULED |uid|
--|-------------------|----|-----|---|----------|-------|-------------------|---|
1|2021-02-04 17:18:24|2020| 8| 9| 1|OK |2020-08-09 00:00:00| 1|
2|2021-02-04 17:18:09|2020| 9| 9| 1|OK |2020-09-09 00:00:00| 1|
3|2021-02-04 17:19:51|2020| 10| 9| 1|HOLD |2020-10-09 00:00:00| 1|
4|2021-02-04 17:19:04|2020| 10| 10| 2|HOLD |2020-10-09 00:00:00| 1|
5|2021-02-04 17:18:30|2020| 10| 11| 2|HOLD |2020-10-09 00:00:00| 1|
6|2021-02-04 17:18:57|2020| 10| 12| 2|OK |2020-10-09 00:00:00| 1|
7|2021-02-04 17:18:24|2020| 8| 9| 1|HOLD |2020-08-09 00:00:00| 2|
8|2021-02-04 17:18:09|2020| 9| 9| 2|HOLD |2020-09-09 00:00:00| 2|
9|2021-02-04 17:19:51|2020| 10| 9| 2|HOLD |2020-10-09 00:00:00| 2|
10|2021-02-04 17:19:04|2020| 10| 10| 2|HOLD |2020-10-09 00:00:00| 2|
11|2021-02-04 17:18:30|2020| 10| 11| 2|HOLD |2020-10-09 00:00:00| 2|
12|2021-02-04 17:18:57|2020| 10| 12| 2|HOLD |2020-10-09 00:00:00| 2|
The job is: I want to extract the groups_ids for each uid where the status is OK, ordered by SCHEDULED ascending, and if no OK is found in the records for that uid, it should take the latest HOLD based on year, month and day. After that I want to build a weighted score from the group ids:
group_ids > score
1 > 100
2 > 80
3 > 60
4 > 50
5 > 10
6 > 50
7 > 0
So [1,1,2] will be changed to (100+100+80) = 280.
The result will look like this:
ids|uid|pattern|score|
---|---|-------|-----|
1| 1|[1,1,2]| 280|
2| 2|[2] | 80|
It's pretty hard since I can't find anything like Python's for loop and append operations in PostgreSQL.
step-by-step demo: db<>fiddle
SELECT
s.uid, s.values,
sum(v.value) as score
FROM (
SELECT DISTINCT ON (uid)
uid,
CASE
WHEN cardinality(ok_count) > 0 THEN ok_count
ELSE ARRAY[last_value]
END as values
FROM (
SELECT
*,
ARRAY_AGG(groups_ids) FILTER (WHERE status = 'OK') OVER (PARTITION BY uid ORDER BY scheduled) as ok_count,
first_value(groups_ids) OVER (PARTITION BY uid ORDER BY year, month DESC) as last_value
FROM mytable
) s
ORDER BY uid, scheduled DESC
) s,
unnest(values) as u_group_id
JOIN (VALUES
(1, 100), (2, 80), (3, 60), (4, 50), (5,10), (6, 50), (7, 0)
) v(group_id, value) ON v.group_id = u_group_id
GROUP BY s.uid, s.values
Phew... quite complex. Let's have a look at the steps:
a)
SELECT
*,
-- 1:
ARRAY_AGG(groups_ids) FILTER (WHERE status = 'OK') OVER (PARTITION BY uid ORDER BY scheduled) as ok_count,
-- 2:
first_value(groups_ids) OVER (PARTITION BY uid ORDER BY year, month DESC) as last_value
FROM mytable
Using the array_agg() window function to create an array of group_ids without losing the other data, as we would with a simple GROUP BY. The FILTER clause is there to put only the status = 'OK' records into the array.
Find the last group_id of a group (partition) using the first_value() window function. With a descending order it returns the last value.
b)
SELECT DISTINCT ON (uid) -- 2
uid,
CASE -- 1
WHEN cardinality(ok_count) > 0 THEN ok_count
ELSE ARRAY[last_value]
END as values
FROM (
...
) s
ORDER BY uid, scheduled DESC -- 2
The CASE expression either takes the previously created array (from step a1) or, if there is none, takes the last value (from step a2) and wraps it in a one-element array.
The DISTINCT ON clause returns only the first element of an ordered group. The group is your uid and the order is given by the column scheduled. Since you don't want the first but the last record within the group, you have to order it DESC to make the most recent one the topmost record, which is then taken by DISTINCT ON.
c)
SELECT
uid,
group_id
FROM (
...
) s,
unnest(values) as group_id -- 1
The arrays should be extracted into one record per element. That helps to join the weighted values later.
d)
SELECT
s.uid, s.values,
sum(v.weighted_value) as score -- 2
FROM (
...
) s,
unnest(values) as u_group_id
JOIN (VALUES
(1, 100), (2, 80), ...
) v(group_id, weighted_value) ON v.group_id = u_group_id -- 1
GROUP BY s.uid, s.values -- 2
Join your weighted values on the array elements. Naturally, this can also be a table or another query.
Regroup by uid to calculate the SUM() of the weighted_values.
Additional note:
You should avoid storing duplicate data. You don't need to store the date parts year, month and day if you also store the complete date; you can always derive them from the date, for example:
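(A minimal sketch, assuming the full date is available in the SCHEDULED column of the same table; the parts are computed on the fly instead of being stored.)
SELECT scheduled,
       EXTRACT(YEAR  FROM scheduled) AS year,   -- derived, not stored
       EXTRACT(MONTH FROM scheduled) AS month,
       EXTRACT(DAY   FROM scheduled) AS day
FROM mytable;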

Counting items in PostgreSQL, outputting zero for those not found

I make the following request to count my items:
SELECT COUNT(cee."entryId"), cee."categoryId" FROM category_entries_entry cee
WHERE cee."categoryId" IN (1, 2, 3)
GROUP BY cee."categoryId";
If items with ids 1 and 2 are not found, then I will only see the result for the item with id = 3. Nevertheless, I would like to get the following output:
categoryId|count|
----------+-----+
         1|    0|
         2|    0|
         3|    5|
How do I achieve it?
Meta:
PostgreSQL version: 12.3
Use a left join against a values clause:
SELECT COUNT(cee."entryId"),
t.id as category_id
FROM (
values (1),(2),(3)
) as t(id)
left join category_entries_entry cee on cee."categoryId" = t.id
GROUP BY t.id;

Count the unique and duplicate values in a column using Postgresql

Goal: Find the count of uniques and duplicates in the worker_ref_id column.
I found the solution here for MySQL, but IF does not exist in PostgreSQL. So how would I do that in PostgreSQL?
I have the following table:
|worker_ref_id|bonus_amount|
|-------------|------------|
|            1|        5000|
|            2|        3000|
|            3|        4000|
|            1|        4500|
|            2|        3500|
I would like the following output:
|Unique|Duplicates|
|------|----------|
|     1|         2|
I get the right answer but it appears as two rows rather than two columns and one row:
SELECT COUNT(*) AS "Duplicate" FROM (SELECT worker_ref_id,
COUNT(worker_ref_id) AS "Count"
FROM bonus
GROUP BY worker_ref_id
HAVING COUNT(worker_ref_id) > 1) AS mySub
UNION
SELECT COUNT(*) AS "Unique" FROM (SELECT worker_ref_id,
COUNT(worker_ref_id) AS "Count"
FROM bonus
GROUP BY worker_ref_id
HAVING COUNT(worker_ref_id) = 1) AS mySub2
We can do this in two steps, using a CTE:
WITH cte AS (
SELECT worker_ref_id, COUNT(*) AS cnt
FROM bonus
GROUP BY worker_ref_id
)
SELECT
COUNT(*) FILTER (WHERE cnt = 1) AS "Unique",
COUNT(*) FILTER (WHERE cnt > 1) AS Duplicates
FROM cte;
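If you prefer something closer to MySQL's IF(), conditional aggregation with CASE gives the same numbers (a sketch against the same bonus table, not part of the original answer):
WITH cte AS (
    SELECT worker_ref_id, COUNT(*) AS cnt
    FROM bonus
    GROUP BY worker_ref_id
)
SELECT
    SUM(CASE WHEN cnt = 1 THEN 1 ELSE 0 END) AS "Unique",     -- workers appearing exactly once
    SUM(CASE WHEN cnt > 1 THEN 1 ELSE 0 END) AS "Duplicates"  -- workers appearing more than once
FROM cte;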

How to group by gender and join by positions per group?

I have tried numerous approaches to turn the following:
Gender, Age, Value
1, 20, 21
2, 23, 22
1, 26, 23
2, 29, 24
into
Male_Age, Male_Value, Female_Age, Female_Value
20, 21, 23, 22
26, 23, 29, 24
What I need to do is group by gender and, instead of using an aggregate like sum, count or avg, create List[age] and List[value]. This should be possible because I am using a Dataset, which allows functional operations.
If the number of rows for males and females is not the same, the columns should be filled with nulls.
One approach I tried was to make a new dataframe using the columns of other dataframes, like so:
df
.select(male.select("sex").where('sex === 1).col("sex"),
female.select("sex").where('sex === 2).col("sex"))
However, this bizarrely produces output like so:
sex, sex,
1, 1
2, 2
1, 1
2, 2
I can't see how that is possible.
I also tried using pivot, but it forces me to aggregate after the group by:
df.withColumn("sex2", df.col("sex"))
.groupBy("sex")
.pivot("sex2")
.agg(
sum('value).as("mean"),
stddev('value).as("std. dev") )
.show()
|sex| 1.0_mean| 1.0_std. dev| 2.0_mean| 2.0_std. dev|
|1.0|0.4926065526| 1.8110632697| | |
|2.0| | |0.951250372|1.75060275400785|
The following code does what I need in Oracle SQL, so it should be possible in Spark SQL too, I reckon...
drop table mytable
CREATE TABLE mytable
( gender number(10) NOT NULL,
age number(10) NOT NULL,
value number(10) );
insert into mytable values (1,20,21);
insert into mytable values(2,23,22);
insert into mytable values (1,26,23);
insert into mytable values (2,29,24);
insert into mytable values (1,30,25);
select * from mytable;
SELECT A.VALUE AS MALE,
B.VALUE AS FEMALE
FROM
(select value, rownum RN from mytable where gender = 1) A
FULL OUTER JOIN
(select value, rownum RN from mytable where gender = 2) B
ON A.RN = B.RN
The following should give you the result.
val df = Seq(
(1, 20, 21),
(2, 23, 22),
(1, 26, 23),
(2, 29, 24)
).toDF("Gender", "Age", "Value")
scala> df.show
+------+---+-----+
|Gender|Age|Value|
+------+---+-----+
| 1| 20| 21|
| 2| 23| 22|
| 1| 26| 23|
| 2| 29| 24|
+------+---+-----+
// Gender 1 = Male
// Gender 2 = Female
import org.apache.spark.sql.expressions.Window
val byGender = Window.partitionBy("gender").orderBy("gender")
val males = df
.filter("gender = 1")
.select($"age" as "male_age",
$"value" as "male_value",
row_number() over byGender as "RN")
scala> males.show
+--------+----------+---+
|male_age|male_value| RN|
+--------+----------+---+
| 20| 21| 1|
| 26| 23| 2|
+--------+----------+---+
val females = df
.filter("gender = 2")
.select($"age" as "female_age",
$"value" as "female_value",
row_number() over byGender as "RN")
scala> females.show
+----------+------------+---+
|female_age|female_value| RN|
+----------+------------+---+
| 23| 22| 1|
| 29| 24| 2|
+----------+------------+---+
scala> males.join(females, Seq("RN"), "outer").show
+---+--------+----------+----------+------------+
| RN|male_age|male_value|female_age|female_value|
+---+--------+----------+----------+------------+
| 1| 20| 21| 23| 22|
| 2| 26| 23| 29| 24|
+---+--------+----------+----------+------------+
Given a DataFrame called df with columns gender, age, and value, you can do this:
df.groupBy($"gender")
.agg(collect_list($"age"), collect_list($"value")).rdd.map { row =>
val ages: Seq[Int] = row.getSeq(1)
val values: Seq[Int] = row.getSeq(2)
(row.getInt(0), ages.head, ages.last, values.head, values.last)
}.toDF("gender", "male_age", "female_age", "male_value", "female_value")
This uses the collect_list aggregating function from the very helpful Spark functions library to aggregate the values you want. (As you can see, there is also a collect_set.)
After that, I don't know of any higher-level DataFrame functions to expand those columnar arrays into individual columns of their own, so I fall back to the lower-level RDD API our ancestors used. I simply expand everything into a Tuple and then turn it back into a DataFrame. The commenters above mention corner cases I have not addressed; functions like headOption and lastOption might be useful there. But this should be enough to get you moving.