I run the following query to count my items:
SELECT COUNT(cee."entryId"), cee."categoryId" FROM category_entries_entry cee
WHERE cee."categoryId" IN (1, 2, 3)
GROUP BY cee."categoryId";
If no entries are found for the items with ids 1 and 2, then I only see the result for the item with id = 3. Nevertheless, I would like to get the following output:
count|categoryId|
-----|----------|
    0|         1|
    0|         2|
    5|         3|
How do I achieve it?
Meta:
PostgreSQL version: 12.3
Use a left join against a values clause:
SELECT COUNT(cee."entryId"),
t.id as category_id
FROM (
values (1),(2),(3)
) as t(id)
left join category_entries_entry cee on cee."categoryId" = t.id
GROUP BY t.id;
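A note on the design choice: COUNT(cee."entryId") rather than COUNT(*) is what makes the missing categories come out as 0, because COUNT() ignores the NULLs produced by the left join. As a variant (just a sketch), the ids can also come from an array via unnest(), which is handy when the list is passed in from application code:
SELECT COUNT(cee."entryId") AS item_count,
       t.id AS category_id
FROM unnest(ARRAY[1, 2, 3]) AS t(id)
LEFT JOIN category_entries_entry cee ON cee."categoryId" = t.id
GROUP BY t.id
ORDER BY t.id;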
I have a table "products" with a column called "store_id".
This table has a lot of products from many stores.
I need to select 4 random products, one from each of 4 specific stores (ids: 1, 34, 45, 100).
How can I do that?
I've tried this:
SELECT * FROM products WHERE store_id IN (1, 34, 45, 100)
But that query returns multiple records per store_id.
I need the following result:
store_id|title  |
--------|-------|
       1|title a|
      34|title b|
      45|title c|
     100|title d|
To get a truly random pick of the products, use the row_number() window function with a random order.
This query shows all the data together with a random index for each product within its store:
select products.*,
row_number() over (partition by store_id order by random()) rn
from products
where store_id in (1,34)
store_id|product_id|title|rn|
--------+----------+-----+--+
1| 1|a | 1|
1| 3|c | 2|
1| 2|b | 3|
34| 6|f | 1|
34| 7|g | 2|
34| 8|h | 3|
34| 5|e | 4|
34| 4|d | 5|
To get only one product per store, simply filter on rn = 1:
with prod as (
select products.*,
row_number() over (partition by store_id order by random()) rn
from products
where store_id in (1,34)
)
select store_id, title from prod
where rn = 1
;
store_id|title|
--------+-----+
1|a |
34|e |
Note this query will produce a different result on each run. If you need stable results, you must call setseed() before each execution, e.g.
SELECT setseed(1)
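For instance, a sketch of a reproducible run (setseed() accepts a value between -1 and 1 and has to be called in the same session, right before the query; the seed value here is arbitrary):
SELECT setseed(0.42);
with prod as (
select products.*,
row_number() over (partition by store_id order by random()) rn
from products
where store_id in (1,34)
)
select store_id, title from prod
where rn = 1;
As long as the underlying data does not change, each run seeded this way should return the same pick.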
Use the DISTINCT ON construct to get one record per value of the desired column:
SELECT distinct on (store_id) store_id, title FROM products WHERE store_id IN (1, 34, 45, 100);
Demo in sqldaddy.io
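One caveat: DISTINCT ON keeps the first row per store_id according to the ORDER BY, so without an explicit order the pick is arbitrary rather than random. A sketch of a random variant is to order within each store by random():
SELECT DISTINCT ON (store_id) store_id, title
FROM products
WHERE store_id IN (1, 34, 45, 100)
ORDER BY store_id, random();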
I have a table in the following format (id is the primary key):
id|timestamps |year|month|day|groups_ids|status |SCHEDULED |uid|
--|-------------------|----|-----|---|----------|-------|-------------------|---|
1|2021-02-04 17:18:24|2020| 8| 9| 1|OK |2020-08-09 00:00:00| 1|
2|2021-02-04 17:18:09|2020| 9| 9| 1|OK |2020-09-09 00:00:00| 1|
3|2021-02-04 17:19:51|2020| 10| 9| 1|HOLD |2020-10-09 00:00:00| 1|
4|2021-02-04 17:19:04|2020| 10| 10| 2|HOLD |2020-10-09 00:00:00| 1|
5|2021-02-04 17:18:30|2020| 10| 11| 2|HOLD |2020-10-09 00:00:00| 1|
6|2021-02-04 17:18:57|2020| 10| 12| 2|OK |2020-10-09 00:00:00| 1|
7|2021-02-04 17:18:24|2020| 8| 9| 1|HOLD |2020-08-09 00:00:00| 2|
8|2021-02-04 17:18:09|2020| 9| 9| 2|HOLD |2020-09-09 00:00:00| 2|
9|2021-02-04 17:19:51|2020| 10| 9| 2|HOLD |2020-10-09 00:00:00| 2|
10|2021-02-04 17:19:04|2020| 10| 10| 2|HOLD |2020-10-09 00:00:00| 2|
11|2021-02-04 17:18:30|2020| 10| 11| 2|HOLD |2020-10-09 00:00:00| 2|
12|2021-02-04 17:18:57|2020| 10| 12| 2|HOLD |2020-10-09 00:00:00| 2|
The task is: I want to extract the groups_ids for each uid where the status is OK, ordered by SCHEDULED ascending; if no OK is found among the records for a uid, it should take the latest HOLD based on year, month, and day. After that I want to compute a weighted score from the groups_ids:
group_ids > score
1 > 100
2 > 80
3 > 60
4 > 50
5 > 10
6 > 50
7 > 0
So [1,1,2] becomes (100+100+80) = 280.
The result should look like this:
ids|uid|pattern|score|
---|---|-------|-----|
1| 1|[1,1,2]| 280|
2| 2|[2] | 80|
It's pretty hard since I cannot find anything like Python's for loop and append operations in PostgreSQL.
step-by-step demo: db<>fiddle
SELECT
s.uid, s.values,
sum(v.value) as score
FROM (
SELECT DISTINCT ON (uid)
uid,
CASE
WHEN cardinality(ok_count) > 0 THEN ok_count
ELSE ARRAY[last_value]
END as values
FROM (
SELECT
*,
ARRAY_AGG(groups_ids) FILTER (WHERE status = 'OK') OVER (PARTITION BY uid ORDER BY scheduled) as ok_count,
first_value(groups_ids) OVER (PARTITION BY uid ORDER BY year, month DESC) as last_value
FROM mytable
) s
ORDER BY uid, scheduled DESC
) s,
unnest(values) as u_group_id
JOIN (VALUES
(1, 100), (2, 80), (3, 60), (4, 50), (5,10), (6, 50), (7, 0)
) v(group_id, value) ON v.group_id = u_group_id
GROUP BY s.uid, s.values
Phew... quite complex. Let's have a look at the steps:
a)
SELECT
*,
-- 1:
ARRAY_AGG(groups_ids) FILTER (WHERE status = 'OK') OVER (PARTITION BY uid ORDER BY scheduled) as ok_count,
-- 2:
first_value(groups_ids) OVER (PARTITION BY uid ORDER BY year, month DESC) as last_value
FROM mytable
Using the array_agg() window function to create an array of groups_ids without losing the other data, as we would with a simple GROUP BY. The FILTER clause puts only the status = OK records into the array.
Find the last groups_ids value of a group (partition) using the first_value() window function. With descending order it returns the last value.
b)
SELECT DISTINCT ON (uid) -- 2
uid,
CASE -- 1
WHEN cardinality(ok_count) > 0 THEN ok_count
ELSE ARRAY[last_value]
END as values
FROM (
...
) s
ORDER BY uid, scheduled DESC -- 2
The CASE expression either takes the previously created array (from step a1) or, if there is none, takes the last value (from step a2) and creates a one-element array from it.
The DISTINCT ON clause returns only the first element of an ordered group. The group is your uid and the order is given by the column scheduled. Since you want the last record within the group rather than the first, you have to order it DESC to make the most recent one the topmost record, which is the one DISTINCT ON keeps.
c)
SELECT
uid,
group_id
FROM (
...
) s,
unnest(values) as group_id -- 1
The arrays are expanded into one record per element. That makes it possible to join the weighted values later.
d)
SELECT
s.uid, s.values,
sum(v.weighted_value) as score -- 2
FROM (
...
) s,
unnest(values) as u_group_id
JOIN (VALUES
(1, 100), (2, 80), ...
) v(group_id, weighted_value) ON v.group_id = u_group_id -- 1
GROUP BY s.uid, s.values -- 2
Join your weighted values on the array elements. Naturally, this source can be a table, a query, or anything else.
Regroup by uid to calculate the SUM() of the weighted values.
Additional note:
You should avoid storing duplicate data. You don't need to store the date parts year, month, and day if you also store the complete date; you can always derive them from it.
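For example, a sketch using the scheduled column from the table above:
SELECT EXTRACT(YEAR FROM scheduled) AS year,
       EXTRACT(MONTH FROM scheduled) AS month,
       EXTRACT(DAY FROM scheduled) AS day
FROM mytable;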
I need to compute, for every product in every store, the sum of the values over the last 30 days (exclusive) relative to each date.
Assuming all months have 30 days:
date|store|product|values
2020-06-30|Store1|Product1|1
2020-07-02|Store1|Product2|4
2020-07-01|Store2|Product1|3
2020-07-18|Store1|Product1|4
2020-07-18|Store1|Product2|2
2020-07-18|Store2|Product1|2
2020-07-30|Store1|Product1|1
2020-08-01|Store1|Product1|1
2020-08-01|Store1|Product2|1
2020-08-01|Store2|Product1|6
For the rows of 2020-08-01, sum the values from (2020-08-01 - 30 days) up to 2020-07-31 and put the result in the 2020-08-01 line, like this:
(The first line doesn't include '2020-06-30' because it is more than 30 days ago, nor '2020-08-01' because it is the same day, and so on...)
date|store|product|sum_values_over_last_30_days_to_this_date
2020-08-01|Store1|Product1|5
2020-08-01|Store1|Product2|6
2020-08-01|Store2|Product1|5
....
I tried the query below and it didn't work either:
spark.sql("""
SELECT
a.date,
a.store,
a.product,
SUM(a.values) OVER (PARTITION BY a.product,a.store ORDER BY a.date BETWEEN a.date - INTERVAL '1' DAY AND a.date - INTERVAL '30' DAY) AS sum
FROM table a
""").show()
Can anybody help me?
You can try a self-join rather than a window function; maybe this kind of join will work:
SELECT
a.date,
a.store,
a.product,
SUM(IFNULL(b.value,0))
FROM
table a
LEFT JOIN
(
SELECT
a.date,
a.store,
a.product,
a.value
FROM
table a
)b
ON
a.store = b.store
AND
a.product = b.product
AND
b.date >= a.date - INTERVAL 30 DAYS
AND b.date < a.date
GROUP BY
1,2,3
Make sure to sum the value from the inner query, to have the sum up until this date.
Here is my trial with the sample dataframe.
+----------+------+--------+------+
| date| store| product|values|
+----------+------+--------+------+
|2020-08-10|Store1|Product1| 1|
|2020-08-11|Store1|Product1| 1|
|2020-08-12|Store1|Product1| 1|
|2020-08-13|Store1|Product2| 1|
|2020-08-14|Store1|Product2| 1|
|2020-08-15|Store1|Product2| 1|
|2020-08-16|Store1|Product1| 1|
|2020-08-17|Store1|Product1| 1|
|2020-08-18|Store1|Product1| 1|
|2020-08-19|Store1|Product2| 1|
|2020-08-20|Store1|Product2| 1|
|2020-08-21|Store1|Product2| 1|
|2020-08-22|Store1|Product1| 1|
|2020-08-21|Store1|Product1| 1|
|2020-08-22|Store1|Product1| 1|
|2020-08-20|Store1|Product2| 1|
|2020-08-21|Store1|Product2| 1|
|2020-08-22|Store1|Product2| 1|
+----------+------+--------+------+
df.withColumn("date", to_date($"date"))
.createOrReplaceTempView("table")
spark.sql("""
SELECT
date,
store,
product,
COALESCE(SUM(values) OVER (PARTITION BY 1 ORDER BY date RANGE BETWEEN 3 PRECEDING AND 1 PRECEDING), 0) as sum
FROM table
""").show()
+----------+------+--------+---+
| date| store| product|sum|
+----------+------+--------+---+
|2020-08-10|Store1|Product1|0.0|
|2020-08-11|Store1|Product1|1.0|
|2020-08-12|Store1|Product1|2.0|
|2020-08-13|Store1|Product2|3.0|
|2020-08-14|Store1|Product2|3.0|
|2020-08-15|Store1|Product2|3.0|
|2020-08-16|Store1|Product1|3.0|
|2020-08-17|Store1|Product1|3.0|
|2020-08-18|Store1|Product1|3.0|
|2020-08-19|Store1|Product2|3.0|
|2020-08-20|Store1|Product2|3.0|
|2020-08-20|Store1|Product2|3.0|
|2020-08-21|Store1|Product2|4.0|
|2020-08-21|Store1|Product1|4.0|
|2020-08-21|Store1|Product2|4.0|
|2020-08-22|Store1|Product1|6.0|
|2020-08-22|Store1|Product1|6.0|
|2020-08-22|Store1|Product2|6.0|
+----------+------+--------+---+
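A sketch adapting the same frame idea to the original requirement (one sum per store and product over the 30 days before each date, excluding the date itself). Since RANGE with numeric offsets needs a numeric ORDER BY expression, the date is converted to a day number with datediff() against an arbitrary fixed reference date; the column and view names are the ones used above, and the query would be run through spark.sql("""...""") as before:
SELECT
date,
store,
product,
COALESCE(SUM(`values`) OVER (
PARTITION BY store, product
ORDER BY datediff(date, '1970-01-01')
RANGE BETWEEN 30 PRECEDING AND 1 PRECEDING
), 0) AS sum_last_30_days
FROM table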
Goal: Find the count of uniques and duplicates in the worker_ref_id column.
I found the solution here in MySQL, but IF does not exist in PostgreSQL. So how would I do that in PostgreSQL?
I have the following table:
|worker_ref_id|bonus_amount|
| 1| 5000|
| 2| 3000|
| 3| 4000|
| 1| 4500|
| 2| 3500|
I would like the following output:
|Unique|Duplicates|
|1 |2 |
I get the right answer but it appears as two rows rather than two columns and one row:
SELECT COUNT(*) AS "Duplicate" FROM (SELECT worker_ref_id,
COUNT(worker_ref_id) AS "Count"
FROM bonus
GROUP BY worker_ref_id
HAVING COUNT(worker_ref_id) > 1) AS mySub
UNION
SELECT COUNT(*) AS "Unique" FROM (SELECT worker_ref_id,
COUNT(worker_ref_id) AS "Count"
FROM bonus
GROUP BY worker_ref_id
HAVING COUNT(worker_ref_id) = 1) AS mySub2
We can do this in two steps, using a CTE:
WITH cte AS (
SELECT worker_ref_id, COUNT(*) AS cnt
FROM bonus
GROUP BY worker_ref_id
)
SELECT
COUNT(*) FILTER (WHERE cnt = 1) AS "Unique",
COUNT(*) FILTER (WHERE cnt > 1) AS "Duplicates"
FROM cte;
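Since the MySQL approach you found relies on IF, note that PostgreSQL's counterpart is CASE; here is a sketch of the same two-step query written with CASE instead of FILTER:
WITH cte AS (
SELECT worker_ref_id, COUNT(*) AS cnt
FROM bonus
GROUP BY worker_ref_id
)
SELECT
SUM(CASE WHEN cnt = 1 THEN 1 ELSE 0 END) AS "Unique",
SUM(CASE WHEN cnt > 1 THEN 1 ELSE 0 END) AS "Duplicates"
FROM cte;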
I have tried numerous approaches to turn the following:
Gender, Age, Value
1, 20, 21
2, 23, 22
1, 26, 23
2, 29, 24
into
Male_Age, Male_Value, Female_Age, Female_Value
20, 21, 23, 22
26, 23, 29, 24
What I need to do is group by gender, and instead of using an aggregate like sum, count, or avg, I need to create a List[age] and a List[value]. This should be possible because I am using a Dataset, which allows functional operations.
If the number of rows for males and females are not the same, the columns should be filled with nulls.
One approach I tried was to make a new dataframe using the columns of other dataframes, like so:
df
.select(male.select("sex").where('sex === 1).col("sex"),
female.select("sex").where('sex === 2).col("sex"))
However, this bizarrely produces output like so:
sex, sex,
1, 1
2, 2
1, 1
2, 2
I can't see how that is possible.
I also tried using pivot, but it forces me to aggregate after the group by:
df.withColumn("sex2", df.col("sex"))
.groupBy("sex")
.pivot("sex2")
.agg(
sum('value').as("mean"),
stddev('value).as("std. dev") )
.show()
|sex| 1.0_mean| 1.0_std. dev| 2.0_mean| 2.0_std. dev|
|1.0|0.4926065526| 1.8110632697| | |
|2.0| | |0.951250372|1.75060275400785|
The following code does what I need in Oracle SQL, so it should be possible in Spark SQL too, I reckon...
drop table mytable
CREATE TABLE mytable
( gender number(10) NOT NULL,
age number(10) NOT NULL,
value number(10) );
insert into mytable values (1,20,21);
insert into mytable values(2,23,22);
insert into mytable values (1,26,23);
insert into mytable values (2,29,24);
insert into mytable values (1,30,25);
select * from mytable;
SELECT A.VALUE AS MALE,
B.VALUE AS FEMALE
FROM
(select value, rownum RN from mytable where gender = 1) A
FULL OUTER JOIN
(select value, rownum RN from mytable where gender = 2) B
ON A.RN = B.RN
The following should give you the result.
val df = Seq(
(1, 20, 21),
(2, 23, 22),
(1, 26, 23),
(2, 29, 24)
).toDF("Gender", "Age", "Value")
scala> df.show
+------+---+-----+
|Gender|Age|Value|
+------+---+-----+
| 1| 20| 21|
| 2| 23| 22|
| 1| 26| 23|
| 2| 29| 24|
+------+---+-----+
// Gender 1 = Male
// Gender 2 = Female
import org.apache.spark.sql.expressions.Window
val byGender = Window.partitionBy("gender").orderBy("gender")
val males = df
.filter("gender = 1")
.select($"age" as "male_age",
$"value" as "male_value",
row_number() over byGender as "RN")
scala> males.show
+--------+----------+---+
|male_age|male_value| RN|
+--------+----------+---+
| 20| 21| 1|
| 26| 23| 2|
+--------+----------+---+
val females = df
.filter("gender = 2")
.select($"age" as "female_age",
$"value" as "female_value",
row_number() over byGender as "RN")
scala> females.show
+----------+------------+---+
|female_age|female_value| RN|
+----------+------------+---+
| 23| 22| 1|
| 29| 24| 2|
+----------+------------+---+
scala> males.join(females, Seq("RN"), "outer").show
+---+--------+----------+----------+------------+
| RN|male_age|male_value|female_age|female_value|
+---+--------+----------+----------+------------+
| 1| 20| 21| 23| 22|
| 2| 26| 23| 29| 24|
+---+--------+----------+----------+------------+
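For reference, a sketch of the same row_number-plus-full-outer-join idea in plain Spark SQL, assuming the data is registered as a temp view named mytable (as in the Oracle example) and that rows are numbered by age within each gender:
SELECT m.male_age, m.male_value, f.female_age, f.female_value
FROM (
SELECT age AS male_age, value AS male_value,
ROW_NUMBER() OVER (ORDER BY age) AS rn
FROM mytable WHERE gender = 1
) m
FULL OUTER JOIN (
SELECT age AS female_age, value AS female_value,
ROW_NUMBER() OVER (ORDER BY age) AS rn
FROM mytable WHERE gender = 2
) f
ON m.rn = f.rn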
Given a DataFrame called df with columns gender, age, and value, you can do this:
df.groupBy($"gender")
.agg(collect_list($"age"), collect_list($"value")).rdd.map { row =>
val ages: Seq[Int] = row.getSeq(1)
val values: Seq[Int] = row.getSeq(2)
(row.getInt(0), ages.head, ages.last, values.head, values.last)
}.toDF("gender", "male_age", "female_age", "male_value", "female_value")
This uses the collect_list aggregating function from the very helpful Spark functions library to aggregate the values you want. (As you can see, there is also a collect_set.)
After that, I don't know of any higher-level DataFrame functions to expand those columnar arrays into individual columns of their own, so I fall back to the lower-level RDD API our ancestors used. I simply expand everything into a Tuple and then turn it back into a DataFrame. The commenters above mention corner cases I have not addressed; using functions like headOption and lastOption might be useful there. But this should be enough to get you moving.