Get distinct values from multiple tables efficiently - postgresql

I have the following tables in a Postgres database:
Table folders
| id | name |
|-----|-----------|
| 1 | folder A |
| 2 | folder B |
Table files -- Represents the files in folders (large table)
| id | folder_id |
|-----|-----------|
| 1 | 1 |
| 2 | 1 |
| 3 | 2 |
Table metadata_one -- Represents some info relating to files (large table)
| id | file_id | label |
|-----|---------|-------|
| 1 | 1 | abc |
| 2 | 1 | def |
| 3 | 2 | abc |
Table metadata_two -- Represents some other info relating to files (large table)
| id | file_id | label |
|-----|---------|-------|
| 1 | 1 | abc |
| 2 | 1 | def |
| 3 | 2 | abc |
How can I get a list of distinct label values on the folder level?
Desired result
Only distinct label values, across both metadata tables
| name | labels |
|--------------|-----------|
| folder A | abc,def |
| folder B | abc |
Attempt
Currently this is what I do:
SELECT
    folders.name,
    string_agg(m1.label, ',') AS m1_labels,
    string_agg(m2.label, ',') AS m2_labels
FROM
    folders
    JOIN files ON files.folder_id = folders.id
    JOIN metadata_one m1 ON m1.file_id = files.id
    JOIN metadata_two m2 ON m2.file_id = files.id
GROUP BY
    folders.name
But this gives me the following:
| name | m1_labels | m2_labels |
|--------------|-----------|-----------|
| folder A | abc,def | abc,def |
| folder B | abc | abc |
I am looking for an optimised solution, since the files and metadata tables can be very large.

You can generate a UNION of metadata_one and metadata_two in a CTE and then do your string aggregation, like this:
WITH metadata_by_folder AS (
    SELECT
        folders.name,
        m1.label AS label
    FROM
        folders
        JOIN files ON files.folder_id = folders.id
        JOIN metadata_one m1 ON m1.file_id = files.id
    UNION  -- UNION (not UNION ALL) removes duplicate (name, label) pairs
    SELECT
        folders.name,
        m2.label AS label
    FROM
        folders
        JOIN files ON files.folder_id = folders.id
        JOIN metadata_two m2 ON m2.file_id = files.id
)
SELECT
    metadata_by_folder.name,
    string_agg(metadata_by_folder.label, ',') AS labels
FROM
    metadata_by_folder
GROUP BY
    metadata_by_folder.name;
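Since the question asks for an optimised solution over large tables, it is also worth making sure the join columns are indexed so the planner can avoid sequential scans. A minimal sketch, assuming no equivalent indexes already exist (index names are hypothetical):
-- Hypothetical index names; skip any that already exist.
CREATE INDEX IF NOT EXISTS idx_files_folder_id ON files (folder_id);
CREATE INDEX IF NOT EXISTS idx_metadata_one_file_id ON metadata_one (file_id);
CREATE INDEX IF NOT EXISTS idx_metadata_two_file_id ON metadata_two (file_id);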

Related

Make sure every distinct value of Column1 has a row with every distinct value of Column2, by populating a table with 0s - postgresql

Here's a crude example I've made up to illustrate what I want to achieve:
table1:
| Shop | Product | QuantityInStock |
| a | Prod1 | 13 |
| a | Prod3 | 13 |
| b | Prod2 | 13 |
| b | Prod3 | 13 |
| b | Prod4 | 13 |
table1 becomes:
| Shop | Product | QuantityInStock |
| a | Prod1 | 13 |
| a | Prod2 | 0 | -- new
| a | Prod3 | 13 |
| a | Prod4 | 0 | -- new
| b | Prod1 | 0 | -- new
| b | Prod2 | 13 |
| b | Prod3 | 13 |
| b | Prod4 | 13 |
In this example, I want every Shop/Product combination to be represented:
every Shop {a,b} should have a row for every Product {Prod1, Prod2, Prod3, Prod4}.
QuantityInStock=13 has no significance, I just wanted a placeholder number :)
Use a calendar table cross join approach:
SELECT s.Shop, p.Product, COALESCE(t1.QuantityInStock, 0) AS QuantityInStock
FROM (SELECT DISTINCT Shop FROM table1) s
CROSS JOIN (SELECT DISTINCT Product FROM table1) p
LEFT JOIN table1 t1
ON t1.Shop = s.Shop AND
t1.Product = p.Product
ORDER BY
s.Shop,
p.Product;
The idea here is to generate an intermediate table containing all shop/product combinations via a cross join. Then, we left join this to table1. Any shop/product combination which does not have a match in the actual table is assigned a zero stock quantity.
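If you actually want to populate table1 with the missing zero rows (as the title asks), rather than just select the full grid, here is a sketch along the same lines, assuming table1 has exactly the three columns shown above:
-- A sketch, assuming table1 has only Shop, Product, QuantityInStock.
INSERT INTO table1 (Shop, Product, QuantityInStock)
SELECT s.Shop, p.Product, 0
FROM (SELECT DISTINCT Shop FROM table1) s
CROSS JOIN (SELECT DISTINCT Product FROM table1) p
WHERE NOT EXISTS (
    SELECT 1 FROM table1 t1
    WHERE t1.Shop = s.Shop AND t1.Product = p.Product
);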

Select common values when using group by [Postgres]

I have three main tables (meetings, persons, hobbies) with two relation tables.
Table meetings
+---------------+
| id | subject |
+----+----------+
| 1 | Kickoff |
| 2 | Relaunch |
| 3 | Party |
+----+----------+
Table persons
+------------+
| id | name |
+----+-------+
| 1 | John |
| 2 | Anna |
| 3 | Linda |
+----+-------+
Table hobbies
+---------------+
| id | name |
+----+----------+
| 1 | Soccer |
| 2 | Tennis |
| 3 | Swimming |
+----+----------+
Relation Table meeting_person
+-----------------+-----------+
| id | meeting_id | person_id |
+----+------------+-----------+
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 3 | 1 | 3 |
| 4 | 2 | 1 |
| 5 | 2 | 2 |
| 6 | 3 | 1 |
+----+------------+-----------+
Relation Table person_hobby
+----------------+----------+
| id | person_id | hobby_id |
+----+-----------+----------+
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 3 | 1 | 3 |
| 4 | 2 | 1 |
| 5 | 2 | 2 |
| 6 | 3 | 1 |
+----+-----------+----------+
Now I want to find the common hobbies of all persons attending each meeting.
So the desired result would be:
+------------+-----------------+------------------------+
| meeting_id | persons | common_hobbies |
| | (Aggregated) | (Aggregated) |
+------------+-----------------+------------------------+
| 1 | John,Anna,Linda | Soccer |
| 2 | John,Anna | Soccer,Tennis |
| 3 | John | Soccer,Tennis,Swimming |
+------------+-----------------+------------------------+
My current work in progress is:
select
    m.id as "meeting_id",
    (
        select string_agg(distinct p.name, ',')
        from meeting_person mp
        inner join persons p on mp.person_id = p.id
        where m.id = mp.meeting_id
    ) as "persons",
    string_agg(distinct h2.name, ',') as "common_hobbies"
from meetings m
inner join meeting_person mp2 on m.id = mp2.meeting_id
inner join persons p2 on mp2.person_id = p2.id
inner join person_hobby ph2 on p2.id = ph2.person_id
inner join hobbies h2 on ph2.hobby_id = h2.id
group by m.id
But this query does not list the common_hobbies; it lists all hobbies that are mentioned at least once:
+------------+-----------------+------------------------+
| meeting_id | persons | common_hobbies |
+------------+-----------------+------------------------+
| 1 | John,Anna,Linda | Soccer,Tennis,Swimming |
| 2 | John,Anna | Soccer,Tennis,Swimming |
| 3 | John | Soccer,Tennis,Swimming |
+------------+-----------------+------------------------+
Does anyone have any hints for me on how I could solve this problem?
Cheers
This problem can be solved by implementing a custom aggregate function:
-- Intersects two arrays; a NULL argument means "no restriction yet",
-- so the aggregate can start from the first array it sees.
create or replace function array_intersect(anyarray, anyarray)
returns anyarray language sql
as $$
select
    case
        when $1 is null then $2
        when $2 is null then $1
        else
            array(
                select unnest($1)
                intersect
                select unnest($2))
    end;
$$;

-- Aggregate that folds array_intersect over all input arrays per group.
create aggregate array_intersect_agg (anyarray)
(
    sfunc = array_intersect,
    stype = anyarray
);
With that in place, the solution can be:
select
    meeting_id,
    array_agg(ph.name) persons,
    array_intersect_agg(hobby) common_hobbies
from meeting_person mp
join (
    select p.id, p.name, array_agg(h.name) hobby
    from person_hobby ph
    join persons p on ph.person_id = p.id
    join hobbies h on h.id = ph.hobby_id
    group by p.id, p.name
) ph on ph.id = mp.person_id
group by meeting_id;
Result:
meeting_id | persons | common_hobbies
-----------+-----------------------+--------------------------
1 | {John,Anna,Linda} | {Soccer}
3 | {John} | {Soccer,Tennis,Swimming}
2 | {John,Anna} | {Soccer,Tennis}
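If you need the comma-separated text shown in the desired result rather than arrays, you can wrap the same aggregates; a minimal sketch reusing the query above:
-- Same query as above; only the two output expressions change.
select
    meeting_id,
    string_agg(ph.name, ',') persons,
    array_to_string(array_intersect_agg(hobby), ',') common_hobbies
from meeting_person mp
join (
    select p.id, p.name, array_agg(h.name) hobby
    from person_hobby ph
    join persons p on ph.person_id = p.id
    join hobbies h on h.id = ph.hobby_id
    group by p.id, p.name
) ph on ph.id = mp.person_id
group by meeting_id;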

Find rows in relation with at least n rows in a different table without joins

I have a table as such (tbl):
+----+------+-----+
| pk | attr | val |
+----+------+-----+
| 0 | ohif | 4 |
| 1 | foha | 56 |
| 2 | slns | 2 |
| 3 | faso | 11 |
+----+------+-----+
And another table in n-to-1 relationship with tbl (tbl2):
+----+-----+
| pk | rel |
+----+-----+
| 0 | 0 |
| 1 | 1 |
| 2 | 0 |
| 3 | 2 |
| 4 | 2 |
| 5 | 3 |
| 6 | 1 |
| 7 | 2 |
+----+-----+
(tbl2.rel -> tbl.pk.)
I would like to select only the rows from tbl which are in relationship with at least n rows from tbl2.
I.e., for n = 2, I want this table:
+----+------+-----+
| pk | attr | val |
+----+------+-----+
| 0 | ohif | 4 |
| 1 | foha | 56 |
| 2 | slns | 2 |
+----+------+-----+
This is the solution I came up with:
SELECT DISTINCT ON (tbl.pk) tbl.*
FROM (
SELECT tbl.pk
FROM tbl
RIGHT OUTER JOIN tbl2 ON tbl2.rel = tbl.pk
GROUP BY tbl.pk
HAVING COUNT(tbl2.*) >= 2 -- n
) AS tbl_candidates
LEFT OUTER JOIN tbl ON tbl_candidates.pk = tbl.pk
Can it be done without selecting the candidates with a subquery and re-joining the table with itself?
I'm on Postgres 10. A standard SQL solution would be better, but a Postgres solution is acceptable.
OK, just join once, as below:
select
    t1.pk,
    t1.attr,
    t1.val
from
    tbl t1
    join tbl2 t2 on t1.pk = t2.rel
group by
    t1.pk,
    t1.attr,
    t1.val
having count(1) >= 2
order by t1.pk;
pk | attr | val
----+------+-----
0 | ohif | 4
1 | foha | 56
2 | slns | 2
(3 rows)
Or join once and use a CTE (WITH clause), as below:
with tmp as (
    select rel from tbl2 group by rel having count(1) >= 2
)
select b.* from tmp t join tbl b on t.rel = b.pk order by b.pk;
pk | attr | val
----+------+-----
0 | ohif | 4
1 | foha | 56
2 | slns | 2
(3 rows)
Is the SQL clearer?
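If the goal is literally to avoid joins, as the title says, a correlated-subquery sketch is another option (whether it performs better depends on an index on tbl2.rel):
-- A sketch without any join; counts matching tbl2 rows per tbl row.
select t1.*
from tbl t1
where (select count(*) from tbl2 t2 where t2.rel = t1.pk) >= 2  -- n
order by t1.pk;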

PostgreSQL - How to do a Loop on a column

I am struggling to write a loop in Postgres; functions in Postgres are not my strong suit.
I have the following table on postgres:
| portfolio_1 | total_risk |
|----------------|------------|
| Top 10 Bets | |
| AAPL34 | 2,06699 |
| DISB34 | 1,712684 |
| PETR4 | 0,753324 |
| PETR3 | 0,087767 |
| VALE3 | 0,086346 |
| LREN3 | 0,055108 |
| AMZO34 | 0,0 |
| Bottom 10 Bets | |
| AAPL34 | 0,0 |
What I'm trying to do is get the values after the "Top 10 Bets" and before the "Bottom 10 Bets".
My goal is the following result:
| portfolio_1 | total_risk |
|-------------|------------|
| AAPL34 | 2,06699 |
| DISB34 | 1,712684 |
| PETR4 | 0,753324 |
| PETR3 | 0,087767 |
| VALE3 | 0,086346 |
| LREN3 | 0,055108 |
| AMZO34 | 0,0 |
So my goal is to remove the "Top 10 Bets" row, the "Bottom 10 Bets" row, and the repeated AAPL34 after "Bottom 10 Bets".
The quantity of rows is variable (I'm importing it from an Excel file), so I need a loop to do this, right?
SQL tables and result sets represent unordered sets. There is no "before" or "after" unless rows explicitly provide that information.
Let me assume that you have such a column, which I will call id for convenience.
Then you can do this in several ways. Here is one:
select t.*
from t
where t.id > (select min(t2.id) from t t2 where t2.portfolio_1 = 'Top 10 Bets') and
t.id < (select max(t2.id) from t t2 where t2.portfolio_1 = 'Bottom 10 Bets');
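If you can re-run the import, one way to provide that ordering column is to create the table with an identity column before loading, so the load order is captured. A sketch, with a hypothetical table name (identity columns need Postgres 10+; use bigserial on older versions):
-- Hypothetical table name; id is assigned in load order by COPY.
CREATE TABLE portfolio_raw (
    id          bigint GENERATED ALWAYS AS IDENTITY,
    portfolio_1 text,
    total_risk  text
);
-- COPY portfolio_raw (portfolio_1, total_risk) FROM '/path/to/file.csv' WITH (FORMAT csv);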

postgres sql : getting unified rows

I have one table where I dump all records from different sources (x, y, z) like below
+----+--------+
| id | source |
+----+--------+
| 1 | x |
| 2 | y |
| 3 | x |
| 4 | x |
| 5 | y |
| 6 | z |
| 7 | z |
| 8 | x |
| 9 | z |
| 10 | z |
+----+--------+
Then I have one mapping table where I map values between sources based on my use case, like below:
+----+-----------+
| id | mapped_id |
+----+-----------+
| 1 | 2 |
| 1 | 9 |
| 3 | 7 |
| 4 | 10 |
| 5 | 1 |
+----+-----------+
I want merged results where I can see only unique results like
+-----+------------+
| id | mapped_ids |
+-----+------------+
| 1 | 2,9,5 |
| 3 | 7 |
| 4 | 10 |
| 6 | null |
| 8 | null |
+-----+------------+
I have tried different options but could not figure this out. Is there a way I can write joins to do this? I have to use the mapping table where the associations are stored, and identify unique records along with records that are not mapped anywhere.
My understanding is that you want to see all dump_table IDs that do not appear in the mapped_id column, and then aggregate the mapped_ids for those that are left:
select d1.id,
array_agg(m1.mapped_id order by m1.mapped_id) filter (where m1.mapped_id is not null) as mapped_ids
from dump_table d1
left join mapping_table m1 using (id)
where not exists (select *
from mapping_table m2
where m2.mapped_id = d1.id)
group by d1.id;
Online example: https://rextester.com/JQZ17650
Try something like this (using the dump and mapping tables directly):
SELECT id, source, ARRAY_AGG(mapped_id) AS mapped_ids
FROM dump_table AS t1
LEFT JOIN mapping_table AS t2 USING (id)
GROUP BY id, source;
Note that, unlike the previous query, this does not exclude IDs that only appear in the mapped_id column.