I have two tables "prods"(id, price) and "prods_prices"(id, prod_id, price). It is necessary to display "prods". The minimum price for each product from prods_prices, and if there is none, then 'prods'.'price'.
I did it but I think it's wrong.
SELECT DISTINCT "prods"."id" as "id", "prods"."price",
CASE
WHEN (SELECT min("prods_prices"."price") FROM "prods_prices" WHERE "prods_prices"."prod_id"=id) isnull THEN "prods"."price"
ELSE (SELECT min("prods_prices"."price") FROM "prods_prices" WHERE "prods_prices"."prod_id"=id)
END
AS "price",
FROM "prods"
If id is the primary key on prods, then:
select prods.id,
coalesce(min(prods_prices.price), prods.price) price
from prods
left join prods_prices on prods_prices.prod_id = prods.id
group by prods.id
Coalesce returns prods.price if min(prods_prices.price) is null.
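Side note: grouping by prods.id alone works here because PostgreSQL treats the other prods columns as functionally dependent on the primary key. On engines without that rule, or if you want extra columns from prods, list them in the GROUP BY; a sketch (the name column is an assumption, it is not in the schema you posted):
select prods.id,
       prods.name,
       coalesce(min(prods_prices.price), prods.price) as price
from prods
left join prods_prices on prods_prices.prod_id = prods.id
group by prods.id, prods.name, prods.price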
With the little information you gave, you can use the following query:
with min_price as (select prod_id, min(price) as price from prods_prices group by prod_id)
select p.id, case when mp is null then p.price else mp.price end as price
from prods p left join min_price mp on p.id = mp.prod_id
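Assuming prices in prods_prices are never NULL, the CASE can also be written more compactly with coalesce; an equivalent sketch:
with min_price as (select prod_id, min(price) as price from prods_prices group by prod_id)
select p.id, coalesce(mp.price, p.price) as price
from prods p left join min_price mp on p.id = mp.prod_id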
I have two child tables with a one-to-many relationship to the parent table. I want to join them without extra duplication.
In the actual schema, there are more one-to-many relationships to this parent and to the child tables. I'm sharing a simplified schema to make the root of the problem easy to see.
Any suggestion is highly appreciated.
CREATE TABLE computer (
id SERIAL PRIMARY KEY,
name TEXT
);
CREATE TABLE c_user (
id SERIAL PRIMARY KEY,
computer_id INT REFERENCES computer,
name TEXT
);
CREATE TABLE c_accessories (
id SERIAL PRIMARY KEY,
computer_id INT REFERENCES computer,
name TEXT
);
INSERT INTO computer (name) VALUES ('HP'), ('Toshiba'), ('Dell');
INSERT INTO c_user (computer_id, name) VALUES (1, 'John'), (1, 'Elton'), (1, 'David'), (2, 'Ali');
INSERT INTO c_accessories (computer_id, name) VALUES (1, 'mouse'), (1, 'keyboard'), (1, 'mouse'), (2, 'mouse'), (2, 'printer'), (2, 'monitor'), (3, 'speaker');
This is my query:
SELECT
c.id
,c.name
,jsonb_agg(c_user.name)
,jsonb_agg(c_accessories.name)
FROM
computer c
JOIN
c_user ON c_user.computer_id = c.id
JOIN
c_accessories ON c_accessories.computer_id = c.id
GROUP BY c.id
I'm getting this result:
1 "HP" ["John", "John", "John", "Elton", "Elton", "Elton", "David", "David", "David"] ["mouse", "keyboard", "mouse", "mouse", "keyboard", "mouse", "mouse", "keyboard", "mouse"]
2 "Toshiba" ["Ali", "Ali", ""Ali"] ["monitor", "printer", "mouse"]
I want to get this result (preserving duplicates if they exist in the database), and also to be able to filter computers by user and/or accessory:
1 "HP" ["John", "Elton", "David"] ["keyboard", "mouse", "mouse"]
2 "Toshiba" ["Ali"] ["monitor", "printer", "mouse"]
3 "Dell" Null ["speaker"]
Use subqueries instead of joins:
SELECT
c.id,
c.name,
(SELECT
jsonb_agg(c_user.name)
FROM c_user
WHERE c_user.computer_id = c.id
) AS user_names,
(SELECT
jsonb_agg(c_accessories.name)
FROM c_accessories
WHERE c_accessories.computer_id = c.id
) AS accessory_names
FROM
computer c
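If you also need to filter computers by user and/or accessory, as the question mentions, one option is to keep these correlated subqueries and filter with EXISTS; a sketch using the sample user 'John':
SELECT
    c.id,
    c.name,
    (SELECT jsonb_agg(c_user.name)
     FROM c_user
     WHERE c_user.computer_id = c.id) AS user_names,
    (SELECT jsonb_agg(c_accessories.name)
     FROM c_accessories
     WHERE c_accessories.computer_id = c.id) AS accessory_names
FROM
    computer c
WHERE EXISTS (
    SELECT 1
    FROM c_user
    WHERE c_user.computer_id = c.id
      AND c_user.name = 'John'
)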
Join the users to a derived table that joins and aggregates computers and accessories, then aggregate again.
SELECT ca.id,
jsonb_agg(u.name) AS users,
ca.accessories
FROM (SELECT c.id,
jsonb_agg(a.name) AS accessories
FROM computer AS c
LEFT JOIN c_accessories AS a
ON a.computer_id = c.id
GROUP BY c.id) AS ca
INNER JOIN c_user AS u
ON u.computer_id = ca.id
GROUP BY ca.id,
ca.accessories;
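Note that the INNER JOIN to c_user drops computers without any users (Dell in the sample data). If you need those rows as well, a LEFT JOIN plus a FILTER clause on the aggregate is one possible variant (a sketch; FILTER needs PostgreSQL 9.4 or later):
SELECT ca.id,
       jsonb_agg(u.name) FILTER (WHERE u.id IS NOT NULL) AS users,
       ca.accessories
FROM (SELECT c.id,
             jsonb_agg(a.name) AS accessories
      FROM computer AS c
      LEFT JOIN c_accessories AS a
        ON a.computer_id = c.id
      GROUP BY c.id) AS ca
LEFT JOIN c_user AS u
  ON u.computer_id = ca.id
GROUP BY ca.id,
         ca.accessories;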
You could also first aggregate including the IDs of users and accessories, so that you can use DISTINCT in the aggregation function, for example into arrays of records, and then re-aggregate to JSON in subqueries.
SELECT c.id,
(SELECT jsonb_agg(x.name)
FROM unnest(array_agg(DISTINCT row(u.id, u.name))) AS x
(id integer,
name text)) AS users,
(SELECT jsonb_agg(x.name)
FROM unnest(array_agg(DISTINCT row(a.id, a.name))) AS x
(id integer,
name text)) AS accessories
FROM computer AS c
LEFT JOIN c_accessories AS a
ON a.computer_id = c.id
INNER JOIN c_user AS u
ON u.computer_id = c.id
GROUP BY c.id;
Aggregate first, then join the computer table to the result of the aggregation.
select c.id, c.name,
cu.users,
ca.accessories
from computer c
left join (
select computer_id, jsonb_agg(name) as users
from c_user
group by computer_id
) as cu on cu.computer_id = c.id
left join (
select computer_id, jsonb_agg(name) as accessories
from c_accessories
group by computer_id
) as ca on ca.computer_id = c.id
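If you would rather see an empty JSON array than NULL for computers with no users or accessories (the desired output in the question keeps NULL, so this is optional), you could wrap the aggregated columns in COALESCE; a sketch:
select c.id, c.name,
       coalesce(cu.users, '[]'::jsonb) as users,
       coalesce(ca.accessories, '[]'::jsonb) as accessories
from computer c
left join (
    select computer_id, jsonb_agg(name) as users
    from c_user
    group by computer_id
) as cu on cu.computer_id = c.id
left join (
    select computer_id, jsonb_agg(name) as accessories
    from c_accessories
    group by computer_id
) as ca on ca.computer_id = c.id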
I'm trying to pull a dataset that returns records ONLY when two QUALIFER values are present. I've tried left joins, populating data into temp tables and manipulating it from there, and numerous HAVING clauses (resulting in subquery selects and additional groupings). I would appreciate any assistance on what I can do further.
Query:
Select E, CASE WHEN QUALIFER = '1' THEN 'NAME1' WHEN QUALIFER = '2' then 'NAME2' ELSE 'FINALNAME' END AS TYPE, count(rt.ID) 'Number '
from TABLE_ONE co (nolock)
join TABLE_TWO rt (nolock)
on co.ID = rt.ID
where co.E in (select * from #tempEmail)
AND convert(date, co.INSERTED_TIMESTAMP) between '1/1/2020' and '8/15/2020'
AND TRANS_STATUS = 'APPROVED'
group by E, QUALIFER
order by E, QUALIFER
Current resultset:
E TYPE Number
FAKEEMAIL1#Gmail NAME1 1
FAKEEMAIL1#Gmail NAME2 1
otheremailj#gmail.com Name1 21
Desired resultset:
E TYPE Number
FAKEEMAIL1#Gmail NAME1 1
FAKEEMAIL1#Gmail NAME2 1
Thank you.
Let's try the query below. I used a temp table to keep things simpler in my mind.
if object_id('tempdb.dbo.#email') is not null drop table #email
create table #email
(
email varchar(50),
typeValue varchar(15),
Number int
)
insert into #email(email, typeValue, Number)
Select
E,
CASE WHEN QUALIFER = '1' THEN 'NAME1' WHEN QUALIFER = '2' then 'NAME2' ELSE 'FINALNAME' END AS TYPE,
count(rt.ID) 'Number '
from TABLE_ONE co (nolock)
join TABLE_TWO rt (nolock)
on co.ID = rt.ID
where co.E in (select * from #tempEmail)
AND convert(date, co.INSERTED_TIMESTAMP) between '1/1/2020' and '8/15/2020'
AND TRANS_STATUS = 'APPROVED'
group by E, QUALIFER
select a.email, a.typeValue, a.Number
from #email a
inner join
(
select email, typeValue, rank() over (partition by email order by typeValue) as typeCount
from #email t
) as b
on b.email = a.email
and b.typeCount > 1
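For reference, the "only emails that end up with more than one TYPE" filter can also be expressed with a windowed count over the temp table; a sketch (an alternative to the rank/join above, keeping the Number column from the desired result):
select email, typeValue, Number
from (
    select email, typeValue, Number,
           count(*) over (partition by email) as typeCount
    from #email
) t
where t.typeCount > 1
order by email, typeValue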
I have a question regarding lateral joins in Postgres.
My use case is that I want to return a dataset that combines multiple tables but limits the number of publications and reviews returned. The simplified table schema is below:
Table Author
ID
NAME
Table Review
ID
AUTHOR_ID
PUBLICATION_ID
CONTENT
Table Publication
ID
NAME
Table AuthorPublication
AUTHOR_ID
PUBLICATION_ID
So for my initial query I have this:
SELECT
a.id,
a.name,
json_agg (
json_build_object (
'id', r.id,
'content', r.content
)
) AS reviews,
json_agg (
json_build_object(
'id', p.id,
'name', p.name
)
) AS publications
FROM
public.author a
INNER JOIN
public.review r ON r.author_id = a.id
INNER JOIN
public.author_publication ap ON ap.author_id = a.id
INNER JOIN
public.publication p ON p.id = ap.publication_id
WHERE
a.id = '1'
GROUP BY
a.id
This returns the data I need: for example, I get the author's name and id, and a list of all of their reviews and of the publications they belong to. What I want to be able to do is limit the number of reviews and publications, for example returning 5 reviews and 3 publications.
I tried doing this with a lateral query, but I'm running into an issue. A single lateral query works as intended, like this:
INNER JOIN LATERAL
(SELECT r.* FROM public.review r WHERE r.author_id = a.id LIMIT 5) r ON TRUE
This returns the dataset with only 5 reviews - but if I add a second lateral query
INNER JOIN LATERAL
(SELECT ap.* FROM public.author_publication ap WHERE ap.author_id = a.id LIMIT 5) ap ON TRUE
I now get 25 results for both reviews and publications with repeated/duplicated data.
So my question is: are you allowed to have multiple lateral joins in a single PG query, and if not, what is a good way to limit the number of results from a JOIN?
Thanks!
You must change your query to something like this:
SELECT
a.id,
a.name,
(
SELECT
json_agg ( r )
FROM (
SELECT
json_build_object (
'id', r.id,
'content', r.content
) AS r
FROM public.review r
WHERE r.author_id = a.id
ORDER BY r.id
LIMIT 5
) AS sub
) AS reviews,
(
SELECT
json_agg (p)
FROM (
SELECT
json_build_object(
'id', p.id,
'name', p.name
) AS p
FROM public.author_publication ap
INNER JOIN public.publication p ON p.id = ap.publication_id
WHERE ap.author_id = a.id
ORDER BY p.id
LIMIT 3
) AS sub
) AS publications
FROM
public.author a
WHERE
a.id = '1'
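To answer the direct question: multiple LATERAL joins are allowed, but two independent one-to-many lateral sets multiply each other's rows (with LIMIT 5 on both, 5 * 5 = 25 rows), which is the duplication you saw. If you prefer to stay with LATERAL, one way is to aggregate inside each lateral subquery so that each returns a single row; a sketch:
SELECT a.id, a.name, r.reviews, p.publications
FROM public.author a
LEFT JOIN LATERAL (
    SELECT json_agg(json_build_object('id', x.id, 'content', x.content)) AS reviews
    FROM (SELECT r.id, r.content
          FROM public.review r
          WHERE r.author_id = a.id
          ORDER BY r.id
          LIMIT 5) x
) r ON TRUE
LEFT JOIN LATERAL (
    SELECT json_agg(json_build_object('id', x.id, 'name', x.name)) AS publications
    FROM (SELECT p.id, p.name
          FROM public.author_publication ap
          INNER JOIN public.publication p ON p.id = ap.publication_id
          WHERE ap.author_id = a.id
          ORDER BY p.id
          LIMIT 3) x
) p ON TRUE
WHERE a.id = '1';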
I'm trying to map the results of a query to JSON using the row_to_json() function that was added in PostgreSQL 9.2.
I'm having trouble figuring out the best way to represent joined rows as nested objects (1:1 relations).
Here's what I've tried (setup code: tables, sample data, followed by query):
-- some test tables to start out with:
create table role_duties (
id serial primary key,
name varchar
);
create table user_roles (
id serial primary key,
name varchar,
description varchar,
duty_id int, foreign key (duty_id) references role_duties(id)
);
create table users (
id serial primary key,
name varchar,
email varchar,
user_role_id int, foreign key (user_role_id) references user_roles(id)
);
DO $$
DECLARE duty_id int;
DECLARE role_id int;
begin
insert into role_duties (name) values ('Script Execution') returning id into duty_id;
insert into user_roles (name, description, duty_id) values ('admin', 'Administrative duties in the system', duty_id) returning id into role_id;
insert into users (name, email, user_role_id) values ('Dan', 'someemail#gmail.com', role_id);
END$$;
The query itself:
select row_to_json(row)
from (
select u.*, ROW(ur.*::user_roles, ROW(d.*::role_duties)) as user_role
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id
) row;
I found that if I use ROW(), I can separate the resulting fields out into a child object, but it seems limited to a single level. I can't insert more AS XXX clauses, which I think I would need in this case.
I do get column names, because I cast to the appropriate record type, for example with ::user_roles in the case of that table's results.
Here's what that query returns:
{
"id":1,
"name":"Dan",
"email":"someemail#gmail.com",
"user_role_id":1,
"user_role":{
"f1":{
"id":1,
"name":"admin",
"description":"Administrative duties in the system",
"duty_id":1
},
"f2":{
"f1":{
"id":1,
"name":"Script Execution"
}
}
}
}
What I want to do is generate JSON for joins (again 1:1 is fine) in a way where I can add joins, and have them represented as child objects of the parents they join to, i.e. like the following:
{
"id":1,
"name":"Dan",
"email":"someemail#gmail.com",
"user_role_id":1,
"user_role":{
"id":1,
"name":"admin",
"description":"Administrative duties in the system",
"duty_id":1
"duty":{
"id":1,
"name":"Script Execution"
}
}
}
Update: In PostgreSQL 9.4 this improves a lot with the introduction of to_json, json_build_object, json_object and json_build_array, though it's verbose due to the need to name all the fields explicitly:
select
json_build_object(
'id', u.id,
'name', u.name,
'email', u.email,
'user_role_id', u.user_role_id,
'user_role', json_build_object(
'id', ur.id,
'name', ur.name,
'description', ur.description,
'duty_id', ur.duty_id,
'duty', json_build_object(
'id', d.id,
'name', d.name
)
)
)
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id;
For older versions, read on.
It isn't limited to a single level, it's just a bit painful. You can't alias composite rowtypes using AS, so you need to use an aliased subquery expression or CTE to achieve the effect:
select row_to_json(row)
from (
select u.*, urd AS user_role
from users u
inner join (
select ur.*, d
from user_roles ur
inner join role_duties d on d.id = ur.duty_id
) urd(id,name,description,duty_id,duty) on urd.id = u.user_role_id
) row;
produces, via http://jsonprettyprint.com/:
{
"id": 1,
"name": "Dan",
"email": "someemail#gmail.com",
"user_role_id": 1,
"user_role": {
"id": 1,
"name": "admin",
"description": "Administrative duties in the system",
"duty_id": 1,
"duty": {
"id": 1,
"name": "Script Execution"
}
}
}
You will want to use array_to_json(array_agg(...)) when you have a 1:many relationship, btw.
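For example, still using the same tables, a 1:many version that nests all users under their role might look like this (a sketch, not part of the original question):
select row_to_json(t)
from (
    select ur.id,
           ur.name,
           array_to_json(array_agg(u.name)) as user_names
    from user_roles ur
    left join users u on u.user_role_id = ur.id
    group by ur.id, ur.name
) t;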
Ideally, the nested-subquery query above could be written as:
select row_to_json(
ROW(u.*, ROW(ur.*, d AS duty) AS user_role)
)
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id;
... but PostgreSQL's ROW constructor doesn't accept AS column aliases. Sadly.
Thankfully, they optimize out the same. Compare the plans:
The nested subquery version; vs
The latter nested ROW constructor version with the aliases removed so it executes
Because CTEs are optimisation fences, rephrasing the nested subquery version to use chained CTEs (WITH expressions) may not perform as well, and won't result in the same plan. In this case you're kind of stuck with ugly nested subqueries until we get some improvements to row_to_json or a way to override the column names in a ROW constructor more directly.
Anyway, in general, the principle is that where you want to create a json object with columns a, b, c, and you wish you could just write the illegal syntax:
ROW(a, b, c) AS outername(name1, name2, name3)
you can instead use scalar subqueries returning row-typed values:
(SELECT x FROM (SELECT a AS name1, b AS name2, c AS name3) x) AS outername
Or:
(SELECT x FROM (SELECT a, b, c) AS x(name1, name2, name3)) AS outername
Additionally, keep in mind that you can compose json values without additional quoting, e.g. if you put the output of a json_agg within a row_to_json, the inner json_agg result won't get quoted as a string, it'll be incorporated directly as json.
e.g. in the arbitrary example:
SELECT row_to_json(
(SELECT x FROM (SELECT
1 AS k1,
2 AS k2,
(SELECT json_agg( (SELECT x FROM (SELECT 1 AS a, 2 AS b) x) )
FROM generate_series(1,2) ) AS k3
) x),
true
);
the output is:
{"k1":1,
"k2":2,
"k3":[{"a":1,"b":2},
{"a":1,"b":2}]}
Note that the json_agg product, [{"a":1,"b":2}, {"a":1,"b":2}], hasn't been escaped again, as text would be.
This means you can compose json operations to construct rows, you don't always have to create hugely complex PostgreSQL composite types then call row_to_json on the output.
I am adding this solution because the accepted answer does not cover N:N relationships, i.e. collections of collections of objects.
If you have N:N relationships, the WITH clause is your friend.
In my example, I would like to build a tree view of the following hierarchy.
A Requirement - Has - TestSuites
A Test Suite - Contains - TestCases.
The following query represents the joins.
SELECT r.id as reqId, r.description as reqDesc, array_agg(s.id),
s.id as suiteId , s."Name" as suiteName,
tc.id as tcId , tc."Title" as testCaseTitle
from "Requirement" r
inner join "Has" h on r.id = h.requirementid
inner join "TestSuite" s on s.id = h.testsuiteid
inner join "Contains" c on c.testsuiteid = s.id
inner join "TestCase" tc on tc.id = c.testcaseid
GROUP BY r.id, s.id;
Since you cannot do multiple aggregations at different levels in a single query, you need to use WITH.
with testcases as (
select c.testsuiteid,ts."Name" , tc.id, tc."Title" from "TestSuite" ts
inner join "Contains" c on c.testsuiteid = ts.id
inner join "TestCase" tc on tc.id = c.testcaseid
),
requirements as (
select r.id as reqId ,r.description as reqDesc , s.id as suiteId
from "Requirement" r
inner join "Has" h on r.id = h.requirementid
inner join "TestSuite" s on s.id = h.testsuiteid
)
, suitesJson as (
select testcases.testsuiteid,
json_agg(
json_build_object('tc_id', testcases.id,'tc_title', testcases."Title" )
) as suiteJson
from testcases
group by testcases.testsuiteid,testcases."Name"
),
allSuites as (
select has.requirementid,
json_agg(
json_build_object('ts_id', suitesJson.testsuiteid,'name',s."Name" , 'test_cases', suitesJson.suiteJson )
) as suites
from suitesJson inner join "TestSuite" s on s.id = suitesJson.testsuiteid
inner join "Has" has on has.testsuiteid = s.id
group by has.requirementid
),
allRequirements as (
select json_agg(
json_build_object('req_id', r.id ,'req_description',r.description , 'test_suites', allSuites.suites )
) as suites
from allSuites inner join "Requirement" r on r.id = allSuites.requirementid
)
select * from allRequirements
What it does is build the JSON objects in small collections of items and aggregate them in each WITH clause.
Result:
[
{
"req_id": 1,
"req_description": "<character varying>",
"test_suites": [
{
"ts_id": 1,
"name": "TestSuite",
"test_cases": [
{
"tc_id": 1,
"tc_title": "TestCase"
},
{
"tc_id": 2,
"tc_title": "TestCase2"
}
]
},
{
"ts_id": 2,
"name": "TestSuite",
"test_cases": [
{
"tc_id": 2,
"tc_title": "TestCase2"
}
]
}
]
},
{
"req_id": 2,
"req_description": "<character varying> 2 ",
"test_suites": [
{
"ts_id": 2,
"name": "TestSuite",
"test_cases": [
{
"tc_id": 2,
"tc_title": "TestCase2"
}
]
}
]
}
]
My suggestion for maintainability over the long term is to use a VIEW to build the coarse version of your query, and then use a function as below:
CREATE OR REPLACE FUNCTION fnc_query_prominence_users( )
RETURNS json AS $$
DECLARE
d_result json;
BEGIN
SELECT ARRAY_TO_JSON(
ARRAY_AGG(
ROW_TO_JSON(
CAST(ROW(users.*) AS prominence.users)
)
)
)
INTO d_result
FROM prominence.users;
RETURN d_result;
END; $$
LANGUAGE plpgsql
SECURITY INVOKER;
In this case, the object prominence.users is a view. Since I selected users.*, I will not have to update this function if I need to update the view to include more fields in a user record.
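Calling it is then just a matter of (assuming the prominence.users view is in place):
SELECT fnc_query_prominence_users();
which returns the whole result set as a single json value.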