MongoDB Where Not Exists - mongodb

I'm trying to do the following (from SQL) in MongoDB but can't seem to get the hang of it. I think the trick is using unwind, but I'm not sure.
select p.id, count(*)
from model.patients p
join model.patientAddresses pa on pa.PatientId = p.Id
where p.lastname like '%a%' and not exists
(select *
from model.patients p2
join model.PatientAddresses pa2 on pa2.PatientId = p2.Id
where pa2.relatedpatientid = p.id
)
group by p.id
having count(*) >1
I have:
db.Foo.aggregate( [ { $unwind : "$relatedNames" },
{ $match: { relatedNames: 'x', name: 'x' } } ] )
The corresponding structure for a "Foo" in MongoDB is:
{
name: '',
relatedNames: ['', '', '', '']
}
And I want to find Foos where Name = 'x' and there are no other Foos where RelatedNames contains 'x'
Assistance appreciated.

Related

Is there a way to merge these json aggregations?

I am trying to create a JSON object by getting some info from one table, then building integer arrays from some other tables' ids and adding two or more such arrays to the JSON object. I am using Postgres version 10.7:
select json_build_object(
'id', bi.id,
'street', ba.street,
'features1', features1.f1_json_arr,
'features2', features2.f2_json_arr
)
from business.info bi
inner join business.address ba on bi.id = ba.location_id
left outer join (
select f1.location_id,
json_agg(f1_id) as f1_json_arr
from business.features1 as f1
group by f1.location_id
) features1 on bi.id = features1.location_id
left outer join (
select f2.location_id,
json_agg(f2_id) as f2_json_arr
from business.feature2 as f2
group by f2.location_id
) features2 on bi.id = features2.location_id
where bi.id='1234';
which gives me a result as I want, like so:
{
"id": "1234",
"street", "some street",
"features1": [
2,
1
],
"features2": [
3,
2,
1
]
}
Is there a cleaner way to do this? I tried this:
select json_build_object(
'id', bi.id,
'street', ba.street_name,
'features1', f1_and_f2.f1_json_arr,
'features2', f1_and_f2.f2_json_arr
)
from business.info bi
inner join business.address ba
on bi.id = ba.location_id
left outer join (
select f1.location_id,
json_agg(f1_id) as f1_json_arr,
json_agg(f2_id) as f2_json_arr
from business.feature1 as f1
inner join business.feature2 as f2 on f1.location_id = f2.location_id
group by f1.location_id
) f1_and_f2 on bi.id = f1_and_f2.location_id
where bi.id = '1234';
but got a result like this:
{
"id": "1234",
"street_name": "a street",
"features1": [
2,
2,
2,
1,
1,
1
],
"features2": [
3,
2,
1,
3,
2,
1
]
}
In general, a query of the form
SELECT A.*, B.*, C_GROUPED.C_STUFF, D_GROUPED.D_STUFF
FROM A
INNER JOIN B ON B.A_ID = A.ID
LEFT JOIN ( SELECT A_ID, JSON_AGG(STUFF) AS C_STUFF FROM C GROUP BY A_ID ) AS C_GROUPED ON C_GROUPED.A_ID = A.ID
LEFT JOIN ( SELECT A_ID, JSON_AGG(OTHER_STUFF) AS D_STUFF FROM D GROUP BY A_ID ) AS D_GROUPED ON D_GROUPED.A_ID = A.ID
WHERE A.ID = 123;
should return the same result as
SELECT
A.*,
B.*,
( SELECT JSON_AGG(STUFF) FROM C WHERE C.A_ID = A.ID ) AS C_STUFF,
( SELECT JSON_AGG(OTHER_STUFF) FROM D WHERE D.A_ID = A.ID ) AS D_STUFF
FROM A
INNER JOIN B ON B.A_ID = A.ID
WHERE A.ID = 123
In fact, I would expect the second query to be faster.
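Applied to the schema from your question, an untested sketch of the correlated-subquery form (it assumes business.features1 and business.features2 both have location_id and f1_id/f2_id columns, as in your original query) would look roughly like:
select json_build_object(
    'id', bi.id,
    'street', ba.street,
    -- each correlated subquery aggregates independently, so the arrays cannot cross-multiply
    'features1', (select json_agg(f1.f1_id) from business.features1 f1 where f1.location_id = bi.id),
    'features2', (select json_agg(f2.f2_id) from business.features2 f2 where f2.location_id = bi.id)
)
from business.info bi
inner join business.address ba on bi.id = ba.location_id
where bi.id = '1234';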
PS: Since LEFT JOIN and LEFT OUTER JOIN are the same, I would suggest writing them the same way throughout your query.

Use PostgreSQL array_to_string in Phoenix

I have 2 n-n relationships between posts and tags tables. This is my query in Postgres:
SELECT t0.*, array_to_string(array_agg(t2.tag), ', ')
FROM "posts" AS t0
INNER JOIN "posts_tags" AS t1 ON (t0.id = t1.post_id)
INNER JOIN "tags" AS t2 ON (t1.tag_id = t2.id)
GROUP BY t0.id
I tried to use something similar in Ecto:
Repo.all(
from p in Post,
join: a in Post_Tag, on: p.id == a.post_id,
join: t in Tag, on: a.tag_id == t.id,
select: {p, array_to_string(array_agg(t.tag),', ')},
limit: ^limit,
offset: ^offset,
group_by: p.id
)
But I get this error:
(Ecto.Query.CompileError) `array_to_string(array_agg(t.tag()), ', ')` is not a valid query expression.
You will need to use a fragment/1 for the array_to_string(...) portion of your query. I have not tested it, but it should look something like:
fragment("array_to_string(array_agg(?), ', ')", t.tag)

Sequelize: joining table on a subquery

I am trying to join a table on a subquery, but I don't know how to express it using Sequelize ORM. This is the raw SQL I want to run:
SELECT *
FROM table_a a
LEFT OUTER JOIN (SELECT * FROM table_b WHERE col = VAL) b ON a.id = b.id;
I tried
A.findAll({
include: [
{
model: B,
where: { col: val },
}
]
}).then(...);
but that doesn't get me the query I want. Instead it changes the join to an INNER JOIN, and joins on col = VALUE instead. Is there a way to do a join on the result of a subquery? I am using Postgres if it matters.
Update: After making the following change, the resulting query now uses a LEFT OUTER JOIN as expected:
include: [
{
model: B,
where: { col: val },
required: false,
}
]
However, it is still joining on col = VALUE, the generated query looks like:
SELECT * FROM table_a a
LEFT OUTER JOIN table_b b ON a.id = b.id AND b.col = VALUE;
These 2 queries ARE functionally equivalent:
SELECT * FROM table_a a
LEFT OUTER JOIN (SELECT * FROM table_b WHERE col = VAL) b ON a.id = b.id;
SELECT * FROM table_a a
LEFT OUTER JOIN table_b b ON a.id = b.id AND b.col = VALUE;
There is NO ADVANTAGE to the first one whatsoever. So, if the first one provides the wanted results, so should the second one.

Postgres Lateral Join Multiple Tables to Limit Results

I have a question regarding lateral joins in Postgres.
My use case is I want to return a dataset that combines multiple tables but limits the number of publications and reviews returned. The simplified table schema is below
Table Author
ID
NAME
Table Review
ID
AUTHOR_ID
PUBLICATION_ID
CONTENT
Table Publication
ID
NAME
Table AuthorPublication
AUTHOR_ID
PUBLICATION_ID
So for my initial query I have this:
SELECT
a.id,
a.name,
json_agg (
json_build_object (
'id', r.id,
'content', r.content
)
) AS reviews,
json_agg (
json_build_object(
'id', p.id,
'name', p.name
)
) AS publications
FROM
public.author a
INNER JOIN
public.review r ON r.author_id = a.id
INNER JOIN
public.author_publication ap ON ap.author_id = a.id
INNER JOIN
public.publication p ON p.id = ap.publication_id
WHERE
a.id = '1'
GROUP BY
a.id
This returns the data I need, for example I get the author's name, id and a list of all of their reviews and publications they belong to. What I want to be able to do is limit the number of reviews and publications. For example return 5 reviews, and 3 publications.
I tried doing this with a lateral query but am running into an issue where if I do a single lateral query it works as intended.
so like:
INNER JOIN LATERAL
(SELECT r.* FROM public.review r WHERE r.author_id = a.id LIMIT 5) r ON TRUE
This returns the dataset with only 5 reviews - but if I add a second lateral query
INNER JOIN LATERAL
(SELECT ap.* FROM public.author_publication ap WHERE ap.author_id = a.id LIMIT 5) ap ON TRUE
I now get 25 results for both reviews and publications with repeated/duplicated data.
So my question is are you allowed to have multiple lateral joins in a single PG query and if not what is a good way to go about limiting the number of results from a JOIN?
Thanks!
You must change your query to something like this:
SELECT
a.id,
a.name,
(
SELECT
json_agg ( r )
FROM (
SELECT
json_build_object (
'id', r.id,
'content', r.content
) AS r
FROM public.review r
WHERE r.author_id = a.id
ORDER BY r.id
LIMIT 5
) AS a
) AS reviews,
(
SELECT
json_agg (p)
FROM (
SELECT
json_build_object(
'id', p.id,
'name', p.name
) AS p
FROM public.author_publication ap
INNER JOIN public.publication p ON p.id = ap.publication_id
WHERE ap.author_id = a.id
ORDER BY p.id
LIMIT 3
) AS a
) AS publications
FROM
public.author a
WHERE
a.id = '1'
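If you would rather keep the LATERAL form: multiple lateral joins are allowed, and the 25-row blow-up in your attempt comes from joining two multi-row lateral result sets, which cross-multiply. An untested sketch (same schema assumed) that moves the aggregation inside each lateral subquery, so each one returns exactly one row per author:
SELECT
    a.id,
    a.name,
    rev.reviews,
    pub.publications
FROM public.author a
-- aggregating inside the lateral subquery means it always yields exactly one row,
-- so the two joins cannot multiply each other's results
LEFT JOIN LATERAL (
    SELECT json_agg(json_build_object('id', r.id, 'content', r.content)) AS reviews
    FROM (
        SELECT r.id, r.content
        FROM public.review r
        WHERE r.author_id = a.id
        ORDER BY r.id
        LIMIT 5
    ) r
) rev ON TRUE
LEFT JOIN LATERAL (
    SELECT json_agg(json_build_object('id', p.id, 'name', p.name)) AS publications
    FROM (
        SELECT p.id, p.name
        FROM public.author_publication ap
        INNER JOIN public.publication p ON p.id = ap.publication_id
        WHERE ap.author_id = a.id
        ORDER BY p.id
        LIMIT 3
    ) p
) pub ON TRUE
WHERE a.id = '1';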

Using row_to_json() with nested joins

I'm trying to map the results of a query to JSON using the row_to_json() function that was added in PostgreSQL 9.2.
I'm having trouble figuring out the best way to represent joined rows as nested objects (1:1 relations)
Here's what I've tried (setup code: tables, sample data, followed by query):
-- some test tables to start out with:
create table role_duties (
id serial primary key,
name varchar
);
create table user_roles (
id serial primary key,
name varchar,
description varchar,
duty_id int, foreign key (duty_id) references role_duties(id)
);
create table users (
id serial primary key,
name varchar,
email varchar,
user_role_id int, foreign key (user_role_id) references user_roles(id)
);
DO $$
DECLARE duty_id int;
DECLARE role_id int;
begin
insert into role_duties (name) values ('Script Execution') returning id into duty_id;
insert into user_roles (name, description, duty_id) values ('admin', 'Administrative duties in the system', duty_id) returning id into role_id;
insert into users (name, email, user_role_id) values ('Dan', 'someemail#gmail.com', role_id);
END$$;
The query itself:
select row_to_json(row)
from (
select u.*, ROW(ur.*::user_roles, ROW(d.*::role_duties)) as user_role
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id
) row;
I found that if I use ROW(), I can separate the resulting fields out into a child object, but it seems limited to a single level. I can't insert more AS XXX clauses where I think I'd need them in this case.
I am afforded column names, because I cast to the appropriate record type, for example with ::user_roles, in the case of that table's results.
Here's what that query returns:
{
"id":1,
"name":"Dan",
"email":"someemail#gmail.com",
"user_role_id":1,
"user_role":{
"f1":{
"id":1,
"name":"admin",
"description":"Administrative duties in the system",
"duty_id":1
},
"f2":{
"f1":{
"id":1,
"name":"Script Execution"
}
}
}
}
What I want to do is generate JSON for joins (again 1:1 is fine) in a way where I can add joins, and have them represented as child objects of the parents they join to, i.e. like the following:
{
"id":1,
"name":"Dan",
"email":"someemail#gmail.com",
"user_role_id":1,
"user_role":{
"id":1,
"name":"admin",
"description":"Administrative duties in the system",
"duty_id":1
"duty":{
"id":1,
"name":"Script Execution"
}
}
}
Update: In PostgreSQL 9.4 this improves a lot with the introduction of to_json, json_build_object, json_object and json_build_array, though it's verbose due to the need to name all the fields explicitly:
select
json_build_object(
'id', u.id,
'name', u.name,
'email', u.email,
'user_role_id', u.user_role_id,
'user_role', json_build_object(
'id', ur.id,
'name', ur.name,
'description', ur.description,
'duty_id', ur.duty_id,
'duty', json_build_object(
'id', d.id,
'name', d.name
)
)
)
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id;
For older versions, read on.
It isn't limited to a single row, it's just a bit painful. You can't alias composite rowtypes using AS, so you need to use an aliased subquery expression or CTE to achieve the effect:
select row_to_json(row)
from (
select u.*, urd AS user_role
from users u
inner join (
select ur.*, d
from user_roles ur
inner join role_duties d on d.id = ur.duty_id
) urd(id,name,description,duty_id,duty) on urd.id = u.user_role_id
) row;
produces, via http://jsonprettyprint.com/:
{
"id": 1,
"name": "Dan",
"email": "someemail#gmail.com",
"user_role_id": 1,
"user_role": {
"id": 1,
"name": "admin",
"description": "Administrative duties in the system",
"duty_id": 1,
"duty": {
"id": 1,
"name": "Script Execution"
}
}
}
You will want to use array_to_json(array_agg(...)) when you have a 1:many relationship, btw.
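For example, to nest all users under each role (a 1:many direction in the same sample schema), an untested sketch might look like:
select row_to_json(row)
from (
    select ur.*,
           -- aggregate every user holding this role into a json array
           (select array_to_json(array_agg(row_to_json(u)))
              from users u
             where u.user_role_id = ur.id) as users
    from user_roles ur
) row;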
The nested subquery version above should ideally be able to be written as:
select row_to_json(
ROW(u.*, ROW(ur.*, d AS duty) AS user_role)
)
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id;
... but PostgreSQL's ROW constructor doesn't accept AS column aliases. Sadly.
Thankfully, they optimize out the same: the nested subquery version and the nested ROW constructor version (with the aliases removed so it executes) produce identical query plans.
Because CTEs are optimisation fences, rephrasing the nested subquery version to use chained CTEs (WITH expressions) may not perform as well, and won't result in the same plan. In this case you're kind of stuck with ugly nested subqueries until we get some improvements to row_to_json or a way to override the column names in a ROW constructor more directly.
Anyway, in general, the principle is that where you want to create a json object with columns a, b, c, and you wish you could just write the illegal syntax:
ROW(a, b, c) AS outername(name1, name2, name3)
you can instead use scalar subqueries returning row-typed values:
(SELECT x FROM (SELECT a AS name1, b AS name2, c AS name3) x) AS outername
Or:
(SELECT x FROM (SELECT a, b, c) AS x(name1, name2, name3)) AS outername
Additionally, keep in mind that you can compose json values without additional quoting, e.g. if you put the output of a json_agg within a row_to_json, the inner json_agg result won't get quoted as a string, it'll be incorporated directly as json.
e.g. in the arbitrary example:
SELECT row_to_json(
(SELECT x FROM (SELECT
1 AS k1,
2 AS k2,
(SELECT json_agg( (SELECT x FROM (SELECT 1 AS a, 2 AS b) x) )
FROM generate_series(1,2) ) AS k3
) x),
true
);
the output is:
{"k1":1,
"k2":2,
"k3":[{"a":1,"b":2},
{"a":1,"b":2}]}
Note that the json_agg product, [{"a":1,"b":2}, {"a":1,"b":2}], hasn't been escaped again, as text would be.
This means you can compose json operations to construct rows, you don't always have to create hugely complex PostgreSQL composite types then call row_to_json on the output.
I am adding this solution because the accepted answer does not cover N:N relationships, i.e. collections of collections of objects.
If you have N:N relationships, the WITH clause is your friend.
In my example, I would like to build a tree view of the following hierarchy.
A Requirement - Has - TestSuites
A Test Suite - Contains - TestCases.
The following query represents the joins.
SELECT r.id as reqId, r.description as reqDesc, array_agg(s.id),
s.id as suiteId , s."Name" as suiteName,
tc.id as tcId , tc."Title" as testCaseTitle
from "Requirement" r
inner join "Has" h on r.id = h.requirementid
inner join "TestSuite" s on s.id = h.testsuiteid
inner join "Contains" c on c.testsuiteid = s.id
inner join "TestCase" tc on tc.id = c.testcaseid
GROUP BY r.id, s.id;
Since you cannot do the multiple levels of aggregation in a single query, you need to use "WITH".
with testcases as (
select c.testsuiteid,ts."Name" , tc.id, tc."Title" from "TestSuite" ts
inner join "Contains" c on c.testsuiteid = ts.id
inner join "TestCase" tc on tc.id = c.testcaseid
),
requirements as (
select r.id as reqId ,r.description as reqDesc , s.id as suiteId
from "Requirement" r
inner join "Has" h on r.id = h.requirementid
inner join "TestSuite" s on s.id = h.testsuiteid
)
, suitesJson as (
select testcases.testsuiteid,
json_agg(
json_build_object('tc_id', testcases.id,'tc_title', testcases."Title" )
) as suiteJson
from testcases
group by testcases.testsuiteid,testcases."Name"
),
allSuites as (
select has.requirementid,
json_agg(
json_build_object('ts_id', suitesJson.testsuiteid,'name',s."Name" , 'test_cases', suitesJson.suiteJson )
) as suites
from suitesJson inner join "TestSuite" s on s.id = suitesJson.testsuiteid
inner join "Has" has on has.testsuiteid = s.id
group by has.requirementid
),
allRequirements as (
select json_agg(
json_build_object('req_id', r.id ,'req_description',r.description , 'test_suites', allSuites.suites )
) as suites
from allSuites inner join "Requirement" r on r.id = allSuites.requirementid
)
select * from allRequirements
What it does is build the JSON object from small collections of items, aggregating them in each WITH clause.
Result:
[
{
"req_id": 1,
"req_description": "<character varying>",
"test_suites": [
{
"ts_id": 1,
"name": "TestSuite",
"test_cases": [
{
"tc_id": 1,
"tc_title": "TestCase"
},
{
"tc_id": 2,
"tc_title": "TestCase2"
}
]
},
{
"ts_id": 2,
"name": "TestSuite",
"test_cases": [
{
"tc_id": 2,
"tc_title": "TestCase2"
}
]
}
]
},
{
"req_id": 2,
"req_description": "<character varying> 2 ",
"test_suites": [
{
"ts_id": 2,
"name": "TestSuite",
"test_cases": [
{
"tc_id": 2,
"tc_title": "TestCase2"
}
]
}
]
}
]
My suggestion for maintainability over the long term is to use a VIEW to build the coarse version of your query, and then use a function as below:
CREATE OR REPLACE FUNCTION fnc_query_prominence_users( )
RETURNS json AS $$
DECLARE
d_result json;
BEGIN
SELECT ARRAY_TO_JSON(
ARRAY_AGG(
ROW_TO_JSON(
CAST(ROW(users.*) AS prominence.users)
)
)
)
INTO d_result
FROM prominence.users;
RETURN d_result;
END; $$
LANGUAGE plpgsql
SECURITY INVOKER;
In this case, the object prominence.users is a view. Since I selected users.*, I will not have to update this function if I need to update the view to include more fields in a user record.
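Calling it is then as simple as:
SELECT fnc_query_prominence_users();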