Join select query from 2 tables - postgresql

Given two tables (in the same DB):
I would like to query a list of results from both tables where environment is 'qa' or 'staging', ordered by time.
I am using a Postgres DB and an Express server.
Expected results:
build | qa | 2020-09-04 18:01:04.425261 | true
test | qa | 2020-09-04 22:46:50.862843 | #signUpHappyPath | 35530 | true
test | qa | 2020-09-04 22:50:30.256647 | #passwordStrength | 6877 | true
build | qa | 2020-09-05 01:15:44.063051 | false
test | qa | 2020-09-05 01:20:54.900635 | #shortseq | 74450 | false

If I understand your question correctly, the following SQL should work to join the 2 tables on the "environment" field, limit the results to where the environment field is either "qa" or "staging", and then sort by time in ascending order:
SELECT Tab1.*, Tab2.*
FROM YourTableOne AS Tab1
JOIN YourTableTwo AS Tab2
  ON Tab1.environment = Tab2.environment
WHERE Tab1.environment IN ('qa', 'staging')
ORDER BY Tab1.time;
Alternatively, join the 2 tables on the "time" field, limit the results to where the environment field is either "qa" or "staging", and then sort by time in ascending order:
SELECT Tab1.*, Tab2.*
FROM YourTableOne AS Tab1
JOIN YourTableTwo AS Tab2
  ON Tab1.time = Tab2.time
WHERE Tab1.environment IN ('qa', 'staging')
ORDER BY Tab1.time;
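Note that the expected output interleaves rows from the two tables by time rather than pairing them up, so a UNION ALL may be closer to what is wanted than a join. A hedged sketch, with assumed table names (builds, tests) and column names (environment, time, name, duration, status) that are not from the original schema:
-- Sketch only: pad the shorter row shape with NULLs so both branches line up.
SELECT 'build' AS type, environment, time, NULL AS name, NULL AS duration, status
FROM builds
WHERE environment IN ('qa', 'staging')
UNION ALL
SELECT 'test', environment, time, name, duration, status
FROM tests
WHERE environment IN ('qa', 'staging')
ORDER BY time;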

Postgres Complex Query to get column values having corresponding column joined with another table having a specific value

I have two Postgres tables:
1- relationships:
| user_id | target_user_id |
2- affiliations:
| user_id | user_type_id | current |
user_id in affiliations can be either of the two column values in relationships, and current in affiliations is a boolean value.
In relationships, user_id is not unique and can have multiple corresponding target_user_id values.
I want to get from affiliations a list of user_id values that also appear in the user_id column of relationships, and whose corresponding target_user_id values all have their current value in affiliations set to false.
Example:
relationships:
user_id | target_user_id
1 | 11
1 | 12
1 | 13
2 | 14
2 | 15
2 | 16
affiliations:
user_id | current
1 | true
11 | false
12 | false
13 | false
2 | false
14 | true
15 | false
16 | false
So I want the query to return 1 only, since user 2 does not have all of its corresponding target_user_id rows with current set to false.
Thanks in advance!
OK, I finally constructed the right query as follows:
UPDATE app.affiliations
SET current = true
WHERE current = false
AND user_id IN (
    SELECT r.user_id
    FROM app.affiliations AS a
    JOIN app.relationships AS r
      ON r.target_user_id = a.user_id
    GROUP BY r.user_id
    HAVING false = ALL(array_agg(a.current))
);
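The inner SELECT is the part that answers the original question on its own. As a sketch, the HAVING condition can also be written with bool_or() (same tables as above, assuming current is never NULL):
-- Sketch of the SELECT alone: NOT bool_or(current) means no target row has current = true.
SELECT r.user_id
FROM app.relationships AS r
JOIN app.affiliations AS a
  ON a.user_id = r.target_user_id
GROUP BY r.user_id
HAVING NOT bool_or(a.current);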

MySQL - SELECT column 'A' even with NULLs

Table A contains student names; tables B and C contain classes and student attendance.
I would like to display all students along with their attendance. The problem is that I cannot display the students who have no recorded attendance.
Where attendance was recorded it is OK, but if there is no recorded attendance for a given class, on a given day, and in a given subject, nothing is displayed.
My query:
SELECT student.id_student, CONCAT(student.name, ' ', student.surname) AS 'name_surname',
       pres_student_present, pres_student_absent, pres_student_justified, pres_student_late,
       pres_student_rel, pres_student_course, pres_student_delegation, pres_student_note
FROM student
LEFT JOIN class ON student.no_classes = class.no_classes
LEFT JOIN pres_student ON student.id_student = pres_student.id_student
WHERE (class.no_classes = '$class' OR NULL AND pres_student_data = '$data' AND pres_student_id_subject = $id_subject OR NULL)
GROUP BY student.surname
ORDER BY student.surname ASC
I want to always display name_surname, and every other column should contain NULL or 1,
like:
Name | present | absent | just | late | rel | delegation | note |
Donald Trump | 1 | | | | | | |
Bush | | | | | | | |
Someone | 1 | | | | | | |
etc...
You should move the restrictions on the class and pres_student tables from the WHERE clause to the ON clause of the LEFT JOIN.
When you put a restriction on an outer-joined table in the WHERE clause, the SQL engine effectively treats the query as an INNER JOIN:
SELECT student.id_student
, CONCAT(student.name, ' ', student.surname) AS 'name_surname'
, pres_student_present
, pres_student_absent
, pres_student_justified
, pres_student_late
, pres_student_rel
, pres_student_course
, pres_student_delegation
, pres_student_note
FROM student
LEFT JOIN class
ON student.no_classes = class.no_classes
AND class.no_classes = '$class'
LEFT JOIN pres_student
ON student.id_student = pres_student.id_student
AND pres_student_data = '$data'
AND pres_student_id_subject = $id_subject
GROUP BY student.surname
ORDER BY student.surname ASC

SELECT DISTINCT on an ordered subquery's table

I'm working on a problem involving these two tables.
books
isbn | title | author
------------+-----------------------------------------+------------------
1840918626 | Hogwarts: A History | Bathilda Bagshot
3458400871 | Fantastic Beasts and Where to Find Them | Newt Scamander
9136884926 | Advanced Potion-Making | Libatius Borage
transactions
id | patron_id | isbn | checked_out_date | checked_in_date
----+-----------+------------+------------------+-----------------
1 | 1 | 1840918626 | 2012-05-05 | 2012-05-06
2 | 4 | 9136884926 | 2012-05-05 | 2012-05-06
3 | 2 | 3458400871 | 2012-05-05 | 2012-05-06
4 | 3 | 3458400871 | 2018-04-29 | 2018-05-02
5 | 2 | 9136884926 | 2018-05-03 | NULL
6 | 1 | 3458400871 | 2018-05-03 | 2018-05-05
7 | 5 | 3458400871 | 2018-05-05 | NULL
the query "Make a list of all book titles and denote whether or not a copy of that book is checked out." so pretty much just the first table with a checked out column.
im trying to SELECT DISTINCT on a sub query with the checkout books first, but that doesn't work. I've researched and others say to accomplish this use a GROUP BY clause instead of DISTINCT but the examples they provide are one column queries and when more columns are added it doesn't work.
this is my closest attempt
SELECT DISTINCT ON (title)
       title, checked_out
FROM (
    SELECT b.title, t.checked_in_date IS NULL AS checked_out
    FROM transactions t
    NATURAL JOIN books b
    ORDER BY checked_out DESC
) t;
Alternatively, you can join only the transactions where the books are not checked in:
SELECT b.title, t.isbn IS NOT NULL AS checked_out
, t.checked_out_date
FROM books b
LEFT JOIN transactions t ON t.isbn = b.isbn AND t.checked_in_date IS NULL
ORDER BY checked_out DESC
I adjusted your attempt a little bit. Basically, I changed the way your data is joined:
SELECT DISTINCT ON (title)
       title, checked_out
FROM (
    SELECT b.title, t.checked_in_date IS NULL AS checked_out
    FROM books b
    LEFT OUTER JOIN transactions t USING (isbn)
    ORDER BY checked_out DESC
) t;
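Since the question also mentions GROUP BY suggestions, here is a sketch of that approach over the same books and transactions tables; bool_or() folds the per-transaction flags into one row per title:
-- Sketch only: the t.isbn IS NOT NULL guard keeps books with no transactions
-- from being reported as checked out.
SELECT b.title
     , bool_or(t.isbn IS NOT NULL AND t.checked_in_date IS NULL) AS checked_out
FROM books b
LEFT JOIN transactions t USING (isbn)
GROUP BY b.title;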

How to get back aggregate values across 2 dimensions using Python Cubes?

Situation
Using Python 3, Django 1.9, Cubes 1.1, and Postgres 9.5.
These are my data tables, in text format:
Store table
------------------------------
| id | code | address |
|-----|------|---------------|
| 1 | S1 | Kings Row |
| 2 | S2 | Queens Street |
| 3 | S3 | Jacks Place |
| 4 | S4 | Diamonds Alley|
| 5 | S5 | Hearts Road |
------------------------------
Product table
------------------------------
| id | code | name |
|-----|------|---------------|
| 1 | P1 | Saucer 12 |
| 2 | P2 | Plate 15 |
| 3 | P3 | Saucer 13 |
| 4 | P4 | Saucer 14 |
| 5 | P5 | Plate 16 |
| and many more .... |
|1000 |P1000 | Bowl 25 |
|----------------------------|
Sales table
----------------------------------------
| id | product_id | store_id | amount |
|-----|------------|----------|--------|
| 1 | 1 | 1 |7.05 |
| 2 | 1 | 2 |9.00 |
| 3 | 2 | 3 |1.00 |
| 4 | 2 | 3 |1.00 |
| 5 | 2 | 5 |1.00 |
| and many more .... |
| 1000| 20 | 4 |1.00 |
|--------------------------------------|
The relationships are:
Sales belongs to Store
Sales belongs to Product
Store has many Sales
Product has many Sales
What I want to achieve
I want to use Cubes to display the data, with pagination, in the following manner:
Given the stores S1-S3:
-------------------------
| product | S1 | S2 | S3 |
|---------|----|----|----|
|Saucer 12|7.05|9 | 0 |
|Plate 15 |0 |0 | 2 |
| and many more .... |
|------------------------|
Note the following:
Even though there were no records in sales for Saucer 12 under Store S3, I displayed 0 instead of null or none.
I want to be able to sort by store, say in descending order for S3.
The cells indicate the SUM total of that particular product spent in that particular store.
I also want to have pagination.
What I tried
This is the configuration I used:
"cubes": [
{
"name": "sales",
"dimensions": ["product", "store"],
"joins": [
{"master":"product_id", "detail":"product.id"},
{"master":"store_id", "detail":"store.id"}
]
}
],
"dimensions": [
{ "name": "product", "attributes": ["code", "name"] },
{ "name": "store", "attributes": ["code", "address"] }
]
This is the code I used:
result = browser.aggregate(drilldown=['Store', 'Product'],
                           order=[("Product.name", "asc"),
                                  ("Store.name", "desc"),
                                  ("total_products_sale", "desc")])
I didn't get what I wanted.
Instead, I got this:
----------------------------------------------
| product_id | store_id | total_products_sale |
|------------|----------|---------------------|
| 1 | 1 | 7.05 |
| 1 | 2 | 9 |
| 2 | 3 | 2.00 |
| and many more .... |
|---------------------------------------------|
which is the whole table with no pagination, and products not sold in a given store don't show up as zero.
My question
How do I get what I want?
Do I need to create another data table that aggregates everything by store and product before I use cubes to run the query?
Update
I have read more and realised that what I want is called dicing, as I need to go across 2 dimensions. See: https://en.wikipedia.org/wiki/OLAP_cube#Operations
Cross-posted at Cubes GitHub issues to get more attention.
This is a pure SQL solution using crosstab() from the additional tablefunc module to pivot the aggregated data. It typically performs better than any client-side alternative. If you are not familiar with crosstab(), read this first:
PostgreSQL Crosstab Query
And this about the "extra" column in the crosstab() output:
Pivot on Multiple Columns using Tablefunc
SELECT product_id, product
, COALESCE(s1, 0) AS s1 -- 1. ... displayed 0 instead of null
, COALESCE(s2, 0) AS s2
, COALESCE(s3, 0) AS s3
, COALESCE(s4, 0) AS s4
, COALESCE(s5, 0) AS s5
FROM crosstab(
'SELECT s.product_id, p.name, s.store_id, s.sum_amount
FROM product p
JOIN (
SELECT product_id, store_id
, sum(amount) AS sum_amount -- 3. SUM total of product spent in store
FROM sales
GROUP BY product_id, store_id
) s ON p.id = s.product_id
ORDER BY s.product_id, s.store_id;'
, 'VALUES (1),(2),(3),(4),(5)' -- desired store_id's
) AS ct (product_id int, product text -- "extra" column
, s1 numeric, s2 numeric, s3 numeric, s4 numeric, s5 numeric)
ORDER BY s3 DESC; -- 2. ... descending order for S3
Produces your desired result exactly (plus product_id).
To include products that have never been sold replace [INNER] JOIN with LEFT [OUTER] JOIN.
SQL Fiddle with base query.
The tablefunc module is not installed on sqlfiddle.
Major points
Read the basic explanation in the reference answer for crosstab().
I am including product_id because product.name is hardly unique. This might otherwise lead to sneaky errors conflating two different products.
You don't need the store table in the query if referential integrity is guaranteed.
ORDER BY s3 DESC works, because s3 references the output column where NULL values have been replaced with COALESCE. Else we would need DESC NULLS LAST to sort NULL values last:
PostgreSQL sort by datetime asc, null first?
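As a tiny self-contained illustration of that point: under DESC ordering, NULLs sort first by default, and NULLS LAST overrides that.
-- Returns 3, 1, NULL instead of NULL, 3, 1.
SELECT x
FROM (VALUES (1), (NULL::int), (3)) AS t(x)
ORDER BY x DESC NULLS LAST;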
For building crosstab() queries dynamically consider:
Dynamic alternative to pivot with CASE and GROUP BY
I also want to have pagination.
That last item is fuzzy. Simple pagination can be had with LIMIT and OFFSET:
Displaying data in grid view page by page
I would consider a MATERIALIZED VIEW to materialize results before pagination. If you have a stable page size I would add page numbers to the MV for easy and fast results.
To optimize performance for big result sets, consider:
SQL syntax term for 'WHERE (col1, col2) < (val1, val2)'
Optimize query with OFFSET on large table
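As a sketch of the keyset-pagination idea behind those links (the view name product_sales_pivot and the literal key values are assumptions for illustration, assuming the crosstab result has been materialized with columns product_id, product, s1 .. s5):
-- Keyset pagination: remember the sort-key values of the last row on the
-- previous page and continue from there, instead of using OFFSET.
SELECT product_id, product, s1, s2, s3, s4, s5
FROM product_sales_pivot
WHERE (s3, product_id) < (7.05, 1)   -- values taken from the previous page's last row
ORDER BY s3 DESC, product_id DESC
LIMIT 20;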

joining with a DISTINCT ON on an ordered subquery in sqlalchemy

Here is (an extremely simplified version of) my problem.
I'm using Postgresql as the backend and trying to build a sqlalchemy query
from another query.
Table setup
Here are the tables with some random data for the example.
You can assume that each table was declared in sqlalchemy declaratively, with
the name of the mappers being respectively Item and ItemVersion.
At the end of the question you can find a link where I put the code for
everything in this question, including the table definitions.
Some items.
item
+----+
| id |
+----+
| 1 |
| 2 |
| 3 |
+----+
A table containing versions of each item. Each has at least one.
item_version
+----+---------+---------+-----------+
| id | item_id | version | text |
+----+---------+---------+-----------+
| 1 | 1 | 0 | item_1_v0 |
| 2 | 1 | 1 | item_1_v1 |
| 3 | 2 | 0 | item_2_v0 |
| 4 | 3 | 0 | item_3_v0 |
+----+---------+---------+-----------+
The query
Now, for a given sqlalchemy query over Item, I want a function that returns
another query, but this time over (Item, ItemVersion), where the Items are
the same as in the original query (and in the same order!), and where the
ItemVersions are the corresponding latest versions for each Item.
Here is an example in SQL, which is pretty straightforward:
First, a random query over the item table:
SELECT item.id as item_id
FROM item
WHERE item.id != 2
ORDER BY item.id DESC
which corresponds to
+---------+
| item_id |
+---------+
| 3 |
| 1 |
+---------+
Then from that query, if I want to join the right versions, I can do
SELECT sq2.item_id AS item_id,
sq2.item_version_id AS item_version_id,
sq2.item_version_text AS item_version_text
FROM (
SELECT DISTINCT ON (sq.item_id)
sq.item_id AS item_id,
iv.id AS item_version_id,
iv.text AS item_version_text
FROM (
SELECT item.id AS item_id
FROM item
WHERE id != 2
ORDER BY id DESC) AS sq
JOIN item_version AS iv
ON iv.item_id = sq.item_id
ORDER BY sq.item_id, iv.version DESC) AS sq2
ORDER BY sq2.item_id DESC
Note that it has to be wrapped in a subquery a second time because the
DISTINCT ON discards the ordering.
Now the challenge is to write a function that does that in sqlalchemy.
Here is what I have so far.
First the initial sqlalchemy query over the items:
session.query(Item).filter(Item.id != 2).order_by(desc(Item.id))
Then I'm able to build my second query, but without the original ordering. In
other words, I don't know how to do the second subquery wrapping that I did in
SQL to get back the ordering discarded by the DISTINCT ON.
def join_version(session, query):
    sq = aliased(Item, query.subquery('sq'))
    sq2 = session.query(sq, ItemVersion) \
        .distinct(sq.id) \
        .join(ItemVersion) \
        .order_by(sq.id, desc(ItemVersion.version))
    return sq2
I think this SO question could be part of the answer but I'm not quite
sure how.
The code to run everything in this question (database creation, population and
a failing unit test with what I have so far) can be found here. Normally
if you can fix the join_version function, it should make the test pass!
OK, so I found a way. It's a bit of a hack, but it still only queries the database twice, so I guess I will survive! Basically, I'm querying the database for the Items first, then doing another query for the ItemVersions, filtering on item_id, and then reordering with a trick I found here (this is also relevant).
Here is the code:
def join_version(session, query):
    items = query.all()
    item_ids = [i.id for i in items]
    items_v_sq = session.query(ItemVersion) \
        .distinct(ItemVersion.item_id) \
        .filter(ItemVersion.item_id.in_(item_ids)) \
        .order_by(ItemVersion.item_id, desc(ItemVersion.version)) \
        .subquery('sq')
    sq = aliased(ItemVersion, items_v_sq)
    items_v = session.query(sq) \
        .order_by('idx(array{}, sq.item_id)'.format(item_ids))
    return zip(items, items_v)
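The .order_by() string relies on ordering rows by their position in an array of ids. In plain SQL the same trick looks roughly like the sketch below, using array_position() (core since PostgreSQL 9.5) instead of intarray's idx(); the hard-coded list (3, 1) stands in for the item ids returned by the first query.
-- Latest version per requested item, re-ordered to match the original item order.
SELECT *
FROM (
    SELECT DISTINCT ON (iv.item_id) iv.*
    FROM item_version AS iv
    WHERE iv.item_id IN (3, 1)
    ORDER BY iv.item_id, iv.version DESC
) AS latest
ORDER BY array_position(ARRAY[3, 1], latest.item_id);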