Postgres Hierarchy output - postgresql

I'm struggling to get the correct output using a hierarchical query.
I have one table that loads, per day, every product and its price. Over time a product can be cancelled and activated again.
I believe in Oracle we could use CONNECT BY for this.
WITH RECURSIVE cte AS (
    SELECT min(event_date) AS event_date, item_code,
           sum(price::numeric)/1024/1024 AS price, 1 AS level
    FROM rdpidevdat.raid_r_cbs_offer_accttype_map
    WHERE product_type = 'cars' AND item_code IN ('Renault')
    GROUP BY item_code
    UNION ALL
    SELECT e.event_date, e.item_code, e.price, cte.level + 1
    FROM (SELECT event_date, item_code, sum(price::numeric)/1024/1024 AS price
          FROM rdpidevdat.raid_r_cbs_offer_accttype_map
          WHERE product_type = 'cars' AND item_code IN ('9859')
          GROUP BY event_date, item_code) e
    JOIN cte ON e.event_date = cte.event_date AND e.item_code = cte.item_code
)
SELECT *
FROM cte
WHERE item_code IN ('Renault');
How do I produce an output that shows the date range of each product price over time?
If we have the data:
EVENT_DATE | ITEM_COD | PRICE
20210910   | Renault  | 2500
20210915   | Renault  | 2500
20210920   | Renault  | 2600
20211020   | Renault  | 2900
20220101   | Renault  | 2500
The expected output should be:
-------------------------------------------------
FROM_EVENT_DATE | TO_EVENT_DATE | ITEM_COD | PRICE
20210910        | 20210915      | Renault  | 2500
20210915        | 20210920      | Renault  | 2600
20210920        | 20211020      | Renault  | 2900
20211020        | 20220101      | Renault  | 2500
Thanks in Advance and Regards!

I already found the solution, using the lag() and last_value() window functions; there is no need for the hierarchical query.
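For anyone landing here, a minimal sketch of that lag()-based approach, assuming the single per-day table from the question (the price aggregation from the attempt above is omitted for brevity):

-- Pair each row with the previous event_date per item; the current row's
-- price is the price in effect over that range.
SELECT *
FROM (
    SELECT lag(event_date) OVER (PARTITION BY item_code
                                 ORDER BY event_date) AS from_event_date,
           event_date AS to_event_date,
           item_code,
           price
    FROM rdpidevdat.raid_r_cbs_offer_accttype_map
    WHERE product_type = 'cars' AND item_code = 'Renault'
) t
WHERE from_event_date IS NOT NULL;  -- the first row has no predecessor

Applied to the sample data, this yields exactly the four expected rows.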

Related

How can I `SUM()` in PostgreSQL based on a certain condition? (For summing debits and credits in an accounting journal table)

I have a database full of accounting journals. There is a table for the accounting journal itself (the journal's metadata) and a table for the accounting journal lines (one per account, with its debit or credit).
The data looks like this:
+----+--------------+-------+--------+
| ID | JOURNAL_NAME | DEBIT | CREDIT |
+----+--------------+-------+--------+
|  1 | INV/0001     |   100 |      0 |
|  2 | INV/0001     |     0 |    100 |
|  3 | INV/0002     |   200 |      0 |
|  4 | INV/0002     |     0 |    200 |
+----+--------------+-------+--------+
I want all journals with the same name to be summed into one row, with their debits and credits. So from the above table I want a query that produces something like this:
+--------------+-------+--------+
| JOURNAL_NAME | DEBIT | CREDIT |
+--------------+-------+--------+
| INV/0001     |   100 |    100 |
| INV/0002     |   200 |    200 |
+--------------+-------+--------+
I have tried with:
SELECT DISTINCT ON (accounting_journal.id)
       accounting_journal.name,
       accounting_journal_line.debit,
       accounting_journal_line.credit
FROM accounting_journal_line
JOIN accounting_journal ON accounting_journal.id = accounting_journal_line.move_id
ORDER BY accounting_journal.id ASC
LIMIT 3;
With the above query, I get all the journals and journal lines. I just need the query to sum the debits and credits for every same accounting_journal.name.
I have tried with SUM(), but it always gets stuck on the GROUP BY clause:
SELECT DISTINCT ON (accounting_journal.id)
       accounting_journal.name,
       accounting_journal.ref,
       accounting_journal_line.name,
       SUM(accounting_journal_line.debit),
       SUM(accounting_journal_line.credit)
FROM accounting_journal_line
JOIN accounting_journal ON accounting_journal.id = accounting_journal_line.move_id
ORDER BY accounting_journal.id ASC
LIMIT 3;
The error:
Error in query (7): ERROR: column "accounting_journal.name" must appear in the GROUP BY clause or be used in an aggregate function
LINE 2: accounting_journal.name,
I hope I can get assistance or a pointer on where to look. Thanks!
When you use an aggregate function alongside normal columns, you have to mention all the non-aggregated columns in the GROUP BY clause. So try this:
SELECT DISTINCT ON (accounting_journal.id)
       accounting_journal.name,
       accounting_journal.ref,
       accounting_journal_line.name,
       SUM(accounting_journal_line.debit),
       SUM(accounting_journal_line.credit)
FROM accounting_journal_line
JOIN accounting_journal ON accounting_journal.id = accounting_journal_line.move_id
GROUP BY accounting_journal.id, 1, 2, 3  -- id must also be grouped, since DISTINCT ON and ORDER BY reference it
ORDER BY accounting_journal.id ASC
LIMIT 3;
Your query has three non-aggregated columns, so you can reference them by their column numbers in the GROUP BY clause.
You can use sum() as a window function, which does not require GROUP BY. So:
select aj.id journal_id,
       aj.name journal_name,
       aj.ref journal_ref,
       ajl.name line_name,
       sum(ajl.debit) over (partition by aj.id) total_debit,
       sum(ajl.credit) over (partition by aj.id) total_credit
from accounting_journal_line ajl
join accounting_journal aj
  on aj.id = ajl.move_id
order by aj.id;
See fiddle for a working example.
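If all you need is the exact output table from the question (one row per journal name), a plain GROUP BY with no DISTINCT ON would also do — a minimal sketch, assuming the same two tables and join column as above:

SELECT aj.name AS journal_name,
       SUM(ajl.debit)  AS debit,
       SUM(ajl.credit) AS credit
FROM accounting_journal aj
JOIN accounting_journal_line ajl ON ajl.move_id = aj.id
GROUP BY aj.name
ORDER BY aj.name;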

SUM values from two tables with GROUP BY and WHERE

I have two tables below, named sent_table and received_table. I am attempting to mash them together in one query to achieve output_table. All my attempts so far result in a huge number of duplicates and totally bogus sum values.
I am assuming I will need GROUP BY and WHERE to achieve this goal. I want to be able to filter based on the user's name.
sent_table
+----+------+-------+----------+
| id | name | value | order_id |
+----+------+-------+----------+
| 1 | dave | 100 | 1 |
| 2 | dave | 200 | 1 |
| 3 | dave | 300 | 2 |
+----+------+-------+----------+
received_table
+----+------+-------+----------+
| id | name | value | order_id |
+----+------+-------+----------+
| 1 | dave | 400 | 1 |
| 2 | dave | 500 | 2 |
| 3 | dave | 600 | 2 |
+----+------+-------+----------+
output table
+------+----------+----------+
| sent | received | order_id |
+------+----------+----------+
| 300 | 400 | 1 |
| 300 | 1100 | 2 |
+------+----------+----------+
I tried the following with no joy. This does not impose any restriction on how I would like the problem solved; it is just how I attempted it.
SELECT *
FROM
( select SUM(value) as sent, order_id FROM sent_table WHERE name='dave' GROUP BY order_id) A
CROSS JOIN
( select SUM(value) as received, order_id FROM received_table WHERE name='dave' GROUP BY order_id) B
Any help would be greatly appreciated.
Do the sums on each table, grouping by order_id, then join the results. To get the rows even if one side is missing, do a FULL OUTER JOIN:
SELECT COALESCE(s.order_id, r.order_id) AS order_id, s.sent, r.received
FROM (
    SELECT order_id, SUM(value) AS sent
    FROM sent_table
    GROUP BY order_id
) s
FULL OUTER JOIN (
    SELECT order_id, SUM(value) AS received
    FROM received_table
    GROUP BY order_id
) r
USING (order_id)
ORDER BY 1;
Result:
| order_id | sent | received |
| -------- | ---- | -------- |
| 1 | 300 | 400 |
| 2 | | 1100 |
Note the COALESCE on the order_id: if it's missing from sent_table it will be taken from received_table, so that value will never be NULL.
If you want 0 in place of NULL (e.g. when there is no record for that order_id in either sent_table or received_table), use COALESCE(s.sent, 0) AS sent and COALESCE(r.received, 0) AS received.
https://www.db-fiddle.com/f/nq3xYrcys16eUrBRHT6xLL/2
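Put together, the zero-filled variant with the name filter the question asks for would look roughly like this (a sketch, assuming the sent_table/received_table schema shown above):

SELECT COALESCE(s.order_id, r.order_id) AS order_id,
       COALESCE(s.sent, 0)     AS sent,
       COALESCE(r.received, 0) AS received
FROM (
    SELECT order_id, SUM(value) AS sent
    FROM sent_table
    WHERE name = 'dave'
    GROUP BY order_id
) s
FULL OUTER JOIN (
    SELECT order_id, SUM(value) AS received
    FROM received_table
    WHERE name = 'dave'
    GROUP BY order_id
) r USING (order_id)
ORDER BY 1;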

Accomplishing what I need without a CROSS JOIN

I have a query that pulls from a table. With this table, I would like to build a query that allows me to make projections into the future.
SELECT
b.date,
a.id,
SUM(CASE WHEN a.date = b.date THEN a.sales ELSE 0 END) sales,
SUM(CASE WHEN a.date = b.date THEN a.revenue ELSE 0 END) revenue
FROM
table_a a
CROSS JOIN table_b b
WHERE a.date BETWEEN '2018-10-31' AND '2018-11-04'
GROUP BY 1,2
table_b is a table with literally only one column that contains dates going deep into the future. This returns results like this:
+----------+--------+-------+---------+
| date | id | sales | revenue |
+----------+--------+-------+---------+
| 11/4/18 | 113972 | 0 | 0 |
| 11/4/18 | 111218 | 0 | 0 |
| 11/3/18 | 111218 | 0 | 0 |
| 11/3/18 | 113972 | 0 | 0 |
| 11/2/18 | 111218 | 0 | 0 |
| 11/2/18 | 113972 | 0 | 0 |
| 11/1/18 | 111218 | 89 | 2405.77 |
| 11/1/18 | 113972 | 265 | 3000.39 |
| 10/31/18 | 111218 | 64 | 2957.71 |
| 10/31/18 | 113972 | 120 | 5650.91 |
+----------+--------+-------+---------+
Now there's more to the query after this, where I get into the projections and whatnot, but for the purposes of this question this is all you need, as it's where the CROSS JOIN exists.
How can I recreate these results without using a CROSS JOIN? In reality this query covers a much larger date range with far more data, takes hours and a great deal of compute to run, and I know CROSS JOINs should be avoided where possible.
Use the table of all dates as the "from" table and LEFT JOIN the data; this still returns each date.
SELECT
d.date
, t.id
, COALESCE(SUM(t.sales),0) sales
, COALESCE(SUM(t.revenue),0) revenue
FROM all_dates d
LEFT JOIN table_data t
ON d.date = t.date
WHERE d.date BETWEEN '2018-10-31' AND '2018-11-04'
GROUP BY
d.date
, t.id
Another alternative (to avoid the cross join) could be to use generate_series, but for this in Redshift I suggest this former answer. I'm a fan of generate_series, but if you already have a dates table I would probably stay with that (though this is based on what little I know about your query).
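In stock Postgres, the generate_series variant would look roughly like this (a sketch only; table and column names follow the answer above, and generate_series over dates returns timestamps, hence the casts):

SELECT d.date::date AS date,
       t.id,
       COALESCE(SUM(t.sales), 0)   AS sales,
       COALESCE(SUM(t.revenue), 0) AS revenue
FROM generate_series(DATE '2018-10-31',
                     DATE '2018-11-04',
                     INTERVAL '1 day') AS d(date)
LEFT JOIN table_data t ON t.date = d.date::date
GROUP BY 1, 2;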

postgres lag when data is missing

I have data on baseball players' annual salaries, with some years missing. What I would like to do is calculate the min, max, and average change in salary from the prior year, across all players, for each year.
For example, the data looks like this in the table 'salaries':
| playerid | yearid | salary |
| a        | 2016   | 10000  |
| b        | 2016   | 5000   |
| a        | 2015   | 9000   |
| b        | 2015   | 3000   |
| a        | 2014   | 3000   |
| b        | 2014   | 15000  |
| a        | 2010   | 1000   |
As you can see, player a has yearly changes of 1k and 6k, and player b has yearly changes of 2k and -12k. So I would like a select statement that produces:
| yearid | min change | max change | avg change |
| 2016   | 1k         | 2k         | 1.5k       |
| 2015   | -12k       | 6k         | -3k        |
Is there a way to do this?
My lag function unfortunately captured the difference between 2014 and 2010 for playerid a, which is obviously wrong. I couldn't figure out how to apply the lag function only when the previous row's yearid is exactly one less than the current row's yearid.
Any suggestions would be greatly appreciated.
Just use the previous year for the filtering:
select yearid, min(salary - prev_salary), max(salary - prev_salary),
       avg(salary - prev_salary)
from (select s.*,
             lag(s.salary) over (partition by s.playerid order by yearid) as prev_salary,
             lag(s.yearid) over (partition by s.playerid order by yearid) as prev_yearid
      from salaries s
     ) s
where prev_yearid = yearid - 1
group by yearid;
Or, you can just use a join:
select s.yearid, . . .
from salaries s join
salaries sp
on sp.playerid = s.playerid and sp.yearid = s.yearid - 1
group by s.yearid;
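Filled in, the join version might look like this (the elided select list is assumed to mirror the lag() version above):

select s.yearid,
       min(s.salary - sp.salary) as min_change,
       max(s.salary - sp.salary) as max_change,
       avg(s.salary - sp.salary) as avg_change
from salaries s
join salaries sp
  on sp.playerid = s.playerid
 and sp.yearid = s.yearid - 1
group by s.yearid;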

How to list the train operators that use the second oldest trains (PostgreSQL)

train_operators:
| train_operator_id | name   |
------------------------------
| 1                 | Virgin |
| 2                 | First  |
journeys:
| journey_id | train_operator | train_type |
--------------------------------------------
| 1          | 2              | 2          |
| 2          | 2              | 1          |
| 3          | 1              | 3          |
| 4          | 1              | 2          |
train_types:
| train_type_id | date_made  |
------------------------------
| 1             | 1999-02-15 |
| 2             | 2001-03-11 |
| 3             | 2000-12-05 |
How would you write a query to find all the train operators that use the second-oldest type of train?
With the given schema the query should return just Virgin, since it is the only train operator that uses the second-oldest train type.
Try this:
select distinct train_operator
from journeys
inner join (select * from train_types order by date_made limit 1 offset 1) sectrain
        on sectrain.train_type_id = journeys.train_type;
You're into the UK Rail Network are you? I used to work for Funkwerk IT, who in turn used to provide the timetable planning software for Network Rail...
It can be pretty easy using the power of window functions in pg:
SELECT DISTINCT train_operator_id,
name
FROM (SELECT t.train_operator_id,
t.name,
Rank() OVER (ORDER BY tt.date_made) AS rank
FROM train_operators AS t
JOIN journeys AS j
ON j.train_operator = t.train_operator_id
JOIN train_types AS tt
ON tt.train_type_id = j.train_type) AS q
WHERE rank = 2;
http://sqlfiddle.com/#!12/98816/8
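A caveat on the Rank() version, offered as an observation rather than a fix to the fiddle: the rank is computed over journey rows, so if more than one journey used the oldest train type, the second-oldest type would be ranked 3 and WHERE rank = 2 would return nothing. Dense_Rank() numbers distinct date_made values consecutively and avoids that:

SELECT DISTINCT train_operator_id,
       name
FROM (SELECT t.train_operator_id,
             t.name,
             -- Dense_Rank: repeated train types cannot push the
             -- second-oldest type past rank 2
             Dense_Rank() OVER (ORDER BY tt.date_made) AS rnk
      FROM train_operators AS t
      JOIN journeys AS j
        ON j.train_operator = t.train_operator_id
      JOIN train_types AS tt
        ON tt.train_type_id = j.train_type) AS q
WHERE rnk = 2;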
-- "to" is a reserved word in Postgres, so the operator alias is renamed;
-- distinct avoids duplicates when an operator has several matching journeys
select distinct tro.name
from
    train_operators tro
    inner join
    journeys j on tro.train_operator_id = j.train_operator
where
    j.train_type = (
        select train_type_id
        from train_types
        order by date_made
        limit 1 offset 1
    );