Postgres - Update running count whenever row meets a certain condition

I have a table with the following entries:
id  price  quantity
1   10       75
2   10       75
3   10     -150
4   10       75
5   10      -75
What I need to do is update each row with a number that counts how many times the running total has hit 0 before that row, plus 1. In the above example, the cumulative totals would be:
id  cum_total
1    750
2   1500
3      0
4    750
5      0
Desired result:
id  price  quantity  seq
1   10       75      1
2   10       75      1
3   10     -150      1
4   10       75      2
5   10      -75      2
I'm now lost in a spiral of CTEs and window functions and figured I'd ask the experts.
Thanks in advance :-)

Here is one option using analytic functions:
WITH cte AS (
    SELECT *,
           CASE WHEN SUM(price * quantity) OVER (ORDER BY id) = 0
                THEN 1 ELSE 0 END AS price_sum
    FROM yourTable
),
cte2 AS (
    SELECT *, LAG(price_sum, 1, 0) OVER (ORDER BY id) AS price_sum_lag
    FROM cte
)
SELECT id, price, quantity,
       1 + SUM(price_sum_lag) OVER (ORDER BY id) AS cumulative_total
FROM cte2
ORDER BY id;
Demo
You may try running each CTE in succession to see how the logic is working.
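If you want to replay it locally, here is a minimal setup with the question's sample data (the table name yourTable is taken from the answer above):
CREATE TABLE yourTable (id int, price int, quantity int);
INSERT INTO yourTable VALUES
    (1, 10, 75),
    (2, 10, 75),
    (3, 10, -150),
    (4, 10, 75),
    (5, 10, -75);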

With window functions:
SELECT id, price, quantity,
       coalesce(
           sum(CASE WHEN iszero THEN 1 ELSE 0 END)
               OVER (ORDER BY id
                     ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING),
           0
       ) + 1 AS batch
FROM (SELECT id, price, quantity,
             sum(price * quantity) OVER (ORDER BY id) = 0 AS iszero
      FROM mytable) AS subq;
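As a side note, the conditional sum can also be written with Postgres's FILTER clause, and since count(*) returns 0 (not NULL) over an empty frame, the coalesce drops out. A sketch, assuming the same mytable:
SELECT id, price, quantity,
       1 + count(*) FILTER (WHERE iszero)
               OVER (ORDER BY id
                     ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS batch
FROM (SELECT id, price, quantity,
             sum(price * quantity) OVER (ORDER BY id) = 0 AS iszero
      FROM mytable) AS subq;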

Related

Unable to calculate compound interest in PostgreSQL

I have a table table1 which contains the details of a depositor, like:
Depositor  Deposit_Amount  Deposit_Date  Maturity_Date  Tenure  Rate
A          25000           2021-08-10    2022-08-10     12      10%
I have another table table2 which contains the interest due dates:
Interest_Due_Date
2021-09-30
2021-12-31
2022-03-31
2022-06-30
2022-08-10
My code is:
with recursive recur (n, start_bal, days, principle, interest, end_bal) as
(
    select sno, deposit_amount, rate, days,
           deposit_amount * (((rate::decimal(18,2)) / 100) / 365) * days as interest,
           deposit_amount + (deposit_amount * (((rate::decimal(18,2)) / 100) / 365) * days) as end_bal
    from (
        select sno,
               coalesce(date_part('day', deposit_date::timestamp - lag(deposit_date::timestamp) over
                   (order by sno asc rows between unbounded preceding and current row)), 0) as days,
               deposit_date, deposit_amount, rate
        from (
            select row_number() over (order by deposit_date) as sno,
                   deposit_date,
                   deposit_amount,
                   rate
            from (
                select t1.deposit_date, t1.deposit_amount, t1.rate
                from table1 t1
                union all
                select t2.Interest_Due_Date as idate, 0 as depo_amount, 0 as rate
                from table2 t2
                order by deposit_date
            ) dep
        ) calc
    ) b
    where sno = 1
    union all
    select b.sno, b.end_bal, b.days, b.prin_bal,
           (coalesce(a.end_bal, 0)) * (((b.rate) / 100) / 365) * b.days as interest_NEW,
           coalesce(a.end_bal, 0) + ((a.end_bal) * (((calc.rate) / 100) / 365) * calc.days) as end_bal_NEW
    from b, recur as a
    where calc.sno = a.n + 1
)
select * from recur;
"Every time when i try to execute the query its showing an error 'relation 'b' does not exist"
...
The result table should be:
Deposit Amount  Date        Days  Interest  Total Amount
25000           2021-08-10   0      0       25000
0               2021-09-30  51    349.32    25349.32
0               2021-12-31  92    638.94    25988.26
0               2022-03-31  90    640.81    26629.06
0               2022-06-30  91    663.90    27292.97
0               2022-08-10  41    306.58    27599.54
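The error occurs because b is only a subquery alias inside the anchor member; neither b nor calc is a relation the recursive member can see, so "from b" fails. One way around this is to number the periods in their own CTE and let the recursive member join to that. A minimal sketch, assuming table1 holds a single deposit row and rate is stored as a plain number (10):
WITH RECURSIVE dates AS (
    -- one row per period boundary: the deposit date plus every due date
    SELECT ROW_NUMBER() OVER (ORDER BY d) AS sno, d
    FROM (
        SELECT deposit_date AS d FROM table1
        UNION ALL
        SELECT interest_due_date FROM table2
    ) u
),
recur AS (
    -- anchor: the deposit itself
    SELECT s.sno, s.d, 0 AS days, 0::numeric AS interest,
           t.deposit_amount::numeric AS end_bal
    FROM dates s CROSS JOIN table1 t
    WHERE s.sno = 1
    UNION ALL
    -- each step compounds the previous balance over the elapsed days
    SELECT n.sno, n.d, n.d - p.d,
           round(p.end_bal * t.rate::numeric / 100 / 365 * (n.d - p.d), 2),
           round(p.end_bal * (1 + t.rate::numeric / 100 / 365 * (n.d - p.d)), 2)
    FROM recur p
    JOIN dates n ON n.sno = p.sno + 1
    CROSS JOIN table1 t
)
SELECT * FROM recur ORDER BY sno;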

Taking N-samples from each group in PostgreSQL

I have a table with a column named id; the data looks like this:
id   value 1  value 2  value 3
1    244      550      1000
1    251      551       700
1    540       60      1200
...  ...      ...       ...
2     19      744      2000
2     10      903       100
2     44      231       600
2    120      910      1100
...  ...      ...       ...
I want to take 50 sample rows per id, but if a group has fewer than 50 rows, simply take the entire set of data points.
For example, I would like a maximum of 50 data points randomly selected from id = 1, id = 2, etc.
I cannot find any previous questions similar to this, but I have taken a stab at logically working through a solution where I iterate over the ids, union the queries together, and limit each to 50:
SELECT * FROM (SELECT * FROM schema.table AS tbl WHERE tbl.id = X LIMIT 50) UNION ALL;
But it's obvious that this type of solution cannot work: UNION ALL requires spelling out a query for each id, and I do not have a list of id values to use in place of X in tbl.id = X.
Is there a way to accomplish this by gathering that list of unique id values and union all results or is there a more optimal way this could be done?
If you want to select a random sample for each id, then you need to randomize the rows somehow. Here is a way to do it:
select * from (
    select *, row_number() over (partition by id order by random()) as u
    from schema.table
) as a
where u <= 50;
Example (limiting to 3, with a row number within each id so you can see the selection randomness):
Setup
DROP TABLE IF EXISTS foo;
CREATE TABLE foo
(
    id int,
    value1 int,
    idrow int
);
INSERT INTO foo
select 1 as id, (1000*random())::int as value1, generate_series(1, 100) as idrow
union all
select 2 as id, (1000*random())::int as value1, generate_series(1, 100) as idrow
union all
select 3 as id, (1000*random())::int as value1, generate_series(1, 100) as idrow;
Selection
select * from (
    select *, row_number() over (partition by id order by random()) as u
    from foo
) as a
where u <= 3;
Output:
id  value1  idrow  u
1   542      6     1
1    24     86     2
1   155     74     3
2   505     95     1
2   100     46     2
2   422     33     3
3   966     88     1
3   747     89     2
3   664     19     3
In case you are looking to get 50 (or fewer) rows from each group of IDs, you can use windowing.
From the question: "I want to take 50 sample rows per id, but if a group has fewer than 50 rows, simply take the entire set of data points."
Query:
with data as (
    select row_number() over (partition by id order by random()) rn, *
    from table_name)
select * from data where rn <= 50 order by id;
Fiddle.
Your description of trying to get the UNION ALL without specifying all the branches ahead of time is aiming for a LATERAL join. And that is one way to solve the problem. But unless you have a table of all distinct ids, you would have to compute one on the fly. For example (using the same fiddle as Pankaj used):
with uniq as (select distinct id from test)
select foo.* from uniq cross join lateral
(select * from test where test.id=uniq.id order by random() limit 3) foo
This could be either slower or faster than the Window Function method, depending on your system and your data and your indexes. In my hands, it was quite a bit faster even with the need to dynamically compute the list of distinct ids.
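For reference, if such a table of distinct ids already exists (hypothetical name ids below, one row per id), the CTE drops out entirely:
SELECT foo.*
FROM ids
CROSS JOIN LATERAL (
    SELECT *
    FROM test
    WHERE test.id = ids.id
    ORDER BY random()
    LIMIT 3
) foo;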

PostgreSQL cumulative sum max condition

I have the following table:
account
size id name
100 1 John
200 2 Mary
300 3 Jane
400 4 Anne
100 5 Mike
600 6 Joanne
I want to partition the rows, in id order, into groups where the sum of size within each group is <= 600.
Expected result:
account
group size id name
1 100 1 John
1 200 2 Mary
1 300 3 Jane
2 400 4 Anne
2 100 5 Mike
3 600 6 Joanne
I don't know how to do the partition and add the condition.
I cannot think of how to do this without recursion. I left the running_total in the result to make it easier to follow:
with recursive rns as ( -- Assign row numbers as rn in case of gaps in id
select *, row_number() over (order by id) as rn
from account
), sumgrp as ( -- Start with first row
select size, id, name, rn, 1 as grp, size as running_total
from rns
where rn = 1
union all
select n.size, n.id, n.name, n.rn,
case -- Increment the grp when running_total exceeds 600
when p.running_total + n.size > 600 then p.grp + 1
else p.grp
end as grp,
case -- Reset the running_total when it exceeds 600
when p.running_total + n.size > 600 then n.size
else p.running_total + n.size
end as running_total
from sumgrp p
join rns n on n.rn = p.rn + 1
)
select grp, size, id, name, running_total
from sumgrp
order by id;
db<>fiddle here
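A minimal setup to try it with the question's sample data (column types assumed):
CREATE TABLE account (size int, id int, name text);
INSERT INTO account VALUES
    (100, 1, 'John'),
    (200, 2, 'Mary'),
    (300, 3, 'Jane'),
    (400, 4, 'Anne'),
    (100, 5, 'Mike'),
    (600, 6, 'Joanne');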

Select rows with second highest value for each ID repeated multiple times

Id values
1 10
1 20
1 30
1 40
2 3
2 9
2 0
3 14
3 5
3 7
Answer should be
Id values
1 30
2 3
3 7
I tried as below:
Select distinct
    id,
    (select max(values)
     from table
     where values not in (select max(values) from table)
    )
You need the row_number window function. This adds a column with a row number within each group (in your case, the ids). In a subquery you can then ask for the second row of each group.
demo:db<>fiddle
SELECT
id, values
FROM (
SELECT
*,
row_number() OVER (PARTITION BY id ORDER BY values DESC)
FROM
table
) s
WHERE row_number = 2
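One caveat: if the same value can occur twice within an id, row_number() = 2 can return a tie of the highest value rather than the second-highest distinct value. A sketch using dense_rank() instead (values and table are reserved words in Postgres, so they are quoted here):
SELECT id, "values"
FROM (
    SELECT *,
           dense_rank() OVER (PARTITION BY id ORDER BY "values" DESC) AS rnk
    FROM "table"
) s
WHERE rnk = 2;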

T-SQL A problem with SELECT TOP (case [...])

I have a query like this (as you can see, I'd like to retrieve 50% of the total rows, or the first 100 rows, etc.):
-- @AllRowsSelectType is INT
SELECT TOP (
    CASE @AllRowsSelectType
        WHEN 1 THEN 100 PERCENT
        WHEN 2 THEN 50 PERCENT
        WHEN 3 THEN 25 PERCENT
        WHEN 4 THEN 33 PERCENT
        WHEN 5 THEN 50
        WHEN 6 THEN 100
        WHEN 7 THEN 200
    END
) ROW_NUMBER() OVER (ORDER BY [id]) AS row_num, a, b, c, etc.
Why do I get the error "Incorrect syntax near the keyword 'PERCENT'." on the "WHEN 1 [...]" line?
The syntax for TOP is:
TOP (expression) [PERCENT]
[ WITH TIES ]
The reserved keyword PERCENT cannot be included in the expression. Instead you can run two different queries: one for when you want PERCENT and another for when you don't.
If you need this to be one query you can run both queries and use UNION ALL to combine the results:
SELECT TOP (
    CASE @AllRowsSelectType
        WHEN 1 THEN 100
        WHEN 2 THEN 50
        WHEN 3 THEN 25
        WHEN 4 THEN 33
        ELSE 0
    END) PERCENT
    ROW_NUMBER() OVER (ORDER BY [id]) AS row_num, a, b, c, ...
UNION ALL
SELECT TOP (
    CASE @AllRowsSelectType
        WHEN 5 THEN 50
        WHEN 6 THEN 100
        WHEN 7 THEN 200
        ELSE 0
    END)
    ROW_NUMBER() OVER (ORDER BY [id]) AS row_num, a, b, c, ...
You're also mixing two different kinds of TOP: percentages and absolute row counts. Another way to handle that is to compute the row limit up front:
DECLARE @ROW_LIMIT int;

IF @AllRowsSelectType < 5
    SELECT @ROW_LIMIT = COUNT(*) / @AllRowsSelectType FROM myTable  -- 100%, 50%, 33%, 25%
ELSE
    SELECT @ROW_LIMIT = 50 * POWER(2, @AllRowsSelectType - 5);      -- 50, 100, 200...

WITH OrderedMyTable AS
(
    SELECT *, ROW_NUMBER() OVER (ORDER BY id) AS rowNum
    FROM myTable
)
SELECT * FROM OrderedMyTable
WHERE rowNum <= @ROW_LIMIT;
You could do:
SELECT TOP (CASE @FilterType WHEN 2 THEN 50 WHEN 3 THEN 25 WHEN 4 THEN 33 ELSE 100 END) PERCENT *
FROM (
    SELECT TOP (CASE @FilterType WHEN 5 THEN 50 WHEN 6 THEN 100 WHEN 7 THEN 200 ELSE 2147483647 END) *
    FROM <your query here>
) t
which may be easier to read: the inner TOP handles the fixed row counts (passing everything through via the INT maximum otherwise), and the outer TOP handles the percentages (passing everything through via 100 PERCENT otherwise).