I am using the following query to get column totals by week for a given date range...
SELECT to_char(week_start - 1, 'dd-MON-yy') week_end,
       run_qty,
       acc_qty,
       CASE WHEN run_qty <> 0 THEN ROUND(acc_qty/run_qty, 4) ELSE 0 END pct
FROM (SELECT week_start, SUM(run_qty) run_qty, SUM(acc_qty) acc_qty
      FROM (SELECT TRUNC(NEXT_DAY(TRUNC(created_date), 'Monday')) week_start,
                   NVL(SUM(run_qty), 0) run_qty,
                   NVL(SUM(accepted_qty), 0) acc_qty
            FROM schema.table_a
            WHERE (some conditions)
              AND created_date BETWEEN :FromDate AND :ToDate
            GROUP BY TRUNC(NEXT_DAY(TRUNC(created_date), 'Monday'))
            UNION
            SELECT TRUNC(NEXT_DAY(TRUNC(to_date(zday, 'dd-mon-rrrr')), 'Monday')) week_start,
                   0 run_qty,
                   0 acc_qty
            FROM (SELECT :FromDate + (level - 1) zday
                  FROM dual
                  CONNECT BY LEVEL <= (:ToDate - :FromDate)))
      GROUP BY week_start
      ORDER BY week_start desc)
With input parameters of :FromDate = 4/31/2015 and :ToDate = 6/25/2015, this gives me the last day of each week (a week being defined as Monday through Sunday), the run total for each week, the accepted total for each week, and the percentage of the run total accepted for each week, in a result set that looks like this...
28-JUN-2015 0 0 0
21-JUN-2015 100 50 0.5
14-JUN-2015 50 40 0.8
07-JUN-2015 0 0 0
31-MAY-2015 0 0 0
24-MAY-2015 50 40 0.8
17-MAY-2015 80 50 0.625
10-MAY-2015 60 20 0.3333
03-MAY-2015 0 0 0
Can I use a similar approach to calculate a running total of the run and accepted quantities, and the percentage of quantity accepted, over the date range provided? (This would give me a result set that looks like the following...)
28-JUN-2015 340 200 0.5882
21-JUN-2015 340 200 0.5882
14-JUN-2015 240 150 0.625
07-JUN-2015 190 110 0.5789
31-MAY-2015 190 110 0.5789
24-MAY-2015 190 110 0.5789
17-MAY-2015 140 70 0.5
10-MAY-2015 60 20 0.3333
03-MAY-2015 0 0 0
I figured it out... in case anyone runs into this thread by searching for a solution to a similar issue, I wrapped the query above inside...
SELECT week_end, run_qty, acc_qty, pct
FROM (query above)
MODEL
DIMENSION BY(row_number() OVER (ORDER BY to_date(week_end, 'dd-MON-yy') asc) rec)
MEASURES(week_end, run_qty, acc_qty, pct)
RULES(
run_qty[rec > 1] ORDER BY rec = run_qty[cv()] + run_qty[cv() - 1],
acc_qty[rec > 1] ORDER BY rec = acc_qty[cv()] + acc_qty[cv() - 1],
pct[rec >= 0] ORDER BY rec = CASE WHEN run_qty[cv()] <> 0 THEN ROUND(acc_qty[cv()]/run_qty[cv()], 4) ELSE 0 END
)
ORDER BY to_date(week_end, 'dd-MON-yy') desc
...and it runs quickly, giving me the expected results.
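An alternative sketch that avoids the MODEL clause altogether uses analytic SUM() with its default cumulative window (here "weekly" stands for the per-week aggregation above, i.e. the subquery that returns week_start, run_qty and acc_qty per week):

SELECT TO_CHAR(week_start - 1, 'dd-MON-yy') week_end,
       SUM(run_qty) OVER (ORDER BY week_start) run_qty,
       SUM(acc_qty) OVER (ORDER BY week_start) acc_qty,
       CASE
         WHEN SUM(run_qty) OVER (ORDER BY week_start) <> 0
         THEN ROUND(SUM(acc_qty) OVER (ORDER BY week_start)
                    / SUM(run_qty) OVER (ORDER BY week_start), 4)
         ELSE 0
       END pct
FROM weekly  -- the per-week totals from the query above
ORDER BY week_start DESC

The default frame for SUM() OVER (ORDER BY week_start) is a running total up to the current row, which is the same cumulative behaviour the MODEL rules produce.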
Related
I have a table table1 which contains the details of a depositor, like:
Depositor  Deposit_Amount  Deposit_Date  Maturity_Date  Tenure  Rate
A          25000           2021-08-10    2022-08-10     12      10%
I have another table table2 which contains the interest due dates:
Interest_Due_Date
2021-09-30
2021-12-31
2022-03-31
2022-06-30
2022-08-10
My Code is:
with recursive recur (n, start_bal, days, principle, interest, end_bal) as
(
    select sno, deposit_amount, rate, days,
           deposit_amount * (((rate::decimal(18,2))/100)/365)*days as interest,
           deposit_amount + (deposit_amount * (((rate::decimal(18,2))/100)/365)*days) as end_bal
    from (
        SELECT sno,
               COALESCE(DATE_PART('day', deposit_date::TIMESTAMP - lag(deposit_date::TIMESTAMP) over
                   (ORDER BY sno ASC rows BETWEEN UNBOUNDED PRECEDING AND CURRENT row)), 0) AS days,
               deposit_date, deposit_amount, rate
        FROM (
            SELECT ROW_NUMBER () OVER (ORDER BY deposit_date) AS sno,
                   deposit_date,
                   deposit_amount,
                   rate
            FROM (
                SELECT t1.deposit_date, t1.deposit_amount, t1.rate
                FROM table1 t1
                UNION ALL
                SELECT t2.Interest_Due_Date AS idate, 0 as depo_amount, 0 as rate
                FROM table2 t2
                ORDER BY deposit_date
            ) dep
        ) calc
    ) b
    where sno = 1

    union all

    select b.sno, b.end_bal, b.days, b.prin_bal,
           (coalesce(a.end_bal,0)) * (((b.rate)/100)/365)*b.days as interest_NEW,
           coalesce(a.end_bal,0) + ((a.end_bal) * (((calc.rate)/100)/365)*calc.days) as end_bal_NEW
    from b, recur as a
    where calc.sno = a.n+1
)
select * from recur
"Every time when i try to execute the query its showing an error 'relation 'b' does not exist"
...
The result table should be:
Deposit Amount  Date        Days  Interest  Total Amount
25000           2021-08-10  0     0         25000
0               2021-09-30  51    349.32    25349.32
0               2021-12-31  92    638.94    25988.26
0               2022-03-31  90    640.81    26629.06
0               2022-06-30   91    663.90    27292.97
0               2022-08-10  41    306.58    27599.54
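For reference, here is a minimal sketch (PostgreSQL) of how the running balance could be carried through a recursive CTE. It assumes a single row in table1, that rate is stored as a plain number (10 for 10%), and that the date columns are of type date; the rounding policy is also an assumption, so penny-level figures may differ slightly from the table above. It is not a drop-in fix for the query in the question:

WITH RECURSIVE dates AS (
    -- one row per date: the deposit date plus every interest due date,
    -- with the day gap to the previous date
    SELECT ROW_NUMBER() OVER (ORDER BY d) AS sno,
           d,
           d - LAG(d) OVER (ORDER BY d) AS days
    FROM (SELECT deposit_date AS d FROM table1
          UNION ALL
          SELECT interest_due_date FROM table2) x
),
recur (sno, d, days, interest, end_bal) AS (
    -- anchor: the deposit itself, no interest accrued yet
    SELECT sno, d, 0, 0::numeric,
           (SELECT deposit_amount FROM table1)::numeric
    FROM dates
    WHERE sno = 1
    UNION ALL
    -- each step: simple interest on the previous balance for the elapsed days
    SELECT n.sno, n.d, n.days,
           ROUND(r.end_bal * (SELECT rate FROM table1) / 100 / 365 * n.days, 2),
           ROUND(r.end_bal + r.end_bal * (SELECT rate FROM table1) / 100 / 365 * n.days, 2)
    FROM recur r
    JOIN dates n ON n.sno = r.sno + 1
)
SELECT * FROM recur;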
I have a table with the following entries in it:
id price quantity
1. 10 75
2. 10 75
3. 10 -150
4. 10 75
5. 10 -75
What I need to do is to update each row with a number that is the number of times the running total has been 0. In the above example, the cumulative totals would be
id. cum_total
1. 750
2. 1500
3. 0
4. 750
5. 0
Desired result
id price quantity seq
1. 10 75 1
2. 10 75 1
3. 10 -150 1
4. 10 75 2
5. 10 -75 2
I'm now lost in a spiral of CTEs and window functions and figured I'd ask the experts.
Thanks in advance :-)
Here is one option using analytic functions:
WITH cte AS (
SELECT *, CASE WHEN SUM(price*quantity) OVER (ORDER BY id) = 0 THEN 1 ELSE 0 END AS price_sum
FROM yourTable
),
cte2 AS (
SELECT *, LAG(price_sum, 1, 0) OVER (ORDER BY id) price_sum_lag
FROM cte
)
SELECT id, price, quantity, 1 + SUM(price_sum_lag) OVER (ORDER BY id) cumulative_total
FROM cte2
ORDER BY id;
You may try running each CTE in succession to see how the logic is working.
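Tracing the five sample rows through each step makes the logic clearer:

id  running total  price_sum  price_sum_lag  1 + SUM(price_sum_lag)
1   750            0          0              1
2   1500           0          0              1
3   0              1          0              1
4   750            0          1              2
5   0              1          0              2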
With window functions:
SELECT id, price, quantity,
coalesce(
sum(CASE WHEN iszero THEN 1 ELSE 0 END)
OVER (ORDER BY id
ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING),
0
) + 1 AS batch
FROM (SELECT id, price, quantity,
sum(price * quantity) OVER (ORDER BY id) = 0 AS iszero
FROM mytable) AS subq;
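In this version the ROWS ... AND 1 PRECEDING frame plays the same role as the LAG in the answer above: the row on which the running total reaches zero does not count itself, so it stays in the batch it closes, and only the following row starts a new batch.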
I have a table
t:flip `dt`id`data ! (`d1`d1`d2`d2`d3`d3; 0 1 0 1 0 1; 100 200 100 300 0 200)
and from some other query, I have a table
s:flip `dt`id ! (`d1`d2`d2`d3; 0 0 1 1)
How can I select from t such that it returns all entries where the combination of dt and id are in s, so return
flip `dt`id`data ! (`d1`d2`d2`d3; 0 0 1 1; 100 100 300 200)
You can use in for table-to-table operations, so just create a table from the required columns of t and use in to search s for the corresponding records. As long as the column names and types of the left and right arguments are the same, in will produce a boolean list as expected.
q)select from t where ([]dt;id) in s
dt id data
----------
d1 0 100
d2 0 100
d2 1 300
d3 1 200
I have a SQL problem (on Redshift) where, for each id in column id, I need to get the value of column index from the row with the maximum value in column final_score, and put this value in a new column fav_index. score2 at index n is the value of score1 at index n + 1 (for example, for id = abc1 and index = 0, where score1 = 10, score2 is the value of score1 at index = 1), and final_score is score2 minus score1.
It's easier if you look at the table score below. This table is the result of a SQL query which is shown further down.
id index score1 score2 final_score
abc1 0 10 20 10
abc1 1 20 45 25
abc1 2 45 (null) (null)
abc2 0 5 10 5
abc2 1 10 (null) (null)
abc3 0 50 30 -20
abc3 1 30 (null) (null)
So, the resulting table containing column fav_index should look like this:
id index score1 score2 final_score fav_index
abc1 0 10 20 10 0
abc1 1 20 45 25 1
abc1 2 45 (null) (null) 0
abc2 0 5 10 5 0
abc2 1 10 (null) (null) 0
abc3 0 50 30 -20 0
abc3 1 30 (null) (null) 0
Below is the script to generate table score from table story:
select
m.id,
m.index,
max(m.max) as score1,
fmt.score2,
round(fmt.score2 - max(m.max), 1) as final_score
from
(select
sv.id,
case when sv.story_number % 2 = 0 then cast(sv.story_number / 2 - 1 as int) else cast(floor(sv.story_number/2) as int) end as index,
max(sv.score1)
from
story as sv
group by
sv.id,
index,
sv.score1
order by
sv.id,
index
) as m
left join
(select
sv.id,
case when sv.story_number % 2 = 0 then cast(sv.story_number / 2 - 1 as int) else cast(floor(sv.story_number/2) as int) end as index,
max(score1) as score2
from
story as sv
group by
id,
index
) as fmt
on
m.id = fmt.id
and
m.index = fmt.index - 1
group by
m.id,
m.index,
fmt.score2
Table story is as below:
id story_number score1
abc1 1 10
abc1 2 10
abc1 3 20
abc1 4 20
abc1 5 45
abc1 6 45
The only solution I can think of is to do something like,
select id, max(final_score) from score group by id
and then join it back to the long script above (which was used to generate table score). I really want to avoid writing such a long script to get just 1 extra column of information that I need.
Is there a better way to do this?
Thank you!
Update: an answer in MySQL is also accepted. Thanks!
After spending more hours on this and asking people around, I finally figured out a solution by referring to the PostgreSQL window function documentation: https://www.postgresql.org/docs/9.1/static/tutorial-window.html
I basically added two SELECT statements at the top and one WHERE clause at the very bottom. The WHERE clause takes care of the rows where final_score is null, because otherwise the rank() function ranks them as 1.
My code then becomes:
select
id, index, final_score, rank, case when rank = 1 then index else null end as fav_index
from
(select
id, index, final_score, rank() over (partition by id order by final_score desc)
from
(select
m.id,
m.index,
max(m.max) as score1,
fmt.score2,
round(fmt.score2 - max(m.max), 1) as final_score
from
(select
sv.id,
case when sv.story_number % 2 = 0 then cast(sv.story_number / 2 - 1 as int) else cast(floor(sv.story_number/2) as int) end as index,
max(sv.score1)
from
story as sv
group by
sv.id,
index,
sv.score1
order by
sv.id,
index
) as m
left join
(select
sv.id,
case when sv.story_number % 2 = 0 then cast(sv.story_number / 2 - 1 as int) else cast(floor(sv.story_number/2) as int) end as index,
max(score1) as score2
from
story as sv
group by
id,
index
) as fmt
on
m.id = fmt.id
and
m.index = fmt.index - 1
group by
m.id,
m.index,
fmt.score2)
where
final_score is not null)
And the result is as follows:
id index final_score rank fav_index
abc1 0 10 2 (null)
abc1 1 25 1 1
abc2 0 5 1 0
abc3 0 -20 1 0
The result is slightly different from what I stated in the question; however, the fav_index for each id is identified, which is what I really needed. Hope this helps someone. Cheers
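For completeness, a more compact sketch of the same idea: LEAD() derives score2 from the next index's score1, and a windowed MAX() marks the per-id favourite. This assumes the story table and column names described in the question and has not been tuned for Redshift:

WITH score AS (
    SELECT id,
           CASE WHEN story_number % 2 = 0
                THEN CAST(story_number / 2 - 1 AS int)
                ELSE CAST(FLOOR(story_number / 2) AS int)
           END AS index,
           MAX(score1) AS score1
    FROM story
    GROUP BY 1, 2
),
scored AS (
    SELECT id,
           index,
           score1,
           LEAD(score1) OVER (PARTITION BY id ORDER BY index) AS score2,
           LEAD(score1) OVER (PARTITION BY id ORDER BY index) - score1 AS final_score
    FROM score
)
SELECT id,
       index,
       score1,
       score2,
       final_score,
       CASE WHEN final_score IS NOT NULL
             AND final_score = MAX(final_score) OVER (PARTITION BY id)
            THEN index
            ELSE 0
       END AS fav_index
FROM scored
ORDER BY id, index;

If two indexes tie on the maximum final_score for an id, both rows would be flagged here, just as they would both get rank 1 in the rank() approach.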
So this is my table in the database:
Worker X has this work result between '2015-06-01' and '2015-06-06':
What I want to do is count the number of work days, with the condition that if (nb_heures + nb_heures_s) > 4 it counts as 1 day, but if (nb_heures + nb_heures_s) <= 4 it counts as 0.5 day.
So the result I should get from this table is 5.5 work days, not 6.
I tried this query but it's not working well:
SELECT
count(CASE WHEN (nb_heures + nb_heures_s) > 4 THEN 1 END) as full_day_work,
count(CASE WHEN (nb_heures + nb_heures_s) <= 4 THEN 0.5 END) as half_day_work
FROM pointage_full pf
WHERE date_pointage BETWEEN '2015-06-01' AND '2015-06-06'
AND pf.id_salarie = 5
How can I reach my objective?
COUNT(expr) always returns a bigint, as it simply returns the number of rows for which expr is not NULL.
You can use SUM instead:
SELECT SUM(CASE WHEN (nb_heures + nb_heures_s) > 4 THEN 1 ELSE 0.5 END) as number_of_days
FROM pointage_full pf
WHERE date_pointage BETWEEN '2015-06-01' AND '2015-06-06';
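As a sanity check: with six days in the range, a total of 5.5 implies five days over 4 hours and one day at or under 4 hours, so the expression sums to 5 * 1 + 1 * 0.5 = 5.5. Keep the pf.id_salarie = 5 condition from the question in the WHERE clause if the total should be restricted to a single worker.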